diff --git a/.gitattributes b/.gitattributes index 29f963e84530124b17c9c92296b76f15a3a682c3..ab2bf72b2dcb42e5ff9d71af7bcae01264b254fd 100644 --- a/.gitattributes +++ b/.gitattributes @@ -2421,3 +2421,26 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text 2024/XPSR_[[:space:]]Cross-modal[[:space:]]Priors[[:space:]]for[[:space:]]Diffusion-based[[:space:]]Image[[:space:]]Super-Resolution/9aa773ab-828c-4665-9dc9-95d2231056e8_origin.pdf filter=lfs diff=lfs merge=lfs -text 2024/YOLOv9_[[:space:]]Learning[[:space:]]What[[:space:]]You[[:space:]]Want[[:space:]]to[[:space:]]Learn[[:space:]]Using[[:space:]]Programmable[[:space:]]Gradient[[:space:]]Information/b0adb778-7a29-4e26-9c7a-8e25e5eccb8f_origin.pdf filter=lfs diff=lfs merge=lfs -text 2024/You[[:space:]]Only[[:space:]]Learn[[:space:]]One[[:space:]]Query_[[:space:]]Learning[[:space:]]Unified[[:space:]]Human[[:space:]]Query[[:space:]]for[[:space:]]Single-Stage[[:space:]]Multi-Person[[:space:]]Multi-Task[[:space:]]Human-Centric[[:space:]]Perception/4d5bac17-bf95-4309-88c2-e96014fb704f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/You[[:space:]]Only[[:space:]]Need[[:space:]]One[[:space:]]Step_[[:space:]]Fast[[:space:]]Super-Resolution[[:space:]]with[[:space:]]Stable[[:space:]]Diffusion[[:space:]]via[[:space:]]Scale[[:space:]]Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/ZeST_[[:space:]]Zero-Shot[[:space:]]Material[[:space:]]Transfer[[:space:]]from[[:space:]]a[[:space:]]Single[[:space:]]Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-Shot[[:space:]]Adaptation[[:space:]]for[[:space:]]Approximate[[:space:]]Posterior[[:space:]]Sampling[[:space:]]of[[:space:]]Diffusion[[:space:]]Models[[:space:]]in[[:space:]]Inverse[[:space:]]Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-Shot[[:space:]]Detection[[:space:]]of[[:space:]]AI-Generated[[:space:]]Images/6a7701df-63a3-43ae-9803-224606ec44ab_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-Shot[[:space:]]Image[[:space:]]Feature[[:space:]]Consensus[[:space:]]with[[:space:]]Deep[[:space:]]Functional[[:space:]]Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-Shot[[:space:]]Multi-Object[[:space:]]Scene[[:space:]]Completion/72685078-1b9b-4a60-bb08-b29f03303447_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-shot[[:space:]]Object[[:space:]]Counting[[:space:]]with[[:space:]]Good[[:space:]]Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/Zero-shot[[:space:]]Text-guided[[:space:]]Infinite[[:space:]]Image[[:space:]]Synthesis[[:space:]]with[[:space:]]LLM[[:space:]]guidance/b7f3f07b-6122-4084-adc4-821e20de6967_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/ZeroI2V_[[:space:]]Zero-Cost[[:space:]]Adaptation[[:space:]]of[[:space:]]Pre-Trained[[:space:]]Transformers[[:space:]]from[[:space:]]Image[[:space:]]to[[:space:]]Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/ZigMa_[[:space:]]A[[:space:]]DiT-style[[:space:]]Zigzag[[:space:]]Mamba[[:space:]]Diffusion[[:space:]]Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/ZipLoRA_[[:space:]]Any[[:space:]]Subject[[:space:]]in[[:space:]]Any[[:space:]]Style[[:space:]]by[[:space:]]Effectively[[:space:]]Merging[[:space:]]LoRAs/c9a0f3a4-ef1d-4bd3-99ed-57c2d35f2218_origin.pdf filter=lfs diff=lfs 
merge=lfs -text +2024/ZoLA_[[:space:]]Zero-Shot[[:space:]]Creative[[:space:]]Long[[:space:]]Animation[[:space:]]Generation[[:space:]]with[[:space:]]Short[[:space:]]Video[[:space:]]Model/653bba20-f03e-4bb4-94e8-07296b6d7dd9_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/cDP-MIL_[[:space:]]Robust[[:space:]]Multiple[[:space:]]Instance[[:space:]]Learning[[:space:]]via[[:space:]]Cascaded[[:space:]]Dirichlet[[:space:]]Process/165d7b70-a283-4374-8c59-9a8a7bb55138_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/denoiSplit_[[:space:]]a[[:space:]]method[[:space:]]for[[:space:]]joint[[:space:]]microscopy[[:space:]]image[[:space:]]splitting[[:space:]]and[[:space:]]unsupervised[[:space:]]denoising/4d7157e7-b28f-4fe8-88e9-5392223acfe8_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/iHuman_[[:space:]]Instant[[:space:]]Animatable[[:space:]]Digital[[:space:]]Humans[[:space:]]From[[:space:]]Monocular[[:space:]]Videos/8116c30e-fb66-4bd4-8d91-e7e770ee9b0c_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/iMatching_[[:space:]]Imperative[[:space:]]Correspondence[[:space:]]Learning/52436d50-7b7d-4378-ad1a-7282b4224777_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/iNeMo_[[:space:]]Incremental[[:space:]]Neural[[:space:]]Mesh[[:space:]]Models[[:space:]]for[[:space:]]Robust[[:space:]]Class-Incremental[[:space:]]Learning/1e31eb35-6525-408f-be9d-d6461b04908f_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/latentSplat_[[:space:]]Autoencoding[[:space:]]Variational[[:space:]]Gaussians[[:space:]]for[[:space:]]Fast[[:space:]]Generalizable[[:space:]]3D[[:space:]]Reconstruction/22495671-17e7-4373-87cc-ed552a87c60d_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/m&m’s_[[:space:]]A[[:space:]]Benchmark[[:space:]]to[[:space:]]Evaluate[[:space:]]Tool-Use[[:space:]]for[[:space:]]multi-step[[:space:]]multi-modal[[:space:]]Tasks/9f62e0b8-39a3-4698-acd0-6531d74cbb9b_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/milliFlow_[[:space:]]Scene[[:space:]]Flow[[:space:]]Estimation[[:space:]]on[[:space:]]mmWave[[:space:]]Radar[[:space:]]Point[[:space:]]Cloud[[:space:]]for[[:space:]]Human[[:space:]]Motion[[:space:]]Sensing/f1c24061-ee95-4f9d-bbe0-dd04c062492e_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/nuCraft_[[:space:]]Crafting[[:space:]]High[[:space:]]Resolution[[:space:]]3D[[:space:]]Semantic[[:space:]]Occupancy[[:space:]]for[[:space:]]Unified[[:space:]]3D[[:space:]]Scene[[:space:]]Understanding/76f5359c-cecd-4b4b-9031-b1bec94188df_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/uCAP_[[:space:]]An[[:space:]]Unsupervised[[:space:]]Prompting[[:space:]]Method[[:space:]]for[[:space:]]Vision-Language[[:space:]]Models/cf6978ae-7550-49d7-b331-2105baa01ff5_origin.pdf filter=lfs diff=lfs merge=lfs -text +2024/∞-Brush_[[:space:]]Controllable[[:space:]]Large[[:space:]]Image[[:space:]]Synthesis[[:space:]]with[[:space:]]Diffusion[[:space:]]Models[[:space:]]in[[:space:]]Infinite[[:space:]]Dimensions/c437d968-2df6-450e-b551-8dd304e12d9f_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_content_list.json b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..426bd326d78cf6f17e38f9328e93dde827c7faf6 --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via 
Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_content_list.json @@ -0,0 +1,1819 @@ +[ + { + "type": "text", + "text": "You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation", + "text_level": 1, + "bbox": [ + 223, + 140, + 779, + 185 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Mehdi Noroozi, Isma Hadji, Brais Martinez, Adrian Bulat, and Georgios Tzimiropoulos", + "bbox": [ + 240, + 213, + 761, + 243 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Samsung AI Cambridge {m.noroozi,isma.hadji}@samsung.com", + "bbox": [ + 369, + 255, + 633, + 282 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. In this paper, we introduce YONOS-SR, a novel stable diffusion based approach for image super-resolution that yields state-of-the-art results using only a single DDIM step. Specifically, we propose a novel scale distillation approach to train our SR model. Instead of directly training our SR model on the scale factor of interest, we start by training a teacher model on a smaller magnification scale, thereby making the SR problem simpler for the teacher. We then train a student model for a higher magnification scale, using the predictions of the teacher as a target during the training. This process is repeated iteratively until we reach the target scale factor of the final model. The rationale behind our scale distillation is that the teacher aids the student diffusion model training by i) providing a target adapted to the current noise level rather than using the same target coming from ground truth data for all noise levels and ii) providing an accurate target as the teacher has a simpler task to solve. We empirically show that the distilled model significantly outperforms the model trained for high scales directly, especially with few steps during inference. Having a strong diffusion model that requires only one step allows us to freeze the U-Net and fine-tune the decoder on top of it. We show that the combination of spatially distilled U-Net and fine-tuned decoder outperforms state-of-the-art methods requiring 200 steps with only one single step. $^{1}$", + "bbox": [ + 261, + 325, + 738, + 616 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 643, + 375, + 660 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Diffusion models have shown impressive performance in various image generation tasks [22, 42], including image super-resolution (SR) [3, 24, 25, 32]. However, the large number of sequential denoising passes required by the sampling strategy results in extreme computational cost, even for stable diffusion-based models (SD) that operate in the latent space of an autoencoder. Recently, several approaches have been proposed to reduce the number of sampling steps [18, 26, 28, 29]. Unfortunately, such approaches usually compromise performance, especially for the lower number of steps.", + "bbox": [ + 212, + 679, + 787, + 800 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1 The code will be available here once all approvals are processed: https://github.com/SamsungLabs/yonos", + "bbox": [ + 217, + 810, + 785, + 839 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e0877620c059e60467c9cf464af9e74fd37a5664c311d4207ed7b9f81f8b29cd.jpg", + "image_caption": [ + "Fig. 1: Qualitative comparison for $\\times 4$ and $\\times 8$ magnifications. 
Each column shows top to bottom LR input image, 1 and 200 step SD-SR, 1-step YONOS-SR(ours). SD-SR represents the standard Stable Diffusion-based SR model. The 1-step SD-SR method lacks quality in terms of detailed textures compared to 200-steps of the same model; see building texture in the first column and hairs in the middle column. In contrast, our method outperforms 200-steps SD-SR with only one step, especially for $\\times 8$ magnification where SD-SR fails to recover the details even with 200 steps. Samples are taken from DIV2K validation set. Images are best seen in a display and zoomed in." + ], + "image_footnote": [], + "bbox": [ + 217, + 176, + 764, + 724 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 127 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Typically, diffusion-based models yield the best results on image patches of similar sizes to those seen during training (e.g. $64 \\times 64$ for SD [22]). On the other hand, super-resolution applications require operating in high-resolution settings, drastically exacerbating the computational issues of diffusion-based models. For example, a SR model that aims for a magnification of $\\times 4$ going from $256 \\times 256$ to $1024 \\times 1024$ requires dividing the input image into 16 patches of $64 \\times 64$ and running the model on each patch individually, making a large number of steps prohibitive for realistic use cases. Using state-of-the-art step-reduction strategy, such as more efficient samplers [18, 19, 28] can partially alleviate this issue but still falls widely short of practical needs. For example, going down to the target of 1 DDIM step results in a significant drop in performance compared to a typical model that does 200 inference steps, as shown in Fig. 1.", + "bbox": [ + 212, + 146, + 787, + 327 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "One differentiating characteristic of the super-resolution task is that it is conditioned on the low-resolution (LR) input image to yield the target high-resolution (HR) image. Unlike the task of text-to-image generation, which relies on text conditioning, the LR image provides closer content to the target HR image, especially at lower scale factors. Therefore, conditioning the diffusion model on the LR image at low-scale factors makes the task inherently simpler for the diffusion model. In this paper, we take advantage of this peculiarity and introduce a novel training strategy dubbed scale distillation. While typical diffusion-based SR methods train the model for super-resolution by conditioning directly on the LR image at the target scale factor, we instead propose a progressive training approach, where we start by training a model for lower scale factors (i.e. where the conditioning signal is closer to the target) and progressively increase to the target scale factor using the previously trained model as a teacher.", + "bbox": [ + 212, + 330, + 787, + 527 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "More specifically, instead of using the raw data to train a model for large scale factors, scale distillation obtains a rich and accurate supervisory signal from a teacher trained for a smaller scale factor. We first train a teacher that takes a less degraded image as input and, therefore, has an easier task to solve during training. 
Then, we train a model for a larger scale factor as a student while initializing it with the same weights as the teacher, which is now frozen. For a given time step during the training, we feed both teacher and student with the same noisy version of the HR image. However, we condition the teacher with the less degraded LR image (i.e. using the same scale that was used during teacher training), while we condition the student on the target (more degraded) LR image. We then use the teacher's prediction as a target to train the student.", + "bbox": [ + 212, + 531, + 787, + 696 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This training strategy has two direct advantages: i) Unlike typical training where the supervisory signal is somewhat ambiguous as the target is the same for all noise levels, our student receives its target from the teacher and is therefore adaptive to the noise level. ii) The target is more accurate, especially in terms of the finer detail, because the teacher takes a less degraded LR image as input.", + "bbox": [ + 212, + 700, + 787, + 776 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The proposed scale distillation approach allows the model to solve the SR task in fewer steps as we have simplified the task for the student. In fact, we show that models trained with our approach improve significantly when a few steps are used during the inference, e.g. one step, see Fig. 3. Therefore, a direct", + "bbox": [ + 212, + 779, + 787, + 840 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "advantage of the proposed approach is that fine-tuning the decoder directly on top of the diffusion model becomes computationally tractable due to the single inference step required. Taking advantage of this fine-tuning, we show that You Only Need One Step (YONOS)-SR outperforms state-of-the-art diffusion-based SR methods that require a large number (e.g. 200) of inference steps.", + "bbox": [ + 212, + 146, + 782, + 222 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In summary, our contributions are threefold: I) We introduce scale distillation to train SD models with a more accurate and fine supervisory signal for image super-resolution tasks. II) We show that our proposed scale distillation strategy yields more efficient SD models that allow for directly fine-tuning the decoder on top of a frozen one-step diffusion model. III) We show that combining scale distillation followed by decoder fine-tuning yields state-of-the-art results on the SR task, even at high magnification factors, while requiring only one step.", + "bbox": [ + 212, + 222, + 784, + 328 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2 Related work", + "text_level": 1, + "bbox": [ + 215, + 353, + 382, + 369 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Real image super-resolution. Image super-resolution entails restoring a High Resolution (HR) image given its Low Resolution (LR) observation. Solving this task for real images is especially challenging given the dramatic differences in real-world image distributions [10, 11, 17, 38]. These differences arise from varying image degradation processes, different imaging devices, and image signal processing methods, all of which are difficult to properly model and generalize. 
For this reason, real image super-resolution (or blind super-resolution) has received significant interest among the research community [11, 16, 32-34, 37, 38, 41]. While some methods attempt to learn the degradation process [5, 20, 31, 39], their success remains limited due to the lack of proper large scale training data [17], even while using some unsupervised methods [44]. In contrast, more popular approaches tackle the problem by explicitly modeling the degradation pipeline to create synthetic LR-HR pairs to use for training [15, 27, 34, 41]. Given, the wider success of the explicit degradation modeling approach, we elect to rely on the widely used RealESRGAN degradation pipeline [34] in training our model.", + "bbox": [ + 212, + 386, + 787, + 613 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Diffusion-based super-resolution. Since the early SRCNN [4] method, many deep learning-based solutions for blind super-resolution have been proposed [2, 11, 22, 24, 25, 34, 37, 41, 44]. Early work took advantage of this space by using semantic segmentation probability maps for guiding SR [35]. Most recent methods aim at taking advantage of learned generative priors to simplify the inverse imaging problem of blind image super-resolution. Usually, methods following this paradigm [34, 37, 41] rely on GANs [6]. More recently, diffusion models showed remarkable generative capabilities yielding impressive results across a range of applications [22, 42]. As such, in this paper, we follow several recent works [22, 24, 25, 32] and rely on diffusion-based generative models to tackle the super-resolution problem. While diffusion-based models achieve impressive results, their main shortcoming is the long inference time. Diffusion-based models require several inference steps through the model to yield a final output, thereby limiting their practical use. Therefore, in this paper, we tackle the important", + "bbox": [ + 212, + 628, + 787, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 385, + 126 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "problem of speeding up the inference of diffusion-based super-resolution.", + "bbox": [ + 215, + 146, + 735, + 161 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Guided distillation. Recognizing the inference speed shortcoming of diffusion models, several works have been proposed recently to address this issue [18, 19, 21, 26, 28]. These methods can be categorized into two main tacks. One approach tackles this problem at inference time by either proposing more efficient samplers [12, 28] or relying on higher-order solvers [18, 19]. More closely related to ours are methods that aim at directly training a diffusion model that can solve the generative problem at hand in fewer steps through temporal distillation [21, 26, 29]. Our method tackles the problem at training time as well but we propose scale distillation. Our main idea is to reduce the inference speed by progressively making the generative problem easier during training. 
Notably, our approach is orthogonal to temporal distillation and can be used in tandem with it.", + "bbox": [ + 212, + 176, + 787, + 342 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 YONOS-SR", + "text_level": 1, + "bbox": [ + 215, + 367, + 369, + 383 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this section, we describe YONOS-SR, our diffusion-based model for image super-resolution. First, we present an overview of the image super-resolution framework with latent diffusion models in Sec. 3.1. We then discuss our proposed scale distillation method that allows us to improve the performance with fewer sampling steps, e.g. 1-step, in Sec. 3.2. Finally, in Sec. 3.3, we discuss how the 1-step diffusion model allows for fine-tuning a decoder directly on top of the diffusion model, with a frozen U-Net.", + "bbox": [ + 212, + 400, + 787, + 506 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Super-resolution with latent diffusion models", + "text_level": 1, + "bbox": [ + 215, + 530, + 633, + 545 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Given a training set in the form of pairs of low and high-resolution images $(\mathbf{x}_h,\mathbf{x}_l)\sim p(\mathbf{x}_h,\mathbf{x}_l)$ , the task of image super-resolution involves estimating the probability distribution $p(\mathbf{x}_h|\mathbf{x}_l)$ . The stable diffusion framework uses a probabilistic diffusion model applied in the latent space of a pre-trained and frozen autoencoder. Let $\mathbf{z}_h = \mathcal{E}(\mathbf{x}_h),\mathbf{z}_l = \mathcal{E}(\mathbf{x}_l)$ be the corresponding projections of a given pair of low and high-resolution images $(\mathbf{x}_h,\mathbf{x}_l)$ , where $\mathcal{E}$ is the pre-trained encoder. The forward process of the diffusion model, $q(\mathbf{z}|\mathbf{z}_h)$ , is a Markovian Gaussian process defined as", + "bbox": [ + 212, + 556, + 787, + 676 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nq \left(\mathbf {z} _ {t} \mid \mathbf {z} _ {h}\right) = \mathcal {N} \left(\mathbf {z} _ {t}; \alpha_ {t} \mathbf {z} _ {h}, \sigma_ {t} \mathbf {I}\right), \quad \mathbf {z} = \left\{\mathbf {z} _ {t} \mid t \in [ 0, 1 ] \right\} \tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 338, + 691, + 784, + 707 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\mathbf{z}$ denotes the latent variable of the diffusion model and $\alpha_{t},\sigma_{t}$ define the noise schedule such that the log signal-to-noise ratio, $\lambda_t = \log [\alpha_t^2 /\sigma_t^2 ]$ , decreases with $t$ monotonically. During training, the model learns to reverse this diffusion process progressively, i.e. estimate $p(\mathbf{z}_{t - 1}|\mathbf{z}_t)$ , to generate new data from noise.", + "bbox": [ + 212, + 718, + 785, + 779 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The super-resolution objective function is derived by maximizing a variational lower bound of the data log-likelihood of $p(\mathbf{z}_h|\mathbf{z}_l)$ via approximating the backward denoising process of $p(\mathbf{z}_h|\mathbf{z}_t,\mathbf{z}_l)$ . Note that, for super-resolution, the denoising process is conditioned on the low-resolution input, $\mathbf{z}_l$ , as well.
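As a concrete companion to Eq. 1 and the noise schedule just described, the following minimal sketch shows how a noisy latent could be drawn from the forward process; it is an illustrative fragment rather than the authors' code, and the cosine schedule and the 4×64×64 latent shape are assumptions consistent with the surrounding text.

```python
import torch

def cosine_schedule(t: torch.Tensor):
    # Illustrative noise schedule with alpha_t^2 + sigma_t^2 = 1, so that the log-SNR
    # lambda_t = log(alpha_t^2 / sigma_t^2) decreases monotonically with t.
    alpha = torch.cos(0.5 * torch.pi * t)
    sigma = torch.sin(0.5 * torch.pi * t)
    return alpha, sigma

def sample_z_t(z_h: torch.Tensor, t: torch.Tensor):
    # Draw z_t ~ q(z_t | z_h) = N(alpha_t * z_h, sigma_t * I) as in Eq. 1, for a batch
    # of HR latents z_h of shape (B, 4, 64, 64) and times t in [0, 1].
    alpha, sigma = cosine_schedule(t)
    eps = torch.randn_like(z_h)
    z_t = alpha.view(-1, 1, 1, 1) * z_h + sigma.view(-1, 1, 1, 1) * eps
    return z_t, eps
```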
This can", + "bbox": [ + 212, + 780, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "be estimated by the function $\\hat{\\mathbf{z}}_{\\theta}(\\mathbf{z}_t,\\mathbf{z}_l,\\lambda_t)$ parametrized by a neural network. We can train this function via a weighted mean square error loss.", + "bbox": [ + 212, + 146, + 787, + 176 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\underset {\\theta} {\\operatorname {a r g m i n}} \\mathbb {E} _ {\\epsilon , t} [ \\omega (\\lambda_ {t}) | | \\hat {\\mathbf {z}} _ {\\theta} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l}, \\lambda_ {t}) - \\mathbf {z} _ {h} | | _ {2} ^ {2} ] \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 359, + 204, + 785, + 228 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "over uniformly sampled times $t \\in [0,1]$ and $\\mathbf{z}_t = \\alpha_t \\mathbf{z}_h + \\sigma_t \\epsilon$ , $\\epsilon \\sim \\mathcal{N}(0,I)$ . There are several choices of weighting function $\\omega(\\lambda_t)$ . We use the so-called v parameterization [26], $(1 + \\frac{\\alpha_t^2}{\\sigma_t^2})$ , throughout this paper.", + "bbox": [ + 212, + 239, + 784, + 291 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The inference process from a trained model involves a series of sequential calls, i.e. steps, of $\\hat{\\mathbf{z}}_{\\theta}$ , starting from $\\mathbf{z}_1 \\sim \\mathcal{N}(0, I)$ , where the quality of the generated image improves monotonically with the number of steps as shown in the qualitative examples of Fig .1 and quantitative results of Fig. 3. Several methods have been proposed to reduce the number of required steps at inference time [18, 19, 28]. Here, we use the widely used DDIM sampler [28], and yet see that the performance drops drastically with an extremely low number of steps. In the following, we introduce scale distillation to alleviate this shortcoming.", + "bbox": [ + 212, + 292, + 787, + 412 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.2 Scale distillation", + "text_level": 1, + "bbox": [ + 215, + 435, + 398, + 450 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The complexity of the image super-resolution task increases with the scale factor (SF). For example, a model trained for a lower SF ( $e.g. \\times 2$ ) takes as input a less degraded image compared to a larger SF ( $e.g. \\times 4$ ). Therefore, a diffusion model trained for $\\times 2$ magnification should require fewer inference steps to solve the HR image generation task compared to a model trained for the x4 scale factor.", + "bbox": [ + 212, + 460, + 787, + 537 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To alleviate the training complexity for larger scale factors, we build on this observation and propose a progressive scale distillation training strategy. In particular, we start by training a teacher for a lower SF that takes a less degraded image as input. We then use its prediction as a target to train the model for a higher factor as a student.", + "bbox": [ + 212, + 537, + 787, + 612 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Let $N$ be the target SF of interest. 
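To make the objective in Eq. 2 concrete, a minimal training-step sketch is given below; the `unet` call signature and the channel-wise concatenation of the noisy latent with the LR latent are assumptions in line with Fig. 2, not the released implementation.

```python
import torch

def sr_training_loss(unet, z_h, z_l, t):
    # Eq. 2: weighted MSE between the network's estimate of the clean HR latent and
    # the ground-truth z_h, conditioned on the LR latent z_l (assumed cosine schedule).
    a = torch.cos(0.5 * torch.pi * t).view(-1, 1, 1, 1)                   # alpha_t
    s = torch.sin(0.5 * torch.pi * t).view(-1, 1, 1, 1).clamp_min(1e-4)   # sigma_t, clamped near t = 0
    z_t = a * z_h + s * torch.randn_like(z_h)        # forward sample, Eq. 1
    z_hat = unet(torch.cat([z_t, z_l], dim=1), t)    # \hat{z}_theta(z_t, z_l, lambda_t)
    weight = 1.0 + a.pow(2) / s.pow(2)               # v-parameterization weight omega(lambda_t)
    return (weight * (z_hat - z_h).pow(2)).mean()
```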
Standard training involves making pairs of low and high-resolution images, where the low-resolution image is smaller than the HR image by a factor of $1 / N$ . The common approach for generating the training pairs is to gather a set of high-resolution images, perform synthetic degradation to obtain the corresponding low-resolution image and train a model that directly performs $\\times N$ magnification [22, 32, 34] using eq. 2. Instead, we start by training a standard diffusion-based teacher for a lower SF, using a less degraded LR image, e.g. $2 / N$ , as input and use its prediction to train the student.", + "bbox": [ + 212, + 613, + 787, + 734 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "More precisely, Let us assume $\\hat{\\mathbf{z}}_{\\phi}, \\hat{\\mathbf{z}}_{\\theta}$ be the teacher and student denoising models parameterized by $\\phi, \\theta$ respectively. To train the student for a factor of $N$ , we generate two degraded images for a given high-resolution image with factors $1/N, 2/N$ , with latent representations denoted by $\\mathbf{z}_l, \\mathbf{z}_l'$ respectively. That means $\\mathbf{z}_l'$ is less degraded compared to $\\mathbf{z}_l$ . Similar to the standard diffusion model training, we sample random noise at $t$ and add it to the high-resolution image to obtain $\\mathbf{z}_t$ . The scale distillation loss will be:", + "bbox": [ + 212, + 734, + 787, + 839 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 127 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/38a9bae124367f962eb6df6c7f926ca744746af1e5888a17a6c5ed30e9674a15.jpg", + "image_caption": [ + "Fig. 2: Training pipeline of proposed scale distillation. For a given HR image (e.g. size $512 \\times 512$ ) shown in green, we generate two degraded versions with factors of $2 / N, 1 / N$ (e.g. sizes $256 \\times 256$ and $128 \\times 128$ ), shown in yellow and red respectively. Both degraded images are resized back via bicubic upsampling to $512 \\times 512$ to be used as input to the encoder, which projects them to $4 \\times 64 \\times 64$ tensors. The less and more degraded LR image is used as input to the teacher and student respectively via concatenation with the noisy version of the HR image, i.e. $\\mathbf{z}_t$ . The teacher's output is used as the target for training the student. Note that the teacher is first trained independently for a smaller magnification scale and then frozen during student training." + ], + "image_footnote": [], + "bbox": [ + 287, + 143, + 717, + 358 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\underset {\\theta} {\\operatorname {a r g m i n}} \\mathbb {E} _ {\\epsilon , t} [ \\omega (\\lambda_ {t}) | | \\hat {\\mathbf {z}} _ {\\theta} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l}, \\lambda_ {t}) - \\hat {\\mathbf {z}} _ {\\phi} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l} ^ {\\prime}, \\lambda_ {t}) | | _ {2} ^ {2} ] \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 323, + 547, + 785, + 573 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where the teacher is trained for $N / 2$ magnification and frozen, and the student is initialized with the teacher's weights before the training. Note that we are using the latent diffusion framework that allows exactly the same architecture and input shapes for both the teacher and the student. 
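The scale distillation loss of Eq. 3 then only swaps the regression target: the frozen teacher, conditioned on the less degraded LR latent, supplies the target for the student. The sketch below is a minimal illustration under the same assumed interface as the previous snippet, not the authors' code.

```python
import torch

def scale_distillation_loss(student, teacher, z_h, z_l, z_l_prime, t):
    # Eq. 3: both models see the same noisy HR latent z_t; the teacher is conditioned on
    # the less degraded z_l_prime and its (frozen) prediction supervises the student.
    a = torch.cos(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    s = torch.sin(0.5 * torch.pi * t).view(-1, 1, 1, 1).clamp_min(1e-4)
    z_t = a * z_h + s * torch.randn_like(z_h)
    with torch.no_grad():
        target = teacher(torch.cat([z_t, z_l_prime], dim=1), t)
    pred = student(torch.cat([z_t, z_l], dim=1), t)
    weight = 1.0 + a.pow(2) / s.pow(2)
    return (weight * (pred - target).pow(2)).mean()
```

In such a setup the student would also be initialized from the teacher's weights before training, as described above.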
Although the input low-resolution images for the student and teacher are of different sizes, they are both resized to a fixed size and fed to the encoder, which projects them to a tensor with a fixed size of $4 \times 64 \times 64$ . Fig. 2 illustrates the proposed scale distillation process.", + "bbox": [ + 212, + 583, + 787, + 702 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The idea of scale distillation is in line with that of progressive temporal distillation [26]. While a standard denoising model would only use the final image as the target irrespective of the sampled time step $t$ (see Eq. 2), both scale and progressive temporal distillation rely on the teacher to provide a supervisory signal specific for step $t$ (see Eq. 3). In this way, the supervisory signal is attuned to the specific denoising step, providing stable and consistent supervision at every denoising step. Fig. 3 provides empirical support for our hypothesis. We observe a significant gap between the distilled models from $\times 2$ to $\times 4$ and $\times 2$ to $\times 8$ compared to the models that are directly trained for $\times 4$ and $\times 8$ , respectively.", + "bbox": [ + 212, + 704, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 784, + 125 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/51c5031199a3dfaae7a222b87d175103cd4bb5ac5c1ab0b85a00ec8965ca36d0.jpg", + "image_caption": [ + "×4" + ], + "image_footnote": [], + "bbox": [ + 282, + 185, + 480, + 303 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3c1f8b06bd14586d60a1964f2a841bcf04589e3cd52dc0b422c69e224ca2893e.jpg", + "image_caption": [ + "×8", + "Fig. 3: FID vs. number of DDIM steps on the DIV2K validation set obtained through bicubic degradation using SD for $\times 4$ and $\times 8$ magnifications trained with scale distillation and standard training. We use $\times 2 \rightarrow \times 4$ scale distillation for $\times 4$ and $\times 2 \rightarrow \times 4 \rightarrow \times 8$ for $\times 8$ , and compare with the standard training directly for $\times 4$ and $\times 8$ respectively. All results are obtained using the original SD decoder. The model trained with scale distillation outperforms the standard training by a large margin when using fewer steps for $\times 4$ . The gap between scale distillation and the standard training is significantly higher for $\times 8$ and remains steady even for large numbers of steps." + ], + "image_footnote": [], + "bbox": [ + 511, + 186, + 710, + 303 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The gap is especially striking when evaluated with few inference steps and, as expected, shrinks as the number of steps increases and quality saturates.", + "bbox": [ + 212, + 472, + 784, + 503 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Similar to the temporal progressive distillation [26], the proposed scale distillation process can be applied iteratively with higher scale factors at each training step. The first student is initialized from scratch and trained on the raw data, similar to the standard training. Consequently, this student becomes the new teacher for training the next scale factor. In this paper, we consider three distillation steps up to the scale factor of $\times 8$ starting from $\times 2$ , i.e.
$\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8$ . As it is shown in Fig. 3, scale distillation is significantly more effective for $\\times 8$ magnification where the LR image is of even lower quality, thereby reinforcing the importance of our proposed progressive scale training strategy.", + "bbox": [ + 212, + 503, + 787, + 640 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.3 Decoder fine-tuning", + "text_level": 1, + "bbox": [ + 214, + 662, + 426, + 678 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "While scale distillation improves the one-step inference noticeably, there is still a gap between the one-step model and the saturated performance with a larger number of steps, see Fig. 3. To fill this gap, we propose to fine-tune the decoder on top of the frozen one-step diffusion model resulting from scale distillation. That is, after training the diffusion model, we freeze the U-Net, apply one DDIM step for a given LR image, and use it as input to fine-tune the decoder for the SR task. We use the original loss that has been used for training the autoencoder [22]. Importantly, this fine-tuning strategy with the U-Net in place is only possible with a diffusion model that can work properly with one step as enabled by our scale distillation approach; see Table. 3. We empirically show that the", + "bbox": [ + 212, + 688, + 787, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 126 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "combination of our scale distillation approach with decoder fine-tuning yields a one-step model that can readily compete with models requiring a large number of inference steps.", + "bbox": [ + 212, + 146, + 782, + 191 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Implementation details. We use Stable diffusion v1.5 as our backbone and initialize our teacher with the text-to-image model. We use our own implementation of the v-parameterization with a cosine schedule. We use 4 A100 GPUs for all our experiments and train with a batch size of 60 with a gradient accumulation factor of 4.", + "bbox": [ + 212, + 205, + 787, + 282 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 214, + 303, + 375, + 320 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In this section, we evaluate our YONOS-SR against other methods targeting real image super-resolution at the standard $\\times 4$ scale factor in Sec. 4.1 and demonstrate that our proposed scale distillation approach generalizes to higher scale factors of $\\times 8$ in Sec. 4.2. We then provide qualitative results for $\\times 4$ and $\\times 8$ in Sec. 4.3. Finally, we perform ablation studies to highlight the role of our main contributions in Sec. 4.4.", + "bbox": [ + 212, + 333, + 787, + 426 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Evaluation on real image super resolution", + "text_level": 1, + "bbox": [ + 214, + 446, + 606, + 462 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We begin by evaluating the performance of our proposed YONOS-SR model in the standard real image super-resolution setting targeting $\\times 4$ scale factor.", + "bbox": [ + 214, + 469, + 785, + 501 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Datasets. Following previous work (e.g. 
[2,32,34,41]), we use DIV2K [1], DIV8K [7], Flickr2K [30], OST [36] and a subset of 10K images from the FFHQ training set [13] to train our model. We adopt the Real-ESRGAN [34] degradation pipeline to generate synthetic LR-HR pairs.", + "bbox": [ + 212, + 513, + 785, + 573 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We then evaluate our model on both synthetic and real datasets. Similar to [32], we use 3K LR-HR (128 → 512) pairs synthesized from the DIV2K validation set using the Real-ESRGAN degradation pipeline as our synthetic dataset. We also report results on the standard DIV2K validation split with bicubic degradations for completeness. For the real dataset, we use $128 \times 128$ center crops from the RealSR [11], DRealSR [38] and DPED-iphone [10] datasets.", + "bbox": [ + 212, + 574, + 787, + 667 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Evaluation metrics. We evaluate using various perceptual and image quality metrics, including LPIPS [43], FID [9] (where applicable), as well as the no-reference image quality metric, MUSIQ [14]. For the synthetic datasets, we also report standard PSNR and SSIM metrics, for reference.", + "bbox": [ + 214, + 679, + 785, + 739 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Baselines. As the main contribution of our paper targets improving the inference process of diffusion-based super-resolution, our main points of comparison are diffusion-based SR models, including the recent StableSR model [32], ResShift [40], and the original LDM model [22]. For completeness, we also include comparisons to other non-diffusion-based baselines, including: RealSR [11], BSRGAN [41], RealESRGAN [34], DASR [16] and FeMaSR [2].", + "bbox": [ + 212, + 753, + 787, + 847 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/77edf0af07cb2406b2fa6b1e728f9e81d03646b18605fae66c2fcb2ea222cfb4.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Datasets | Metrics | RealSR | BSRGAN | DASR | Real-ESRGAN+ | FeMaSR | LDM | ResShift | StableSR | YONOS (ours)
DIV2K Valid (RealESRGAN degradations) | FID ↓ | 49.49 | 44.22 | 49.16 | 37.64 | 35.87 | 26.47 | 30.45 | 24.44 | 21.86
 | LPIPS ↓ | 0.5276 | 0.3351 | 0.3543 | 0.3112 | 0.3199 | 0.2510 | 0.3076 | 0.3114 | 0.2310
 | PSNR ↑ | 24.62 | 24.58 | 24.47 | 24.28 | 23.06 | 23.32 | 24.62 | 23.26 | 24.74
 | SSIM ↑ | 0.5970 | 0.6269 | 0.6304 | 0.6372 | 0.5887 | 0.5762 | 0.6210 | 0.5726 | 0.6428
 | MUSIQ ↑ | 28.57 | 61.19 | 55.19 | 61.05 | 60.83 | 62.27 | 63.58 | 65.92 | 70.30
DIV2K Valid (bicubic degradations) | LPIPS ↓ | - | 0.2364 | 0.1696 | 0.2284 | - | 0.2323 | 0.1775 | 0.2580 | 0.1703
 | PSNR ↑ | - | 27.32 | 28.55 | 26.65 | - | 25.49 | 27.24 | 21.90 | 26.26
RealSR | LPIPS ↓ | 0.3570 | 0.2656 | 0.3134 | 0.2709 | 0.2937 | 0.3159 | 0.3279 | 0.3002 | 0.2479
 | MUSIQ ↑ | 38.26 | 63.28 | 41.21 | 60.36 | 59.06 | 58.90 | 59.87 | 65.88 | 69.21
DRealSR | LPIPS ↓ | 0.3938 | 0.2858 | 0.3099 | 0.2818 | 0.3157 | 0.3379 | 0.3870 | 0.3284 | 0.2721
 | MUSIQ ↑ | 26.93 | 57.16 | 42.41 | 54.26 | 53.71 | 53.72 | 54.13 | 58.51 | 66.26
DPED-iphone | MUSIQ ↑ | 45.60 | 45.89 | 32.68 | 42.42 | 49.95 | 44.23 | 38.59 | 50.48 | 59.45
 | # STEPS ↓ | - | - | - | - | - | 200 | 4 | 200 | 1
", + "bbox": [ + 218, + 143, + 777, + 268 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Table 1: Comparison to baselines. Results in Red and Blue correspond to best and second best results, resp. Cells with - indicate that there were no previously reported results using the considered baseline and corresponding metric.", + "bbox": [ + 215, + 270, + 784, + 310 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Results. Results summarized in Tab. 1 show that YONOS-SR outperforms all other diffusion-based SR methods, while using only one inference step, whereas other alternatives use 200 inference steps. These results highlight the efficiency of YONOS-SR in reducing the number of steps to one without compromising performance but indeed improving it further. Also, our model outperforms all considered baselines in 5 out of 7 metrics on the synthetic data and all comparison points on the real datasets.", + "bbox": [ + 212, + 344, + 784, + 450 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.2 Generalization to higher scale factors", + "text_level": 1, + "bbox": [ + 215, + 470, + 568, + 486 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "We now evaluate the generalization capability of our proposed scale distillation approach. To this end, we train our YONOS-SR model with one more iteration of scale distillation, thereby going from a model capable of handling $\\times 4$ magnifications to $\\times 8$ magnifications. We then fine-tune the decoder on top of the one-step $\\times 8$ diffusion model. To evaluate this model, we follow recent work [3], and evaluate on the same subset of ImageNet and FFHQ for $\\times 8$ magnification, i.e. $64 \\times 64 \\rightarrow 512 \\times 512$ . In particular, we select the same 1k subset of ImageNet test set by first ordering the 10k images by name and then selecting the 1k subset via interleaved sampling, i.e. using images of index 0, 10, 20, etc. To obtain the LR-HR pairs, we use $\\times 8$ average pooling degradations. In the case of FFHQ, we use the first 1k images of the validation set. We also evaluate using the same metrics and baselines reported in this recent work [3].", + "bbox": [ + 212, + 493, + 785, + 674 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "The results summarized in Tab. 2 demonstrate that our proposed one-step method generalizes well to higher scale factors, where it is able to achieve good results in terms of FID and LPIPS scores, which are known to better align with human observation, especially at higher magnification factors [24]. Notably, unlike baselines, our model has not been trained on ImageNet data. We use only $10\\mathrm{k}$ images of FFHQ in our training set.", + "bbox": [ + 212, + 675, + 785, + 765 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.3 Qualitative evaluation", + "text_level": 1, + "bbox": [ + 215, + 786, + 442, + 801 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In addition to extensive quantitative evaluations, we qualitatively compare one-step YONOS-SR with 200-step StableSR and standard diffusion-based SR (SD-", + "bbox": [ + 212, + 809, + 784, + 839 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "M. 
Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 127 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/c771e3ae9778fc241b9b90ee0fee4a35e24bd82df655e6a0419a959874d1b029.jpg", + "image_caption": [ + "(a)" + ], + "image_footnote": [], + "bbox": [ + 243, + 170, + 364, + 263 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/908bb75aaa052e5444c7a7d6f4693f968c94ac6f805741c821bf75acd5fdb5fb.jpg", + "image_caption": [ + "(b)" + ], + "image_footnote": [], + "bbox": [ + 367, + 172, + 486, + 263 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/2c167900d29cb02d87af2202ec7c2be66e3c7961e6f56ec644423dccb60b58f0.jpg", + "image_caption": [ + "(c)" + ], + "image_footnote": [], + "bbox": [ + 488, + 172, + 609, + 263 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/9aa69fe72f9a8d70bebe6b5bb9b9f3aff5336855020634833dca2ceedc0d87ee.jpg", + "image_caption": [ + "(d)" + ], + "image_footnote": [], + "bbox": [ + 611, + 172, + 730, + 265 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/a8cc9a86546513622047ec53b04d2ac89b96a36edd9fe6b1fab1a2d5eade7f05.jpg", + "image_caption": [ + "(a)" + ], + "image_footnote": [], + "bbox": [ + 243, + 309, + 364, + 401 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/67e1ef51f8bf256f3f80987f81668d90462a5ea3be86686fbc5dab64216b99ed.jpg", + "image_caption": [ + "(b)" + ], + "image_footnote": [], + "bbox": [ + 367, + 309, + 486, + 401 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/acdf924761d980a659c116994952304cf7fa3f2974ab5e37c54ae4460be1a618.jpg", + "image_caption": [ + "(c)" + ], + "image_footnote": [], + "bbox": [ + 488, + 309, + 607, + 401 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/3e27db1e4bee637ca789fa88e1c4dd09ec029857d4e5a7c777d54ce579395533.jpg", + "image_caption": [ + "(d)" + ], + "image_footnote": [], + "bbox": [ + 611, + 309, + 730, + 401 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/0c5cba40bd1c90df8aaff15ce11537aa80574905614ae618298e4a0d91bf988d.jpg", + "image_caption": [ + "(a)" + ], + "image_footnote": [], + "bbox": [ + 243, + 446, + 364, + 539 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/c61e86ace96644ff4ed22f4c84c64aeaed8093e6b634898e407cec2c5d7c38e3.jpg", + "image_caption": [ + "(b)" + ], + "image_footnote": [], + "bbox": [ + 367, + 446, + 486, + 539 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/6b75b67edbfd4d38a09947eed4abbceab0f002956fdd1e669d0034f232669cd3.jpg", + "image_caption": [ + "(c)" + ], + "image_footnote": [], + "bbox": [ + 488, + 446, + 609, + 539 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/76b209a0f120a6784bc8dcb9da09754e330bac7d1b4801e55b8974cb0b3efa99.jpg", + "image_caption": [ + "(d)", + "Fig. 4: Qualitative comparison on the validation set of DIV2K dataset: (a) 200-step StableSR (b) 200-step standard SD-SR (c) 1-step YONOS(ours) (d) the ground truth. SD-SR represents the standard Stable Diffusion-based SR model. 200-step StableSR and SD-SR tend to over-sharpen, adding artifacts that do not match the ground truth content. Our SR images match the most with the corresponding ground truth image; see the faces, Pepsi, and crocodile textures in the first, second, and third rows, respectively. The images are best seen in a display and zoomed in." 
+ ], + "image_footnote": [], + "bbox": [ + 611, + 446, + 730, + 539 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "SR) in Fig. 4. Our method generates the closest SR images to the ground truth in terms of detailed textures while taking only 1-step during the inference. These observations are in line with the numerical superiority of our method in the quantitative evaluations above.", + "bbox": [ + 212, + 684, + 787, + 744 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "As it is clearly demonstrated in Fig. 3, scale distillation is even more effective for $\\times 8$ compared to $\\times 4$ magnification. As a qualitative support, we compare the model trained directly for $\\times 8$ magnification without scale distillation to our model trained with three iterations of scale distillation $\\times 2\\rightarrow \\times 4\\rightarrow \\times 8$ in Fig. 5. Again, we use the validation set of DIV2K dataset. In line with the numerical analyses in Fig. 3, we observe that the model trained with scale distillation out-", + "bbox": [ + 212, + 750, + 787, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 116, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/1593eaecf963deb3f2ca889ae6bd00e11a858ceb3d06d5fd5f0a9f652b65d7bf.jpg", + "image_caption": [ + "(LR)" + ], + "image_footnote": [], + "bbox": [ + 217, + 180, + 336, + 275 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/4bd1fc4401795e4edfcd4e705006a0a6a967c3b794a42e0ae62c96c756c39459.jpg", + "image_caption": [ + "8 8", + "(64 steps)" + ], + "image_footnote": [], + "bbox": [ + 367, + 180, + 488, + 275 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/c67e480cb8323b088bb764ad4f7ee50accb7984e147e4253ae94c0eedb925271.jpg", + "image_caption": [ + "(4 steps)" + ], + "image_footnote": [], + "bbox": [ + 488, + 181, + 609, + 275 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/c83f145995c8d7e7db504f1d8c9d855cf4f0c13475a5d877874301187ca874c9.jpg", + "image_caption": [ + "(1 step)" + ], + "image_footnote": [], + "bbox": [ + 611, + 181, + 728, + 275 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/98e449cae1f0a5894b1a17e3f8e6654bc6a5976f197a660dba5ab10a955c4ba2.jpg", + "image_caption": [ + "(HR)" + ], + "image_footnote": [], + "bbox": [ + 215, + 318, + 338, + 412 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/60dafed7a0530187c0f075172d34e87e7c33c274bd6e0e29cc367efd02ec18d9.jpg", + "image_caption": [ + "eannnnnne", + "(64 steps)" + ], + "image_footnote": [], + "bbox": [ + 367, + 319, + 488, + 412 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/48f6340e5ad851e9eb417d20b808938e21affb4f3c7a8030500edd54a68286a5.jpg", + "image_caption": [ + "(4 steps)" + ], + "image_footnote": [], + "bbox": [ + 488, + 319, + 607, + 412 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/18fbc3a4f44ed175fc74a49abbb31236e4fe684ce6761085383fedce2ea75791.jpg", + "image_caption": [ + "(1 step)" + ], + "image_footnote": [], + "bbox": [ + 609, + 319, + 728, + 412 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/17d41dc09b3261d9601d38f6881aa572f1c1b2862d5cf3e68ae6ec28d6589c15.jpg", + "image_caption": [ + "(LR)", + "aee" + ], + "image_footnote": [], + "bbox": [ + 215, + 454, + 338, + 549 + ], + 
"page_idx": 11 + }, + { + "type": "image", + "img_path": "images/6dd96e572e8410ae31f707ec7f7d895970dbc3b6ef1fd587093e982f66422b73.jpg", + "image_caption": [ + "(64 steps)" + ], + "image_footnote": [], + "bbox": [ + 367, + 455, + 488, + 549 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/471d2a50f6ca06a1626076b9868d9041ab7b9490b3d477bc2b60ea97028454bc.jpg", + "image_caption": [ + "(4 steps)" + ], + "image_footnote": [], + "bbox": [ + 488, + 455, + 607, + 549 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/b3e30bf510e75e692415248f6a526f56d5a9d347630a644c665c472c78ab6f77.jpg", + "image_caption": [ + "(1 step)" + ], + "image_footnote": [], + "bbox": [ + 609, + 455, + 728, + 549 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/73b71fd4760d74e9535b5c910440e83be3f6733c46e453fbf084325f412cb714.jpg", + "image_caption": [ + "(HR)" + ], + "image_footnote": [], + "bbox": [ + 215, + 592, + 338, + 685 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/1b523942e03785179fa398d3e42a6a78ee1dbbc3616de7d420c03e66ff55d182.jpg", + "image_caption": [ + "Scale distillation $\\times 2\\uparrow \\uparrow \\times 4\\times 8$", + "(64 steps)", + "Fig. 5: Qualitative comparison on the validation set of DIV2K dataset for $\\times 8$ magnification when the model is trained directly for $\\times 8$ magnification without scale distillation (top row) and with three iterations of scale distillation $\\times 2\\rightarrow \\times 4\\rightarrow \\times 8$ (bottom row). We show the input LR image results with 1, 4, and 64 steps using the original decoder and the corresponding HR image for both models. The model trained with scale distillation outperforms the standard training with high margins. Specifically, due to poor LR input, the standard training fails to recover the relevant content. The images are best seen in a display and zoomed in." + ], + "image_footnote": [], + "bbox": [ + 367, + 593, + 488, + 686 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/74a5aa44e0bd85e98a0dcacdf7383c49daeb3227549af0f5b966c487b1d24a94.jpg", + "image_caption": [ + "(4 steps)" + ], + "image_footnote": [], + "bbox": [ + 488, + 593, + 607, + 686 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/2ce0dcc22a0d4bcf0b451c9b644357d74b96bb470c8d8cb445e7c90d66ac3ff1.jpg", + "image_caption": [ + "(1 step)" + ], + "image_footnote": [], + "bbox": [ + 609, + 593, + 728, + 686 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 126 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/e73d077503b500998ad5e8446455621bdc1ceb203c0ee54640edc36f86995099.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Method | ImageNet FID ↓ | ImageNet LPIPS ↓ | ImageNet PSNR ↑ | FFHQ FID ↓ | FFHQ LPIPS ↓ | FFHQ PSNR ↑
LDPS | 61.09 | 0.475 | 23.21 | 36.81 | 0.292 | 28.78
GML-DPS [23] | 60.36 | 0.456 | 23.21 | 41.65 | 0.318 | 28.50
PSLD [23] | 60.81 | 0.471 | 23.17 | 36.93 | 0.335 | 26.62
LDIR [8] | 63.46 | 0.480 | 22.23 | 36.04 | 0.345 | 25.79
P2L [3] | 51.81 | 0.386 | 23.38 | 31.23 | 0.290 | 28.55
YONOS (ours) | 34.59 | 0.241 | 22.80 | 21.41 | 0.161 | 26.08
", + "bbox": [ + 290, + 143, + 709, + 251 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Table 2: Comparison to baselines on ImageNet subset with x8 magnification factor. The results for other methods are taken from [3].", + "bbox": [ + 215, + 252, + 782, + 280 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "performs the standard training in terms of recovering the corresponding content and details. Note that, the problem of $\\times 8$ magnification is of significantly higher complexity compared to $\\times 4$ due to poor LR input. Notable for these $\\times 8$ qualitative evaluations we use the original decoder (i.e. these results are obtained before the decoder finetuning stage) to emphasize the impact of scale distillation.", + "bbox": [ + 212, + 287, + 784, + 362 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4.4 Ablation study", + "text_level": 1, + "bbox": [ + 215, + 378, + 385, + 395 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "We now study the impact of the various components introduced in our work. To this end, we use the standard DIV2K validation set with $\\times 4$ low-resolution images obtained through bicubic degradation [1]. We use the FID metric as it is a standard metric for assessing the quality of generative models. Our initial investigation also revealed that FID correlates the most with the human evaluation of the generated images. The validation set of the DIV2K dataset includes only 100 samples. To obtain more reliable FID scores, we extract 30 random $128 \\times 128$ patches and their corresponding $512 \\times 512$ HR counterparts from each image in the standard DIV2K bicubic validation set, resulting in a total of 3k LR-HR pairs. For completeness, we also report LPIPS, PSNR, and SSIM scores.", + "bbox": [ + 212, + 402, + 784, + 553 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Impact of scale distillation. We begin by evaluating the impact of our proposed scale distillation on speeding up inference time. To this end, we run two stable diffusion (SD) models trained for $\\times 4$ super-resolution (SR), with various numbers of inference steps. The first model is a standard SD super-resolution model trained directly for target $\\times 4$ super-resolution (i.e. SD-SR), while the second model is trained with our proposed scale distillation from $\\times 2$ magnification to $\\times 4$ . We use the same model, training set, and degradation pipeline in training both models. The only difference is the use of our scale distillation in the later model. Specifically, we start with training a teacher for $\\times 2$ magnification using raw data as a denoising target. We use the $\\times 2$ model as a frozen teacher and use its prediction to train a student for $\\times 4$ magnification. The results summarized in Fig. 3 speaks decisively in favor of our scale distillation approach. We can see that the model trained with the proposed scale distillation performs significantly better than direct $\\times 4$ training when using only one step.", + "bbox": [ + 212, + 568, + 784, + 779 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Scale distillation outperforms the standard training more significantly for $\\times 8$ magnification where we perform three training iterations for scale distillation, i.e. $\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8$ . 
One reason for the larger gap for $\\times 8$ magnification is that the SR task is more ambiguous for $\\times 8$ magnification due to lower quality input.", + "bbox": [ + 212, + 779, + 784, + 839 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "As a result, the model benefits more from the more simplified supervisory signal obtained from scale distillation. Note that we use the original SD decoder (i.e. no decoder finetuning) for this experiment to analyze the impact of the scale distillation independently of decoder fine-tuning.", + "bbox": [ + 212, + 146, + 787, + 207 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Impact of decoder fine-tuning. One of the direct consequences of having a diffusion model that can yield good results in one denoising step is that it allows for decoder fine-tuning with the U-Net in place, as it will directly give a good starting point to the decoder. To validate the importance of the input given to the decoder prior to fine-tuning and, thereby, the importance of YONOS-SR, we experiment with the standard SD-SR model and our scale distillation model. In both cases, we freeze the U-Net and only allow the", + "bbox": [ + 212, + 224, + 495, + 436 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "models to do 1 denoising step. We then feed their output to the decoder and fine-tune it following the same loss used in the original stable diffusion model [22].", + "bbox": [ + 212, + 436, + 785, + 467 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The results summarized in Tab. 3 validate the importance of having a good initial input from the diffusion model prior to decoder fine-tuning. The left chunk shows that the model trained with scale distillation outperforms the standard training with a good margin when using the original decoder, indicating that the scale distillation results in a U-Net that provides a higher quality input for the decoder. Moreover, as we can see in the right chunk of Tab. 3, fine-tuning the decoder on top of both 1-step models improves the performance. However, the model with scale distillation yields significantly better results than the standard SD-SR directly trained for the target magnification. Once again, the impact of scale distillation is more sensible for $\\times 8$ magnification than $\\times 4$ , which highlights the importance of our approach in such difficult settings. Importantly, this fine-tuning strategy is not computationally feasible with diffusion models that require many denoising steps to give a reasonable starting point for the decoder.", + "bbox": [ + 212, + 468, + 787, + 664 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/dae694401bb3c342b3af01503134343711c5059f370cfecf849b04fcfa71f032.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Decoder | Original | Fine-tuned
Scale distillation | ✗ | ✓ | ✗ | ✓
FID ↓ | 27.93 | 23.96 | 16.26 | 15.54
LPIPS ↓ | 0.227 | 0.207 | 0.163 | 0.159
PSNR ↑ | 25.94 | 26.24 | 25.73 | 26.30
SSIM ↑ | 0.711 | 0.714 | 0.713 | 0.727
FID ↓ | 102.92 | 66.90 | 41.54 | 28.47
LPIPS ↓ | 0.541 | 0.403 | 0.305 | 0.243
PSNR ↑ | 21.08 | 24.46 | 21.53 | 23.96
SSIM ↑ | 0.541 | 0.647 | 0.528 | 0.632
", + "bbox": [ + 532, + 237, + 756, + 351 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step.", + "bbox": [ + 503, + 362, + 787, + 405 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 214, + 686, + 359, + 704 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In summary, in this paper, we introduced the first fast stable diffusion-based super-resolution method. To achieve this, we introduced scale distillation, an approach that allows us to tackle the SR problem in as little as one step. Having a fast diffusion model allowed us to directly fine-tune the decoder, which we show yields state-of-the-art results, even at high magnification factors and only using a single step. We hope that the proposed distillation approach could be adapted for other inverse imaging problems (e.g. image inpainting), which we believe is an interesting direction for future research.", + "bbox": [ + 212, + 719, + 787, + 839 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step.", + "bbox": [ + 503, + 362, + 787, + 405 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 127 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 143, + 321, + 159 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)", + "2. Chen, C., Shi, X., Qin, Y., Li, X., Han, X., Yang, T., Guo, S.: Real-world blind super-resolution via feature matching with implicit high-resolution priors. In: ACM International Conference on Multimedia (2022)", + "3. Chung, H., Ye, J.C., Milanfar, P., Delbracio, M.: Prompt-tuning latent diffusion models for inverse problems. In: arXiv preprint arXiv: 2310.01110 (2023)", + "4. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision (2014)", + "5. Fritsche, M., Gu, S., Timofte, R.: Frequency separation for real-world superresolution. In: IEEE International Conference on Computer Vision - Workshops (2019)", + "6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances on Neural Information Processing Systems (2014)", + "7. Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., Timofte, R.: Div8k: Diverse 8k resolution image dataset. In: IEEE International Conference on Computer Vision - Workshops (2019)", + "8. He, L., Yan, H., Luo, M., Luo, K., Wang, W., Du, W., Chen, H., Yang, H., Zhang, Y.: Iterative reconstruction based on latent diffusion model for sparse data reconstruction. In: arXiv preprint arXiv:2307.12070 (2023)", + "9. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
In: Advances on Neural Information Processing Systems (2017)", + "0. Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Gool, L.V.: Dslr-quality photos on mobile devices with deep convolutional networks. In: IEEE International Conference on Computer Vision (2017)", + "1. Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., Huang, F.: Real-world super-resolution via kernel estimation and noise injection. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2020)", + "2. Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., Mitliagkas, I.: Gotta go fast when generating data with score-based models. In: arXiv preprint arXiv:2105.14080 (2021)", + "3. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)", + "4. Ke, J., Wang, Q., Wang, Y., Milanfar, P., Yan, F.: Musiq: Multi-scale image quality transformer. In: IEEE International Conference on Computer Vision (2021)", + "5. Liang, J., Zhang, K., Gu, S., Van Gool, L., Timofte, R.: Flow-based kernel prior with application to blind superresolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)", + "6. Liang, J., Zeng, H., Zhang, L.: Efficient and degradation-adaptive network for real-world image super-resolution. In: European Conference on Computer Vision (2022)", + "7. Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image superresolution: A survey and beyond. In: arXiv preprint arXiv:2107.03055 (2021)" + ], + "bbox": [ + 225, + 178, + 784, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "18. Lu, C., Zhou, Y., Bao, F., Chen, J., LI, C., Zhu, J.: Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In: Advances on Neural Information Processing Systems (2022)", + "19. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., Zhu, J.: Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. In: arxiv prepring arxiv: 2211.01095 (2023)", + "20. Maeda, S.: Unpaired image super-resolution using pseudo-supervision. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)", + "21. Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., Salimans, T.: On distillation of guided diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2023)", + "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2022)", + "23. Rout, L., Raoof, N., Daras, G., Caramanis, C., and Sanjay Shakkottai, A.G.D.: Solving linear inverse problems provably via posterior sampling with latent diffusion models. In: NeurIPS (2023)", + "24. Sahak, H., Watson, D., Sahara, C., Fleet, D.: Denoising diffusion probabilistic models for robust image super-resolution in the wild. In: arXiv preprint arXiv: 2302.07864 (2023)", + "25. Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D.J., Norouzi, M.: Image superresolution via iterative refinement. preprint arXiv: 2104.07636 (2021)", + "26. Salimans, T., Ho, J.: Progressive distillation for fast sampling of diffusion models. 
In: International Conference on Learning Representations (2022)", + "27. Shocher, A., Cohen, N., Irani, M.: \"zero-shot\" superresolution using deep internal learning. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)", + "28. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021)", + "29. Song, Y., Dhariwal, P., Chen, M., Sutskever, I.: Consistency models. arXiv preprint arXiv:2303.01469 (2023)", + "30. Timofte, R., Agustsson, E., Gool, L.V., Yang, M., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)", + "31. Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., Wen, F.: Bringing old photos back to life. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)", + "32. Wang, J., Yue, Z., Zhou, S., Chan, K.C., Loy, C.C.: Exploiting diffusion prior for real-world image super-resolution. In: arXiv preprint arXiv:2305.07015 (2023)", + "33. Wang, L., Wang, Y., Dong, X., Xu, Q., Yang, J., An, W., Guo, Y.: Unsupervised degradation representation learning for blind superresolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)", + "34. Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: IEEE International Conference on Computer Vision - Workshops (2021)", + "35. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)", + "36. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + ], + "bbox": [ + 215, + 146, + 784, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "M. Noroozi et al.", + "bbox": [ + 271, + 114, + 387, + 127 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "37. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: Enhanced super-resolution generative adversarial networks. In: European Conference on Computer Vision - Workshops (2018)", + "38. Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., Lin, L.: Component divide-and-conquer for real-world image super-resolution. In: European Conference on Computer Vision (2020)", + "39. Yan, Y., Liu, C., Chen, C., Sun, X., Jin, L., Peng, X., Zhou, X.: Fine-grained attention and feature-sharing generative adversarial networks for single image superresolution. In: IEEE Transactions on Multimedia (2021)", + "40. Yue, Z., Wang, J., Change Loy, C.: Ressift: Efficient diffusion model for image super-resolution by residual shifting. In: NeurIPS (2023)", + "41. Zhang, K., Liang, J., Van Gool, L., Timofte, R.: Designing a practical degradation model for deep blind image super-resolution. In: IEEE International Conference on Computer Vision (2021)", + "42. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: IEEE International Conference on Computer Vision (2023)", + "43. 
Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)", + "44. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (2017)" + ], + "bbox": [ + 212, + 146, + 787, + 452 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "YONOS-SR", + "bbox": [ + 648, + 114, + 730, + 126 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_model.json b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d7b61df2f733bbbbd1d3355272f0e83e40b1b9bb --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_model.json @@ -0,0 +1,2555 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.225, + 0.141, + 0.78, + 0.186 + ], + "angle": 0, + "content": "You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation" + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.214, + 0.763, + 0.244 + ], + "angle": 0, + "content": "Mehdi Noroozi, Isma Hadji, Brais Martinez, Adrian Bulat, and Georgios Tzimiropoulos" + }, + { + "type": "text", + "bbox": [ + 0.37, + 0.256, + 0.635, + 0.284 + ], + "angle": 0, + "content": "Samsung AI Cambridge {m.noroozi,isma.hadji}@samsung.com" + }, + { + "type": "text", + "bbox": [ + 0.263, + 0.326, + 0.74, + 0.617 + ], + "angle": 0, + "content": "Abstract. In this paper, we introduce YONOS-SR, a novel stable diffusion based approach for image super-resolution that yields state-of-the-art results using only a single DDIM step. Specifically, we propose a novel scale distillation approach to train our SR model. Instead of directly training our SR model on the scale factor of interest, we start by training a teacher model on a smaller magnification scale, thereby making the SR problem simpler for the teacher. We then train a student model for a higher magnification scale, using the predictions of the teacher as a target during the training. This process is repeated iteratively until we reach the target scale factor of the final model. The rationale behind our scale distillation is that the teacher aids the student diffusion model training by i) providing a target adapted to the current noise level rather than using the same target coming from ground truth data for all noise levels and ii) providing an accurate target as the teacher has a simpler task to solve. We empirically show that the distilled model significantly outperforms the model trained for high scales directly, especially with few steps during inference. Having a strong diffusion model that requires only one step allows us to freeze the U-Net and fine-tune the decoder on top of it. 
We show that the combination of spatially distilled U-Net and fine-tuned decoder outperforms state-of-the-art methods requiring 200 steps with only one single step.\\(^{1}\\)" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.645, + 0.377, + 0.661 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.68, + 0.788, + 0.801 + ], + "angle": 0, + "content": "Diffusion models have shown impressive performance in various image generation tasks [22, 42], including image super-resolution (SR) [3, 24, 25, 32]. However, the large number of sequential denoising passes required by the sampling strategy results in extreme computational cost, even for stable diffusion-based models (SD) that operate in the latent space of an autoencoder. Recently, several approaches have been proposed to reduce the number of sampling steps [18, 26, 28, 29]. Unfortunately, such approaches usually compromise performance, especially for the lower number of steps." + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.811, + 0.787, + 0.84 + ], + "angle": 0, + "content": "1 The code will be available here once all approvals are processed: https://github.com/SamsungLabs/yonos" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.128 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.177, + 0.765, + 0.725 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.727, + 0.789, + 0.84 + ], + "angle": 0, + "content": "Fig. 1: Qualitative comparison for \\(\\times 4\\) and \\(\\times 8\\) magnifications. Each column shows top to bottom LR input image, 1 and 200 step SD-SR, 1-step YONOS-SR(ours). SD-SR represents the standard Stable Diffusion-based SR model. The 1-step SD-SR method lacks quality in terms of detailed textures compared to 200-steps of the same model; see building texture in the first column and hairs in the middle column. In contrast, our method outperforms 200-steps SD-SR with only one step, especially for \\(\\times 8\\) magnification where SD-SR fails to recover the details even with 200 steps. Samples are taken from DIV2K validation set. Images are best seen in a display and zoomed in." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.328 + ], + "angle": 0, + "content": "Typically, diffusion-based models yield the best results on image patches of similar sizes to those seen during training (e.g. \\(64 \\times 64\\) for SD [22]). On the other hand, super-resolution applications require operating in high-resolution settings, drastically exacerbating the computational issues of diffusion-based models. For example, a SR model that aims for a magnification of \\(\\times 4\\) going from \\(256 \\times 256\\) to \\(1024 \\times 1024\\) requires dividing the input image into 16 patches of \\(64 \\times 64\\) and running the model on each patch individually, making a large number of steps prohibitive for realistic use cases. 
Using state-of-the-art step-reduction strategy, such as more efficient samplers [18, 19, 28] can partially alleviate this issue but still falls widely short of practical needs. For example, going down to the target of 1 DDIM step results in a significant drop in performance compared to a typical model that does 200 inference steps, as shown in Fig. 1." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.332, + 0.788, + 0.528 + ], + "angle": 0, + "content": "One differentiating characteristic of the super-resolution task is that it is conditioned on the low-resolution (LR) input image to yield the target high-resolution (HR) image. Unlike the task of text-to-image generation, which relies on text conditioning, the LR image provides closer content to the target HR image, especially at lower scale factors. Therefore, conditioning the diffusion model on the LR image at low-scale factors makes the task inherently simpler for the diffusion model. In this paper, we take advantage of this peculiarity and introduce a novel training strategy dubbed scale distillation. While typical diffusion-based SR methods train the model for super-resolution by conditioning directly on the LR image at the target scale factor, we instead propose a progressive training approach, where we start by training a model for lower scale factors (i.e. where the conditioning signal is closer to the target) and progressively increase to the target scale factor using the previously trained model as a teacher." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.532, + 0.788, + 0.698 + ], + "angle": 0, + "content": "More specifically, instead of using the raw data to train a model for large scale factors, scale distillation obtains a rich and accurate supervisory signal from a teacher trained for a smaller scale factor. We first train a teacher that takes a less degraded image as input and, therefore, has an easier task to solve during training. Then, we train a model for a larger scale factor as a student while initializing it with the same weights as the teacher, which is now frozen. For a given time step during the training, we feed both teacher and student with the same noisy version of the HR image. However, we condition the teacher with the less degraded LR image (i.e. using the same scale that was used during teacher training), while we condition the student on the target (more degraded) LR image. We then use the teacher's prediction as a target to train the student." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.701, + 0.788, + 0.777 + ], + "angle": 0, + "content": "This training strategy has two direct advantages: i) Unlike typical training where the supervisory signal is somewhat ambiguous as the target is the same for all noise levels, our student receives its target from the teacher and is therefore adaptive to the noise level. ii) The target is more accurate, especially in terms of the finer detail, because the teacher takes a less degraded LR image as input." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.788, + 0.841 + ], + "angle": 0, + "content": "The proposed scale distillation approach allows the model to solve the SR task in fewer steps as we have simplified the task for the student. In fact, we show that models trained with our approach improve significantly when a few steps are used during the inference, e.g. one step, see Fig. 3. 
Therefore, a direct" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.387, + 0.127 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.223 + ], + "angle": 0, + "content": "advantage of the proposed approach is that fine-tuning the decoder directly on top of the diffusion model becomes computationally tractable due to the single inference step required. Taking advantage of this fine-tuning, we show that You Only Need One Step (YONOS)-SR outperforms state-of-the-art diffusion-based SR methods that require a large number (e.g. 200) of inference steps." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.223, + 0.785, + 0.329 + ], + "angle": 0, + "content": "In summary, our contributions are threefold: I) We introduce scale distillation to train SD models with a more accurate and fine supervisory signal for image super-resolution tasks. II) We show that our proposed scale distillation strategy yields more efficient SD models that allow for directly fine-tuning the decoder on top of a frozen one-step diffusion model. III) We show that combining scale distillation followed by decoder fine-tuning yields state-of-the-art results on the SR task, even at high magnification factors, while requiring only one step." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.354, + 0.383, + 0.37 + ], + "angle": 0, + "content": "2 Related work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.387, + 0.788, + 0.614 + ], + "angle": 0, + "content": "Real image super-resolution. Image super-resolution entails restoring a High Resolution (HR) image given its Low Resolution (LR) observation. Solving this task for real images is especially challenging given the dramatic differences in real-world image distributions [10, 11, 17, 38]. These differences arise from varying image degradation processes, different imaging devices, and image signal processing methods, all of which are difficult to properly model and generalize. For this reason, real image super-resolution (or blind super-resolution) has received significant interest among the research community [11, 16, 32-34, 37, 38, 41]. While some methods attempt to learn the degradation process [5, 20, 31, 39], their success remains limited due to the lack of proper large scale training data [17], even while using some unsupervised methods [44]. In contrast, more popular approaches tackle the problem by explicitly modeling the degradation pipeline to create synthetic LR-HR pairs to use for training [15, 27, 34, 41]. Given, the wider success of the explicit degradation modeling approach, we elect to rely on the widely used RealESRGAN degradation pipeline [34] in training our model." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.629, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Diffusion-based super-resolution. Since the early SRCNN [4] method, many deep learning-based solutions for blind super-resolution have been proposed [2, 11, 22, 24, 25, 34, 37, 41, 44]. Early work took advantage of this space by using semantic segmentation probability maps for guiding SR [35]. Most recent methods aim at taking advantage of learned generative priors to simplify the inverse imaging problem of blind image super-resolution. Usually, methods following this paradigm [34, 37, 41] rely on GANs [6]. 
More recently, diffusion models showed remarkable generative capabilities yielding impressive results across a range of applications [22, 42]. As such, in this paper, we follow several recent works [22, 24, 25, 32] and rely on diffusion-based generative models to tackle the super-resolution problem. While diffusion-based models achieve impressive results, their main shortcoming is the long inference time. Diffusion-based models require several inference steps through the model to yield a final output, thereby limiting their practical use. Therefore, in this paper, we tackle the important" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.147, + 0.736, + 0.162 + ], + "angle": 0, + "content": "problem of speeding up the inference of diffusion-based super-resolution." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.177, + 0.788, + 0.343 + ], + "angle": 0, + "content": "Guided distillation. Recognizing the inference speed shortcoming of diffusion models, several works have been proposed recently to address this issue [18, 19, 21, 26, 28]. These methods can be categorized into two main tacks. One approach tackles this problem at inference time by either proposing more efficient samplers [12, 28] or relying on higher-order solvers [18, 19]. More closely related to ours are methods that aim at directly training a diffusion model that can solve the generative problem at hand in fewer steps through temporal distillation [21, 26, 29]. Our method tackles the problem at training time as well but we propose scale distillation. Our main idea is to reduce the inference speed by progressively making the generative problem easier during training. Notably, our approach is orthogonal to temporal distillation and can be used in tandem with it." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.368, + 0.37, + 0.384 + ], + "angle": 0, + "content": "3 YONOS-SR" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.401, + 0.788, + 0.507 + ], + "angle": 0, + "content": "In this section, we describe YONOS-SR, our diffusion-based model for image super-resolution. First, we present an overview of the image super-resolution framework with the latent diffusion models in Sec. 3.1. We then discuss our proposed scale distillation method that allows us to improve the performance with fewer sampling steps, e.g. 1-step, in Sec. 3.2. Finally, in Sec. 3.3, we discuss how the 1-step diffusion model allows for fine-tuning a decoder directly on top of the diffusion model, with a frozen U-Net." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.531, + 0.634, + 0.546 + ], + "angle": 0, + "content": "3.1 Super-resolution with latent diffusion models" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.557, + 0.788, + 0.678 + ], + "angle": 0, + "content": "Given a training set in the form of pairs of low and high-resolution images \\((\\mathbf{x}_h,\\mathbf{x}_l)\\sim p(\\mathbf{x}_h,\\mathbf{x}_l)\\), the task of image super-resolution involves estimating the probability distribution of \\(p(\\mathbf{x}_h|\\mathbf{x}_l)\\). The stable diffusion framework uses a probabilistic diffusion model applied on the latent space of a pre-trained and frozen autoendoer. 
Let us assume that \\(\\mathbf{z}_h = \\mathcal{E}(\\mathbf{x}_h),\\mathbf{z}_l = \\mathcal{E}(\\mathbf{x}_l)\\) be the corresponding projection of a given low and high-resolution images \\((\\mathbf{x}_h,\\mathbf{x}_l)\\), where \\(\\mathcal{E}\\) is the pre-trained encoder. The forward process of the diffusion model, \\(q(\\mathbf{z}|\\mathbf{z}_h)\\) is a Markovian Gaussian process defined as" + }, + { + "type": "equation", + "bbox": [ + 0.339, + 0.692, + 0.785, + 0.708 + ], + "angle": 0, + "content": "\\[\nq \\left(\\mathbf {z} _ {t} \\mid \\mathbf {z} _ {h}\\right) = \\mathcal {N} \\left(\\mathbf {z} _ {t}; \\alpha_ {t} \\mathbf {z} _ {h}, \\sigma_ {t} \\mathbf {I}\\right), \\quad \\mathbf {z} = \\left\\{\\mathbf {z} _ {t} \\mid t \\in [ 0, 1 ] \\right\\} \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.719, + 0.786, + 0.78 + ], + "angle": 0, + "content": "where \\(\\mathbf{z}\\) denotes the latent variable of the diffusion model and \\(\\alpha_{t},\\sigma_{t}\\) define the noise schedule such that the log signal-to-noise ratio, \\(\\lambda_t = \\log [\\alpha_t^2 /\\sigma_t^2 ]\\) , decreases with \\(t\\) monotonically. During training, the model learns to reverse this diffusion process progressively, i.e. estimate \\(p(\\mathbf{z}_{t - 1}|\\mathbf{z}_t)\\) , to generate new data from noise." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.781, + 0.788, + 0.841 + ], + "angle": 0, + "content": "The super-resolution objective function is derived by maximizing a variational lower bound of the data log-likelihood of \\( p(\\mathbf{z}_h|\\mathbf{z}_l) \\) via approximating the backward denoising process of \\( p(\\mathbf{z}_h|\\mathbf{z}_t,\\mathbf{z}_l) \\). Note that, for super-resolution, the denoising process is conditioned on the low-resolution input, \\( \\mathbf{z}_l \\), as well. This can" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.128 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.178 + ], + "angle": 0, + "content": "be estimated by the function \\(\\hat{\\mathbf{z}}_{\\theta}(\\mathbf{z}_t,\\mathbf{z}_l,\\lambda_t)\\) parametrized by a neural network. We can train this function via a weighted mean square error loss." + }, + { + "type": "equation", + "bbox": [ + 0.361, + 0.205, + 0.786, + 0.229 + ], + "angle": 0, + "content": "\\[\n\\underset {\\theta} {\\operatorname {a r g m i n}} \\mathbb {E} _ {\\epsilon , t} [ \\omega (\\lambda_ {t}) | | \\hat {\\mathbf {z}} _ {\\theta} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l}, \\lambda_ {t}) - \\mathbf {z} _ {h} | | _ {2} ^ {2} ] \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.241, + 0.785, + 0.292 + ], + "angle": 0, + "content": "over uniformly sampled times \\( t \\in [0,1] \\) and \\( \\mathbf{z}_t = \\alpha_t \\mathbf{z}_h + \\sigma_t \\epsilon \\), \\( \\epsilon \\sim \\mathcal{N}(0,I) \\). There are several choices of weighting function \\( \\omega(\\lambda_t) \\). We use the so-called v parameterization [26], \\( (1 + \\frac{\\alpha_t^2}{\\sigma_t^2}) \\), throughout this paper." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.293, + 0.788, + 0.414 + ], + "angle": 0, + "content": "The inference process from a trained model involves a series of sequential calls, i.e. 
steps, of \\(\\hat{\\mathbf{z}}_{\\theta}\\), starting from \\(\\mathbf{z}_1 \\sim \\mathcal{N}(0, I)\\), where the quality of the generated image improves monotonically with the number of steps as shown in the qualitative examples of Fig .1 and quantitative results of Fig. 3. Several methods have been proposed to reduce the number of required steps at inference time [18, 19, 28]. Here, we use the widely used DDIM sampler [28], and yet see that the performance drops drastically with an extremely low number of steps. In the following, we introduce scale distillation to alleviate this shortcoming." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.436, + 0.4, + 0.451 + ], + "angle": 0, + "content": "3.2 Scale distillation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.462, + 0.788, + 0.538 + ], + "angle": 0, + "content": "The complexity of the image super-resolution task increases with the scale factor (SF). For example, a model trained for a lower SF (\\(e.g. \\times 2\\)) takes as input a less degraded image compared to a larger SF (\\(e.g. \\times 4\\)). Therefore, a diffusion model trained for \\(\\times 2\\) magnification should require fewer inference steps to solve the HR image generation task compared to a model trained for the x4 scale factor." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.539, + 0.788, + 0.613 + ], + "angle": 0, + "content": "To alleviate the training complexity for larger scale factors, we build on this observation and propose a progressive scale distillation training strategy. In particular, we start by training a teacher for a lower SF that takes a less degraded image as input. We then use its prediction as a target to train the model for a higher factor as a student." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.788, + 0.735 + ], + "angle": 0, + "content": "Let \\( N \\) be the target SF of interest. Standard training involves making pairs of low and high-resolution images, where the low-resolution image is smaller than the HR image by a factor of \\( 1 / N \\). The common approach for generating the training pairs is to gather a set of high-resolution images, perform synthetic degradation to obtain the corresponding low-resolution image and train a model that directly performs \\( \\times N \\) magnification [22, 32, 34] using eq. 2. Instead, we start by training a standard diffusion-based teacher for a lower SF, using a less degraded LR image, e.g. \\( 2 / N \\), as input and use its prediction to train the student." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.788, + 0.84 + ], + "angle": 0, + "content": "More precisely, Let us assume \\(\\hat{\\mathbf{z}}_{\\phi}, \\hat{\\mathbf{z}}_{\\theta}\\) be the teacher and student denoising models parameterized by \\(\\phi, \\theta\\) respectively. To train the student for a factor of \\(N\\), we generate two degraded images for a given high-resolution image with factors \\(1/N, 2/N\\), with latent representations denoted by \\(\\mathbf{z}_l, \\mathbf{z}_l'\\) respectively. That means \\(\\mathbf{z}_l'\\) is less degraded compared to \\(\\mathbf{z}_l\\). Similar to the standard diffusion model training, we sample random noise at \\(t\\) and add it to the high-resolution image to obtain \\(\\mathbf{z}_t\\). 
The scale distillation loss will be:" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.126 + ], + "angle": 0, + "content": "7" + }, + { + "type": "image", + "bbox": [ + 0.288, + 0.144, + 0.718, + 0.359 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.369, + 0.788, + 0.495 + ], + "angle": 0, + "content": "Fig. 2: Training pipeline of proposed scale distillation. For a given HR image (e.g. size \\(512 \\times 512\\)) shown in green, we generate two degraded versions with factors of \\(2 / N, 1 / N\\) (e.g. sizes \\(256 \\times 256\\) and \\(128 \\times 128\\)), shown in yellow and red respectively. Both degraded images are resized back via bicubic upsampling to \\(512 \\times 512\\) to be used as input to the encoder, which projects them to \\(4 \\times 64 \\times 64\\) tensors. The less and more degraded LR image is used as input to the teacher and student respectively via concatenation with the noisy version of the HR image, i.e. \\(\\mathbf{z}_t\\). The teacher's output is used as the target for training the student. Note that the teacher is first trained independently for a smaller magnification scale and then frozen during student training." + }, + { + "type": "equation", + "bbox": [ + 0.324, + 0.548, + 0.786, + 0.574 + ], + "angle": 0, + "content": "\\[\n\\underset {\\theta} {\\operatorname {a r g m i n}} \\mathbb {E} _ {\\epsilon , t} [ \\omega (\\lambda_ {t}) | | \\hat {\\mathbf {z}} _ {\\theta} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l}, \\lambda_ {t}) - \\hat {\\mathbf {z}} _ {\\phi} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l} ^ {\\prime}, \\lambda_ {t}) | | _ {2} ^ {2} ] \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.584, + 0.788, + 0.703 + ], + "angle": 0, + "content": "where the teacher is trained for \\( N / 2 \\) magnification and frozen, and the student is initialized with the teacher's weights before the training. Note that we are using the latent diffusion framework that allows exactly the same architecture and input shapes for both the teacher and the student. Although the input low-resolution images for the student and teacher are of different sizes, they are both resized to a fixed size and fed to the encoder, which projects them to a tensor with a fixed size of \\( 4 \\times 64 \\times 64 \\). Fig. 2 illustrates the proposed scale distillation process." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.789, + 0.841 + ], + "angle": 0, + "content": "The idea of scale distillation is in line with that of progressive temporal distillation [26]. While a standard denoising model would only use the final image as the target irrespective of the sampled time step \\( t \\) (see Eq. 2), both scale and progressive temporal distillation rely on the teacher to provide a supervisory signal specific for step \\( t \\) (see Eq. 3). In this way, the supervisory signal is attuned to the specific denoising step, providing stable and consistent supervision at every denoising step. Fig. 3 provides empirical support for our hypothesis. We observe a significant gap between the distilled models from \\( \\times 2 \\) to \\( \\times 4 \\) and \\( \\times 2 \\) to \\( \\times 8 \\) compared to the models that are directly trained for \\( \\times 4 \\) and \\( \\times 8 \\), respectively." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.127 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "image", + "bbox": [ + 0.284, + 0.186, + 0.481, + 0.304 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.377, + 0.31, + 0.398, + 0.321 + ], + "angle": 0, + "content": "×4" + }, + { + "type": "image", + "bbox": [ + 0.512, + 0.187, + 0.711, + 0.304 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.609, + 0.31, + 0.629, + 0.321 + ], + "angle": 0, + "content": "×8" + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.331, + 0.788, + 0.444 + ], + "angle": 0, + "content": "Fig. 3: FID vs. number of DDIM steps on the DIV2K validation set obtained through bicubic degradation using SD for \\(\\times 4\\) and \\(\\times 8\\) magnifications trained with scale distillation and standard training. We use \\(\\times 2 \\rightarrow \\times 4\\) scale distillation for \\(\\times 4\\) and \\(\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8\\) for \\(\\times 8\\), and compare with the standard training directly for \\(\\times 4\\) and \\(\\times 8\\) respectively. All results are obtained using the original SD decoder. The model trained with scale distillation outperforms the standard training with large margin when using fewer steps for \\(\\times 4\\). The gap between scale distillation and the standard training is significantly higher for small \\(\\times 8\\) and remains steady for large numbers steps." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.473, + 0.785, + 0.504 + ], + "angle": 0, + "content": "The gap is especially striking when evaluated with few inference steps and, as expected, shrinks as the number of steps increases and quality saturates." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.505, + 0.788, + 0.641 + ], + "angle": 0, + "content": "Similar to the temporal progressive distillation [26], the proposed scale distillation process can be applied iteratively with higher scale factors at each training step. The first student is initialized from scratch and trained on the raw data, similar to the standard training. Consequently, this student becomes the new teacher for training the next scale factor. In this paper, we consider three distillation steps up to the scale factor of \\(\\times 8\\) starting from \\(\\times 2\\), i.e. \\(\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8\\). As it is shown in Fig. 3, scale distillation is significantly more effective for \\(\\times 8\\) magnification where the LR image is of even lower quality, thereby reinforcing the importance of our proposed progressive scale training strategy." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.663, + 0.427, + 0.679 + ], + "angle": 0, + "content": "3.3 Decoder fine-tuning" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.689, + 0.789, + 0.842 + ], + "angle": 0, + "content": "While scale distillation improves the one-step inference noticeably, there is still a gap between the one-step model and the saturated performance with a larger number of steps, see Fig. 3. To fill this gap, we propose to fine-tune the decoder on top of the frozen one-step diffusion model resulting from scale distillation. 
That is, after training the diffusion model, we freeze the U-Net, apply one DDIM step for a given LR image, and use it as input to fine-tune the decoder for the SR task. We use the original loss that has been used for training the autoencoder [22]. Importantly, this fine-tuning strategy with the U-Net in place is only possible with a diffusion model that can work properly with one step as enabled by our scale distillation approach; see Table. 3. We empirically show that the" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.193 + ], + "angle": 0, + "content": "combination of our scale distillation approach with decoder fine-tuning yields a one-step model that can readily compete with models requiring a large number of inference steps." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.206, + 0.788, + 0.283 + ], + "angle": 0, + "content": "Implementation details. We use Stable diffusion v1.5 as our backbone and initialize our teacher with the text-to-image model. We use our own implementation of the v-parameterization with a cosine schedule. We use 4 A100 GPUs for all our experiments and train with a batch size of 60 with a gradient accumulation factor of 4." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.304, + 0.376, + 0.321 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.334, + 0.788, + 0.427 + ], + "angle": 0, + "content": "In this section, we evaluate our YONOS-SR against other methods targeting real image super-resolution at the standard \\(\\times 4\\) scale factor in Sec. 4.1 and demonstrate that our proposed scale distillation approach generalizes to higher scale factors of \\(\\times 8\\) in Sec. 4.2. We then provide qualitative results for \\(\\times 4\\) and \\(\\times 8\\) in Sec. 4.3. Finally, we perform ablation studies to highlight the role of our main contributions in Sec. 4.4." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.447, + 0.607, + 0.463 + ], + "angle": 0, + "content": "4.1 Evaluation on real image super resolution" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.47, + 0.787, + 0.502 + ], + "angle": 0, + "content": "We begin by evaluating the performance of our proposed YONOS-SR model in the standard real image super-resolution setting targeting \\(\\times 4\\) scale factor." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.514, + 0.787, + 0.574 + ], + "angle": 0, + "content": "Datasets. Following previous work (e.g. [2,32,34,41]), we use DIV2K [1], DIV8K [7], Flickr2k [30], OST [36] and a subset of 10K images from FFHQ training set [13] to train our model. We adopt the Real-ESRGAN [34] degradation pipeline to generate synthetic LR-HR pairs." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.575, + 0.788, + 0.668 + ], + "angle": 0, + "content": "We then evaluate our model on both synthetic and real datasets. Similar to [32], we use 3K LR-HR (128 → 512) pairs synthesized from the DIV2K validation set using the Real-ESRGAN degradation pipeline as our synthetic dataset. We also report results on the standard DIV2K validation split with bicubic degradations for completeness. For the real dataset, we use \\(128 \\times 128\\) center crops from the RealSR [11], DRealSR [38] and DPED-iphone [10] datasets." 
+ }, + { + "type": "text", + "bbox": [ + 0.215, + 0.68, + 0.787, + 0.741 + ], + "angle": 0, + "content": "Evaluation metrics. We evaluate using various perceptual and image quality metrics, including LPIPS [43], FID [9] (where applicable), as well as the no-reference image quality metric, MUSIQ [14]. For the synthetic datasets, we also report standard PSNR and SSIM metrics, for reference." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.754, + 0.788, + 0.848 + ], + "angle": 0, + "content": "Baselines. As the main contribution of our paper targets improving the inference process of diffusion-based super-resolution, our main points of comparison are diffusion-based SR models, including the recent StableSR model [32], ReshShift [40], and the original LDM model [22]. For completeness, we also include comparison to other non-diffusion-based baselines, including; RealSR [11], BSRGAN [41], RealESRGAN [34], DASR [16] and FeMaSR [2]." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.128 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "table", + "bbox": [ + 0.22, + 0.144, + 0.778, + 0.27 + ], + "angle": 0, + "content": "
Datasets | Metrics | RealSR | BSRGAN | DASR | Real-ESRGAN+ | FeMaSR | LDM | ResShift | StableSR | YONOS (ours)
DIV2K Valid RealESRGAN degradations | FID ↓ | 49.49 | 44.22 | 49.16 | 37.64 | 35.87 | 26.47 | 30.45 | 24.44 | 21.86
LPIPS ↓ | 0.5276 | 0.3351 | 0.3543 | 0.3112 | 0.3199 | 0.2510 | 0.3076 | 0.3114 | 0.2310
PSNR ↑ | 24.62 | 24.58 | 24.47 | 24.28 | 23.06 | 23.32 | 24.62 | 23.26 | 24.74
SSIM ↑ | 0.5970 | 0.6269 | 0.6304 | 0.6372 | 0.5887 | 0.5762 | 0.6210 | 0.5726 | 0.6428
MUSIQ ↑ | 28.57 | 61.19 | 55.19 | 61.05 | 60.83 | 62.27 | 63.58 | 65.92 | 70.30
DIV2K Valid bicubic degradations | LPIPS ↓ | - | 0.2364 | 0.1696 | 0.2284 | - | 0.2323 | 0.1775 | 0.2580 | 0.1703
PSNR ↑ | - | 27.32 | 28.55 | 26.65 | - | 25.49 | 27.24 | 21.90 | 26.26
RealSR | LPIPS ↓ | 0.3570 | 0.2656 | 0.3134 | 0.2709 | 0.2937 | 0.3159 | 0.3279 | 0.3002 | 0.2479
MUSIQ ↑ | 38.26 | 63.28 | 41.21 | 60.36 | 59.06 | 58.90 | 59.87 | 65.88 | 69.21
DRealSR | LPIPS ↓ | 0.3938 | 0.2858 | 0.3099 | 0.2818 | 0.3157 | 0.3379 | 0.3870 | 0.3284 | 0.2721
MUSIQ ↑ | 26.93 | 57.16 | 42.41 | 54.26 | 53.71 | 53.72 | 54.13 | 58.51 | 66.26
DPED-iphone | MUSIQ ↑ | 45.60 | 45.89 | 32.68 | 42.42 | 49.95 | 44.23 | 38.59 | 50.48 | 59.45
- | # STEPS ↓ | - | - | - | - | - | 200 | 4 | 200 | 1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.271, + 0.785, + 0.311 + ], + "angle": 0, + "content": "Table 1: Comparison to baselines. Results in Red and Blue correspond to best and second best results, resp. Cells with - indicate that there were no previously reported results using the considered baseline and corresponding metric." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.345, + 0.785, + 0.451 + ], + "angle": 0, + "content": "Results. Results summarized in Tab. 1 show that YONOS-SR outperforms all other diffusion-based SR methods, while using only one inference step, whereas other alternatives use 200 inference steps. These results highlight the efficiency of YONOS-SR in reducing the number of steps to one without compromising performance but indeed improving it further. Also, our model outperforms all considered baselines in 5 out of 7 metrics on the synthetic data and all comparison points on the real datasets." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.471, + 0.57, + 0.487 + ], + "angle": 0, + "content": "4.2 Generalization to higher scale factors" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.494, + 0.787, + 0.675 + ], + "angle": 0, + "content": "We now evaluate the generalization capability of our proposed scale distillation approach. To this end, we train our YONOS-SR model with one more iteration of scale distillation, thereby going from a model capable of handling \\(\\times 4\\) magnifications to \\(\\times 8\\) magnifications. We then fine-tune the decoder on top of the one-step \\(\\times 8\\) diffusion model. To evaluate this model, we follow recent work [3], and evaluate on the same subset of ImageNet and FFHQ for \\(\\times 8\\) magnification, i.e. \\(64 \\times 64 \\rightarrow 512 \\times 512\\). In particular, we select the same 1k subset of ImageNet test set by first ordering the 10k images by name and then selecting the 1k subset via interleaved sampling, i.e. using images of index 0, 10, 20, etc. To obtain the LR-HR pairs, we use \\(\\times 8\\) average pooling degradations. In the case of FFHQ, we use the first 1k images of the validation set. We also evaluate using the same metrics and baselines reported in this recent work [3]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.676, + 0.787, + 0.766 + ], + "angle": 0, + "content": "The results summarized in Tab. 2 demonstrate that our proposed one-step method generalizes well to higher scale factors, where it is able to achieve good results in terms of FID and LPIPS scores, which are known to better align with human observation, especially at higher magnification factors [24]. Notably, unlike baselines, our model has not been trained on ImageNet data. We use only \\(10\\mathrm{k}\\) images of FFHQ in our training set." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.787, + 0.444, + 0.802 + ], + "angle": 0, + "content": "4.3 Qualitative evaluation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.785, + 0.84 + ], + "angle": 0, + "content": "In addition to extensive quantitative evaluations, we qualitatively compare one-step YONOS-SR with 200-step StableSR and standard diffusion-based SR (SD-" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "image", + "bbox": [ + 0.245, + 0.171, + 0.366, + 0.265 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.296, + 0.268, + 0.317, + 0.282 + ], + "angle": 0, + "content": "(a)" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.173, + 0.488, + 0.265 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.418, + 0.268, + 0.439, + 0.282 + ], + "angle": 0, + "content": "(b)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.173, + 0.61, + 0.265 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.268, + 0.56, + 0.282 + ], + "angle": 0, + "content": "(c)" + }, + { + "type": "image", + "bbox": [ + 0.612, + 0.173, + 0.732, + 0.266 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.662, + 0.268, + 0.682, + 0.282 + ], + "angle": 0, + "content": "(d)" + }, + { + "type": "image", + "bbox": [ + 0.245, + 0.31, + 0.365, + 0.402 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.296, + 0.406, + 0.316, + 0.419 + ], + "angle": 0, + "content": "(a)" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.31, + 0.488, + 0.402 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.418, + 0.406, + 0.438, + 0.418 + ], + "angle": 0, + "content": "(b)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.31, + 0.609, + 0.402 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.406, + 0.559, + 0.418 + ], + "angle": 0, + "content": "(c)" + }, + { + "type": "image", + "bbox": [ + 0.612, + 0.31, + 0.732, + 0.402 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.662, + 0.406, + 0.682, + 0.418 + ], + "angle": 0, + "content": "(d)" + }, + { + "type": "image", + "bbox": [ + 0.245, + 0.447, + 0.365, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.296, + 0.543, + 0.316, + 0.557 + ], + "angle": 0, + "content": "(a)" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.447, + 0.488, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.418, + 0.543, + 0.438, + 0.557 + ], + "angle": 0, + "content": "(b)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.447, + 0.61, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.543, + 0.559, + 0.557 + ], + "angle": 0, + "content": "(c)" + }, + { + "type": "image", + "bbox": [ + 0.612, + 0.447, + 0.732, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.662, + 0.543, + 0.682, + 0.557 + ], + "angle": 0, + "content": "(d)" + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.567, + 0.788, + 0.664 + ], + "angle": 0, + "content": "Fig. 
4: Qualitative comparison on the validation set of DIV2K dataset: (a) 200-step StableSR (b) 200-step standard SD-SR (c) 1-step YONOS(ours) (d) the ground truth. SD-SR represents the standard Stable Diffusion-based SR model. 200-step StableSR and SD-SR tend to over-sharpen, adding artifacts that do not match the ground truth content. Our SR images match the most with the corresponding ground truth image; see the faces, Pepsi, and crocodile textures in the first, second, and third rows, respectively. The images are best seen in a display and zoomed in." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.685, + 0.788, + 0.745 + ], + "angle": 0, + "content": "SR) in Fig. 4. Our method generates the closest SR images to the ground truth in terms of detailed textures while taking only 1-step during the inference. These observations are in line with the numerical superiority of our method in the quantitative evaluations above." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.788, + 0.84 + ], + "angle": 0, + "content": "As it is clearly demonstrated in Fig. 3, scale distillation is even more effective for \\(\\times 8\\) compared to \\(\\times 4\\) magnification. As a qualitative support, we compare the model trained directly for \\(\\times 8\\) magnification without scale distillation to our model trained with three iterations of scale distillation \\(\\times 2\\rightarrow \\times 4\\rightarrow \\times 8\\) in Fig. 5. Again, we use the validation set of DIV2K dataset. In line with the numerical analyses in Fig. 3, we observe that the model trained with scale distillation out-" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.127 + ], + "angle": 0, + "content": "M. Noroozi et al." 
+ }, + { + "type": "image", + "bbox": [ + 0.218, + 0.181, + 0.338, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.261, + 0.279, + 0.295, + 0.293 + ], + "angle": 0, + "content": "(LR)" + }, + { + "type": "image_caption", + "bbox": [ + 0.344, + 0.179, + 0.361, + 0.273 + ], + "angle": 0, + "content": "8 8" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.181, + 0.489, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.396, + 0.279, + 0.461, + 0.293 + ], + "angle": 0, + "content": "(64 steps)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.183, + 0.61, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.519, + 0.279, + 0.577, + 0.293 + ], + "angle": 0, + "content": "(4 steps)" + }, + { + "type": "image", + "bbox": [ + 0.612, + 0.183, + 0.73, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.279, + 0.696, + 0.293 + ], + "angle": 0, + "content": "(1 step)" + }, + { + "type": "image", + "bbox": [ + 0.217, + 0.319, + 0.339, + 0.413 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.261, + 0.416, + 0.296, + 0.43 + ], + "angle": 0, + "content": "(HR)" + }, + { + "type": "image_caption", + "bbox": [ + 0.34, + 0.319, + 0.368, + 0.404 + ], + "angle": 0, + "content": "eannnnnne" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.32, + 0.489, + 0.413 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.396, + 0.416, + 0.461, + 0.43 + ], + "angle": 0, + "content": "(64 steps)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.32, + 0.609, + 0.413 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.519, + 0.416, + 0.577, + 0.43 + ], + "angle": 0, + "content": "(4 steps)" + }, + { + "type": "image", + "bbox": [ + 0.611, + 0.32, + 0.73, + 0.413 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.416, + 0.696, + 0.43 + ], + "angle": 0, + "content": "(1 step)" + }, + { + "type": "image", + "bbox": [ + 0.217, + 0.455, + 0.339, + 0.55 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.261, + 0.553, + 0.295, + 0.567 + ], + "angle": 0, + "content": "(LR)" + }, + { + "type": "image_caption", + "bbox": [ + 0.344, + 0.454, + 0.362, + 0.548 + ], + "angle": 0, + "content": "aee" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.457, + 0.489, + 0.55 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.396, + 0.553, + 0.461, + 0.567 + ], + "angle": 0, + "content": "(64 steps)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.457, + 0.609, + 0.55 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.519, + 0.553, + 0.577, + 0.567 + ], + "angle": 0, + "content": "(4 steps)" + }, + { + "type": "image", + "bbox": [ + 0.611, + 0.457, + 0.73, + 0.55 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.553, + 0.696, + 0.567 + ], + "angle": 0, + "content": "(1 step)" + }, + { + "type": "image", + "bbox": [ + 0.217, + 0.593, + 0.339, + 0.686 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.261, + 0.69, + 0.295, + 0.704 + ], + "angle": 0, + "content": "(HR)" + }, + { + "type": "image_caption", + "bbox": [ + 0.34, + 0.593, + 0.368, + 0.679 + ], + "angle": 0, + "content": 
"Scale distillation \\(\\times 2\\uparrow \\uparrow \\times 4\\times 8\\)" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.594, + 0.489, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.396, + 0.69, + 0.461, + 0.704 + ], + "angle": 0, + "content": "(64 steps)" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.594, + 0.609, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.519, + 0.69, + 0.577, + 0.704 + ], + "angle": 0, + "content": "(4 steps)" + }, + { + "type": "image", + "bbox": [ + 0.611, + 0.594, + 0.73, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.69, + 0.696, + 0.704 + ], + "angle": 0, + "content": "(1 step)" + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.714, + 0.788, + 0.826 + ], + "angle": 0, + "content": "Fig. 5: Qualitative comparison on the validation set of DIV2K dataset for \\(\\times 8\\) magnification when the model is trained directly for \\(\\times 8\\) magnification without scale distillation (top row) and with three iterations of scale distillation \\(\\times 2\\rightarrow \\times 4\\rightarrow \\times 8\\) (bottom row). We show the input LR image results with 1, 4, and 64 steps using the original decoder and the corresponding HR image for both models. The model trained with scale distillation outperforms the standard training with high margins. Specifically, due to poor LR input, the standard training fails to recover the relevant content. The images are best seen in a display and zoomed in." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "table", + "bbox": [ + 0.292, + 0.144, + 0.71, + 0.252 + ], + "angle": 0, + "content": "
<table><tr><td></td><td colspan="3">ImageNet</td><td colspan="3">FFHQ</td></tr><tr><td></td><td>FID ↓</td><td>LPIPS ↓</td><td>PSNR ↑</td><td>FID ↓</td><td>LPIPS ↓</td><td>PSNR ↑</td></tr><tr><td>LDPS</td><td>61.09</td><td>0.475</td><td>23.21</td><td>36.81</td><td>0.292</td><td>28.78</td></tr><tr><td>GML-DPS [23]</td><td>60.36</td><td>0.456</td><td>23.21</td><td>41.65</td><td>0.318</td><td>28.50</td></tr><tr><td>PSLD [23]</td><td>60.81</td><td>0.471</td><td>23.17</td><td>36.93</td><td>0.335</td><td>26.62</td></tr><tr><td>LDIR [8]</td><td>63.46</td><td>0.480</td><td>22.23</td><td>36.04</td><td>0.345</td><td>25.79</td></tr><tr><td>P2L [3]</td><td>51.81</td><td>0.386</td><td>23.38</td><td>31.23</td><td>0.290</td><td>28.55</td></tr><tr><td>YONOS (ours)</td><td>34.59</td><td>0.241</td><td>22.80</td><td>21.41</td><td>0.161</td><td>26.08</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.253, + 0.784, + 0.281 + ], + "angle": 0, + "content": "Table 2: Comparison to baselines on ImageNet subset with x8 magnification factor. The results for other methods are taken from [3]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.288, + 0.785, + 0.363 + ], + "angle": 0, + "content": "performs the standard training in terms of recovering the corresponding content and details. Note that, the problem of \\(\\times 8\\) magnification is of significantly higher complexity compared to \\(\\times 4\\) due to poor LR input. Notable for these \\(\\times 8\\) qualitative evaluations we use the original decoder (i.e. these results are obtained before the decoder finetuning stage) to emphasize the impact of scale distillation." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.38, + 0.386, + 0.396 + ], + "angle": 0, + "content": "4.4 Ablation study" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.404, + 0.785, + 0.554 + ], + "angle": 0, + "content": "We now study the impact of the various components introduced in our work. To this end, we use the standard DIV2K validation set with \\(\\times 4\\) low-resolution images obtained through bicubic degradation [1]. We use the FID metric as it is a standard metric for assessing the quality of generative models. Our initial investigation also revealed that FID correlates the most with the human evaluation of the generated images. The validation set of the DIV2K dataset includes only 100 samples. To obtain more reliable FID scores, we extract 30 random \\(128 \\times 128\\) patches and their corresponding \\(512 \\times 512\\) HR counterparts from each image in the standard DIV2K bicubic validation set, resulting in a total of 3k LR-HR pairs. For completeness, we also report LPIPS, PSNR, and SSIM scores." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.569, + 0.785, + 0.78 + ], + "angle": 0, + "content": "Impact of scale distillation. We begin by evaluating the impact of our proposed scale distillation on speeding up inference time. To this end, we run two stable diffusion (SD) models trained for \\(\\times 4\\) super-resolution (SR), with various numbers of inference steps. The first model is a standard SD super-resolution model trained directly for target \\(\\times 4\\) super-resolution (i.e. SD-SR), while the second model is trained with our proposed scale distillation from \\(\\times 2\\) magnification to \\(\\times 4\\). We use the same model, training set, and degradation pipeline in training both models. The only difference is the use of our scale distillation in the later model. Specifically, we start with training a teacher for \\(\\times 2\\) magnification using raw data as a denoising target. We use the \\(\\times 2\\) model as a frozen teacher and use its prediction to train a student for \\(\\times 4\\) magnification. The results summarized in Fig. 3 speaks decisively in favor of our scale distillation approach. We can see that the model trained with the proposed scale distillation performs significantly better than direct \\(\\times 4\\) training when using only one step." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.785, + 0.84 + ], + "angle": 0, + "content": "Scale distillation outperforms the standard training more significantly for \\(\\times 8\\) magnification where we perform three training iterations for scale distillation, i.e. \\(\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8\\). 
One reason for the larger gap for \\(\\times 8\\) magnification is that the SR task is more ambiguous for \\(\\times 8\\) magnification due to lower quality input." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.128 + ], + "angle": 0, + "content": "M. Noroozi et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.208 + ], + "angle": 0, + "content": "As a result, the model benefits more from the more simplified supervisory signal obtained from scale distillation. Note that we use the original SD decoder (i.e. no decoder finetuning) for this experiment to analyze the impact of the scale distillation independently of decoder fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.226, + 0.496, + 0.437 + ], + "angle": 0, + "content": "Impact of decoder fine-tuning. One of the direct consequences of having a diffusion model that can yield good results in one denoising step is that it allows for decoder fine-tuning with the U-Net in place, as it will directly give a good starting point to the decoder. To validate the importance of the input given to the decoder prior to fine-tuning and, thereby, the importance of YONOS-SR, we experiment with the standard SD-SR model and our scale distillation model. In both cases, we freeze the U-Net and only allow the" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.438, + 0.787, + 0.468 + ], + "angle": 0, + "content": "models to do 1 denoising step. We then feed their output to the decoder and fine-tune it following the same loss used in the original stable diffusion model [22]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.469, + 0.788, + 0.665 + ], + "angle": 0, + "content": "The results summarized in Tab. 3 validate the importance of having a good initial input from the diffusion model prior to decoder fine-tuning. The left chunk shows that the model trained with scale distillation outperforms the standard training with a good margin when using the original decoder, indicating that the scale distillation results in a U-Net that provides a higher quality input for the decoder. Moreover, as we can see in the right chunk of Tab. 3, fine-tuning the decoder on top of both 1-step models improves the performance. However, the model with scale distillation yields significantly better results than the standard SD-SR directly trained for the target magnification. Once again, the impact of scale distillation is more sensible for \\(\\times 8\\) magnification than \\(\\times 4\\), which highlights the importance of our approach in such difficult settings. Importantly, this fine-tuning strategy is not computationally feasible with diffusion models that require many denoising steps to give a reasonable starting point for the decoder." + }, + { + "type": "table", + "bbox": [ + 0.534, + 0.238, + 0.758, + 0.352 + ], + "angle": 0, + "content": "
<table><tr><td>Decoder</td><td colspan="2">Original</td><td colspan="2">Fine-tuned</td></tr><tr><td>Scale distillation</td><td>✗</td><td>✓</td><td>✗</td><td>✓</td></tr><tr><td colspan="5">\\(\\times 4\\)</td></tr><tr><td>FID ↓</td><td>27.93</td><td>23.96</td><td>16.26</td><td>15.54</td></tr><tr><td>LPIPS ↓</td><td>0.227</td><td>0.207</td><td>0.163</td><td>0.159</td></tr><tr><td>PSNR ↑</td><td>25.94</td><td>26.24</td><td>25.73</td><td>26.30</td></tr><tr><td>SSIM ↑</td><td>0.711</td><td>0.714</td><td>0.713</td><td>0.727</td></tr><tr><td colspan="5">\\(\\times 8\\)</td></tr><tr><td>FID ↓</td><td>102.92</td><td>66.90</td><td>41.54</td><td>28.47</td></tr><tr><td>LPIPS ↓</td><td>0.541</td><td>0.403</td><td>0.305</td><td>0.243</td></tr><tr><td>PSNR ↑</td><td>21.08</td><td>24.46</td><td>21.53</td><td>23.96</td></tr><tr><td>SSIM ↑</td><td>0.541</td><td>0.647</td><td>0.528</td><td>0.632</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.504, + 0.363, + 0.788, + 0.406 + ], + "angle": 0, + "content": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.688, + 0.36, + 0.705 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.84 + ], + "angle": 0, + "content": "In summary, in this paper, we introduced the first fast stable diffusion-based super-resolution method. To achieve this, we introduced scale distillation, an approach that allows us to tackle the SR problem in as little as one step. Having a fast diffusion model allowed us to directly fine-tune the decoder, which we show yields state-of-the-art results, even at high magnification factors and only using a single step. We hope that the proposed distillation approach could be adapted for other inverse imaging problems (e.g. image inpainting), which we believe is an interesting direction for future research." + }, + { + "type": "table_caption", + "bbox": [ + 0.504, + 0.363, + 0.788, + 0.406 + ], + "angle": 0, + "content": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.323, + 0.16 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.179, + 0.785, + 0.221 + ], + "angle": 0, + "content": "1. Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.223, + 0.785, + 0.263 + ], + "angle": 0, + "content": "2. Chen, C., Shi, X., Qin, Y., Li, X., Han, X., Yang, T., Guo, S.: Real-world blind super-resolution via feature matching with implicit high-resolution priors. In: ACM International Conference on Multimedia (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.265, + 0.785, + 0.292 + ], + "angle": 0, + "content": "3. Chung, H., Ye, J.C., Milanfar, P., Delbracio, M.: Prompt-tuning latent diffusion models for inverse problems. In: arXiv preprint arXiv: 2310.01110 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.294, + 0.785, + 0.32 + ], + "angle": 0, + "content": "4. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.322, + 0.785, + 0.362 + ], + "angle": 0, + "content": "5. Fritsche, M., Gu, S., Timofte, R.: Frequency separation for real-world superresolution. In: IEEE International Conference on Computer Vision - Workshops (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.364, + 0.785, + 0.404 + ], + "angle": 0, + "content": "6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances on Neural Information Processing Systems (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.406, + 0.785, + 0.446 + ], + "angle": 0, + "content": "7. 
Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., Timofte, R.: Div8k: Diverse 8k resolution image dataset. In: IEEE International Conference on Computer Vision - Workshops (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.448, + 0.785, + 0.488 + ], + "angle": 0, + "content": "8. He, L., Yan, H., Luo, M., Luo, K., Wang, W., Du, W., Chen, H., Yang, H., Zhang, Y.: Iterative reconstruction based on latent diffusion model for sparse data reconstruction. In: arXiv preprint arXiv:2307.12070 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.49, + 0.785, + 0.531 + ], + "angle": 0, + "content": "9. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Advances on Neural Information Processing Systems (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.533, + 0.785, + 0.573 + ], + "angle": 0, + "content": "0. Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Gool, L.V.: Dslr-quality photos on mobile devices with deep convolutional networks. In: IEEE International Conference on Computer Vision (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.575, + 0.785, + 0.614 + ], + "angle": 0, + "content": "1. Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., Huang, F.: Real-world super-resolution via kernel estimation and noise injection. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.616, + 0.785, + 0.657 + ], + "angle": 0, + "content": "2. Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., Mitliagkas, I.: Gotta go fast when generating data with score-based models. In: arXiv preprint arXiv:2105.14080 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.66, + 0.785, + 0.699 + ], + "angle": 0, + "content": "3. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.701, + 0.785, + 0.728 + ], + "angle": 0, + "content": "4. Ke, J., Wang, Q., Wang, Y., Milanfar, P., Yan, F.: Musiq: Multi-scale image quality transformer. In: IEEE International Conference on Computer Vision (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.729, + 0.785, + 0.77 + ], + "angle": 0, + "content": "5. Liang, J., Zhang, K., Gu, S., Van Gool, L., Timofte, R.: Flow-based kernel prior with application to blind superresolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.772, + 0.785, + 0.811 + ], + "angle": 0, + "content": "6. Liang, J., Zeng, H., Zhang, L.: Efficient and degradation-adaptive network for real-world image super-resolution. In: European Conference on Computer Vision (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "7. Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image superresolution: A survey and beyond. In: arXiv preprint arXiv:2107.03055 (2021)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.179, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.388, + 0.128 + ], + "angle": 0, + "content": "M. 
Noroozi et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.785, + 0.189 + ], + "angle": 0, + "content": "18. Lu, C., Zhou, Y., Bao, F., Chen, J., LI, C., Zhu, J.: Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In: Advances on Neural Information Processing Systems (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.785, + 0.232 + ], + "angle": 0, + "content": "19. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., Zhu, J.: Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. In: arxiv prepring arxiv: 2211.01095 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.232, + 0.785, + 0.259 + ], + "angle": 0, + "content": "20. Maeda, S.: Unpaired image super-resolution using pseudo-supervision. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.26, + 0.785, + 0.3 + ], + "angle": 0, + "content": "21. Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., Salimans, T.: On distillation of guided diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.301, + 0.785, + 0.342 + ], + "angle": 0, + "content": "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.343, + 0.785, + 0.383 + ], + "angle": 0, + "content": "23. Rout, L., Raoof, N., Daras, G., Caramanis, C., and Sanjay Shakkottai, A.G.D.: Solving linear inverse problems provably via posterior sampling with latent diffusion models. In: NeurIPS (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.384, + 0.785, + 0.424 + ], + "angle": 0, + "content": "24. Sahak, H., Watson, D., Sahara, C., Fleet, D.: Denoising diffusion probabilistic models for robust image super-resolution in the wild. In: arXiv preprint arXiv: 2302.07864 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.425, + 0.785, + 0.452 + ], + "angle": 0, + "content": "25. Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D.J., Norouzi, M.: Image superresolution via iterative refinement. preprint arXiv: 2104.07636 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.453, + 0.785, + 0.48 + ], + "angle": 0, + "content": "26. Salimans, T., Ho, J.: Progressive distillation for fast sampling of diffusion models. In: International Conference on Learning Representations (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.481, + 0.785, + 0.508 + ], + "angle": 0, + "content": "27. Shocher, A., Cohen, N., Irani, M.: \"zero-shot\" superresolution using deep internal learning. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.509, + 0.785, + 0.536 + ], + "angle": 0, + "content": "28. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.537, + 0.785, + 0.563 + ], + "angle": 0, + "content": "29. Song, Y., Dhariwal, P., Chen, M., Sutskever, I.: Consistency models. arXiv preprint arXiv:2303.01469 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.565, + 0.785, + 0.605 + ], + "angle": 0, + "content": "30. 
Timofte, R., Agustsson, E., Gool, L.V., Yang, M., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.606, + 0.785, + 0.646 + ], + "angle": 0, + "content": "31. Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., Wen, F.: Bringing old photos back to life. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.647, + 0.785, + 0.674 + ], + "angle": 0, + "content": "32. Wang, J., Yue, Z., Zhou, S., Chan, K.C., Loy, C.C.: Exploiting diffusion prior for real-world image super-resolution. In: arXiv preprint arXiv:2305.07015 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.675, + 0.785, + 0.715 + ], + "angle": 0, + "content": "33. Wang, L., Wang, Y., Dong, X., Xu, Q., Yang, J., An, W., Guo, Y.: Unsupervised degradation representation learning for blind superresolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.716, + 0.785, + 0.757 + ], + "angle": 0, + "content": "34. Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: IEEE International Conference on Computer Vision - Workshops (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.758, + 0.785, + 0.799 + ], + "angle": 0, + "content": "35. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.8, + 0.785, + 0.84 + ], + "angle": 0, + "content": "36. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.65, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "YONOS-SR" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "37. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: Enhanced super-resolution generative adversarial networks. In: European Conference on Computer Vision - Workshops (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.19, + 0.788, + 0.231 + ], + "angle": 0, + "content": "38. Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., Lin, L.: Component divide-and-conquer for real-world image super-resolution. In: European Conference on Computer Vision (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.231, + 0.788, + 0.272 + ], + "angle": 0, + "content": "39. Yan, Y., Liu, C., Chen, C., Sun, X., Jin, L., Peng, X., Zhou, X.: Fine-grained attention and feature-sharing generative adversarial networks for single image superresolution. In: IEEE Transactions on Multimedia (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.272, + 0.788, + 0.3 + ], + "angle": 0, + "content": "40. 
Yue, Z., Wang, J., Change Loy, C.: Ressift: Efficient diffusion model for image super-resolution by residual shifting. In: NeurIPS (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.3, + 0.788, + 0.342 + ], + "angle": 0, + "content": "41. Zhang, K., Liang, J., Van Gool, L., Timofte, R.: Designing a practical degradation model for deep blind image super-resolution. In: IEEE International Conference on Computer Vision (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.342, + 0.788, + 0.37 + ], + "angle": 0, + "content": "42. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: IEEE International Conference on Computer Vision (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.37, + 0.788, + 0.411 + ], + "angle": 0, + "content": "43. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.411, + 0.788, + 0.453 + ], + "angle": 0, + "content": "44. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (2017)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.453 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_origin.pdf b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2db7cf192100233803252b5ad7fb78d2ab615798 --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/7a33cdc6-3a74-416b-8ff2-7188fb393357_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9780bbf04fbe3039e31516d3f5182596a92e0d3e0fb3f3e3bb14390509c146a2 +size 23856684 diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/full.md b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d0bb32864fdd710c3c659d7188e206d91c51cae8 --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/full.md @@ -0,0 +1,311 @@ +# You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation + +Mehdi Noroozi, Isma Hadji, Brais Martinez, Adrian Bulat, and Georgios Tzimiropoulos + +Samsung AI Cambridge {m.noroozi,isma.hadji}@samsung.com + +Abstract. In this paper, we introduce YONOS-SR, a novel stable diffusion based approach for image super-resolution that yields state-of-the-art results using only a single DDIM step. Specifically, we propose a novel scale distillation approach to train our SR model. Instead of directly training our SR model on the scale factor of interest, we start by training a teacher model on a smaller magnification scale, thereby making the SR problem simpler for the teacher. We then train a student model for a higher magnification scale, using the predictions of the teacher as a target during the training. 
This process is repeated iteratively until we reach the target scale factor of the final model. The rationale behind our scale distillation is that the teacher aids the student diffusion model training by i) providing a target adapted to the current noise level rather than using the same target coming from ground truth data for all noise levels and ii) providing an accurate target as the teacher has a simpler task to solve. We empirically show that the distilled model significantly outperforms the model trained for high scales directly, especially with few steps during inference. Having a strong diffusion model that requires only one step allows us to freeze the U-Net and fine-tune the decoder on top of it. We show that the combination of spatially distilled U-Net and fine-tuned decoder outperforms state-of-the-art methods requiring 200 steps with only one single step. $^{1}$ + +# 1 Introduction + +Diffusion models have shown impressive performance in various image generation tasks [22, 42], including image super-resolution (SR) [3, 24, 25, 32]. However, the large number of sequential denoising passes required by the sampling strategy results in extreme computational cost, even for stable diffusion-based models (SD) that operate in the latent space of an autoencoder. Recently, several approaches have been proposed to reduce the number of sampling steps [18, 26, 28, 29]. Unfortunately, such approaches usually compromise performance, especially for the lower number of steps. + +![](images/e0877620c059e60467c9cf464af9e74fd37a5664c311d4207ed7b9f81f8b29cd.jpg) +Fig. 1: Qualitative comparison for $\times 4$ and $\times 8$ magnifications. Each column shows top to bottom LR input image, 1 and 200 step SD-SR, 1-step YONOS-SR(ours). SD-SR represents the standard Stable Diffusion-based SR model. The 1-step SD-SR method lacks quality in terms of detailed textures compared to 200-steps of the same model; see building texture in the first column and hairs in the middle column. In contrast, our method outperforms 200-steps SD-SR with only one step, especially for $\times 8$ magnification where SD-SR fails to recover the details even with 200 steps. Samples are taken from DIV2K validation set. Images are best seen in a display and zoomed in. + +Typically, diffusion-based models yield the best results on image patches of similar sizes to those seen during training (e.g. $64 \times 64$ for SD [22]). On the other hand, super-resolution applications require operating in high-resolution settings, drastically exacerbating the computational issues of diffusion-based models. For example, a SR model that aims for a magnification of $\times 4$ going from $256 \times 256$ to $1024 \times 1024$ requires dividing the input image into 16 patches of $64 \times 64$ and running the model on each patch individually, making a large number of steps prohibitive for realistic use cases. Using state-of-the-art step-reduction strategy, such as more efficient samplers [18, 19, 28] can partially alleviate this issue but still falls widely short of practical needs. For example, going down to the target of 1 DDIM step results in a significant drop in performance compared to a typical model that does 200 inference steps, as shown in Fig. 1. + +One differentiating characteristic of the super-resolution task is that it is conditioned on the low-resolution (LR) input image to yield the target high-resolution (HR) image. 
Unlike the task of text-to-image generation, which relies on text conditioning, the LR image provides closer content to the target HR image, especially at lower scale factors. Therefore, conditioning the diffusion model on the LR image at low-scale factors makes the task inherently simpler for the diffusion model. In this paper, we take advantage of this peculiarity and introduce a novel training strategy dubbed scale distillation. While typical diffusion-based SR methods train the model for super-resolution by conditioning directly on the LR image at the target scale factor, we instead propose a progressive training approach, where we start by training a model for lower scale factors (i.e. where the conditioning signal is closer to the target) and progressively increase to the target scale factor using the previously trained model as a teacher. + +More specifically, instead of using the raw data to train a model for large scale factors, scale distillation obtains a rich and accurate supervisory signal from a teacher trained for a smaller scale factor. We first train a teacher that takes a less degraded image as input and, therefore, has an easier task to solve during training. Then, we train a model for a larger scale factor as a student while initializing it with the same weights as the teacher, which is now frozen. For a given time step during the training, we feed both teacher and student with the same noisy version of the HR image. However, we condition the teacher with the less degraded LR image (i.e. using the same scale that was used during teacher training), while we condition the student on the target (more degraded) LR image. We then use the teacher's prediction as a target to train the student. + +This training strategy has two direct advantages: i) Unlike typical training where the supervisory signal is somewhat ambiguous as the target is the same for all noise levels, our student receives its target from the teacher and is therefore adaptive to the noise level. ii) The target is more accurate, especially in terms of the finer detail, because the teacher takes a less degraded LR image as input. + +The proposed scale distillation approach allows the model to solve the SR task in fewer steps as we have simplified the task for the student. In fact, we show that models trained with our approach improve significantly when a few steps are used during the inference, e.g. one step, see Fig. 3. Therefore, a direct + +advantage of the proposed approach is that fine-tuning the decoder directly on top of the diffusion model becomes computationally tractable due to the single inference step required. Taking advantage of this fine-tuning, we show that You Only Need One Step (YONOS)-SR outperforms state-of-the-art diffusion-based SR methods that require a large number (e.g. 200) of inference steps. + +In summary, our contributions are threefold: I) We introduce scale distillation to train SD models with a more accurate and fine supervisory signal for image super-resolution tasks. II) We show that our proposed scale distillation strategy yields more efficient SD models that allow for directly fine-tuning the decoder on top of a frozen one-step diffusion model. III) We show that combining scale distillation followed by decoder fine-tuning yields state-of-the-art results on the SR task, even at high magnification factors, while requiring only one step. + +# 2 Related work + +Real image super-resolution. 
Image super-resolution entails restoring a High Resolution (HR) image given its Low Resolution (LR) observation. Solving this task for real images is especially challenging given the dramatic differences in real-world image distributions [10, 11, 17, 38]. These differences arise from varying image degradation processes, different imaging devices, and image signal processing methods, all of which are difficult to properly model and generalize. For this reason, real image super-resolution (or blind super-resolution) has received significant interest among the research community [11, 16, 32-34, 37, 38, 41]. While some methods attempt to learn the degradation process [5, 20, 31, 39], their success remains limited due to the lack of proper large scale training data [17], even while using some unsupervised methods [44]. In contrast, more popular approaches tackle the problem by explicitly modeling the degradation pipeline to create synthetic LR-HR pairs to use for training [15, 27, 34, 41]. Given, the wider success of the explicit degradation modeling approach, we elect to rely on the widely used RealESRGAN degradation pipeline [34] in training our model. + +Diffusion-based super-resolution. Since the early SRCNN [4] method, many deep learning-based solutions for blind super-resolution have been proposed [2, 11, 22, 24, 25, 34, 37, 41, 44]. Early work took advantage of this space by using semantic segmentation probability maps for guiding SR [35]. Most recent methods aim at taking advantage of learned generative priors to simplify the inverse imaging problem of blind image super-resolution. Usually, methods following this paradigm [34, 37, 41] rely on GANs [6]. More recently, diffusion models showed remarkable generative capabilities yielding impressive results across a range of applications [22, 42]. As such, in this paper, we follow several recent works [22, 24, 25, 32] and rely on diffusion-based generative models to tackle the super-resolution problem. While diffusion-based models achieve impressive results, their main shortcoming is the long inference time. Diffusion-based models require several inference steps through the model to yield a final output, thereby limiting their practical use. Therefore, in this paper, we tackle the important + +problem of speeding up the inference of diffusion-based super-resolution. + +Guided distillation. Recognizing the inference speed shortcoming of diffusion models, several works have been proposed recently to address this issue [18, 19, 21, 26, 28]. These methods can be categorized into two main tacks. One approach tackles this problem at inference time by either proposing more efficient samplers [12, 28] or relying on higher-order solvers [18, 19]. More closely related to ours are methods that aim at directly training a diffusion model that can solve the generative problem at hand in fewer steps through temporal distillation [21, 26, 29]. Our method tackles the problem at training time as well but we propose scale distillation. Our main idea is to reduce the inference speed by progressively making the generative problem easier during training. Notably, our approach is orthogonal to temporal distillation and can be used in tandem with it. + +# 3 YONOS-SR + +In this section, we describe YONOS-SR, our diffusion-based model for image super-resolution. First, we present an overview of the image super-resolution framework with the latent diffusion models in Sec. 3.1. 
We then discuss our proposed scale distillation method, which allows us to improve performance with fewer sampling steps, e.g. a single step, in Sec. 3.2. Finally, in Sec. 3.3, we discuss how the 1-step diffusion model allows for fine-tuning a decoder directly on top of the diffusion model, with a frozen U-Net.

# 3.1 Super-resolution with latent diffusion models

Given a training set in the form of pairs of low- and high-resolution images $(\mathbf{x}_h,\mathbf{x}_l)\sim p(\mathbf{x}_h,\mathbf{x}_l)$, the task of image super-resolution involves estimating the probability distribution $p(\mathbf{x}_h|\mathbf{x}_l)$. The stable diffusion framework uses a probabilistic diffusion model applied on the latent space of a pre-trained and frozen autoencoder. Let $\mathbf{z}_h = \mathcal{E}(\mathbf{x}_h)$ and $\mathbf{z}_l = \mathcal{E}(\mathbf{x}_l)$ be the projections of a given pair of high- and low-resolution images $(\mathbf{x}_h,\mathbf{x}_l)$, where $\mathcal{E}$ is the pre-trained encoder. The forward process of the diffusion model, $q(\mathbf{z}|\mathbf{z}_h)$, is a Markovian Gaussian process defined as

$$
q\left(\mathbf{z}_{t} \mid \mathbf{z}_{h}\right) = \mathcal{N}\left(\mathbf{z}_{t}; \alpha_{t} \mathbf{z}_{h}, \sigma_{t} \mathbf{I}\right), \quad \mathbf{z} = \left\{\mathbf{z}_{t} \mid t \in [0, 1]\right\} \tag{1}
$$

where $\mathbf{z}$ denotes the latent variable of the diffusion model and $\alpha_{t},\sigma_{t}$ define the noise schedule such that the log signal-to-noise ratio, $\lambda_t = \log [\alpha_t^2 /\sigma_t^2]$, decreases monotonically with $t$. During training, the model learns to progressively reverse this diffusion process, i.e. estimate $p(\mathbf{z}_{t - 1}|\mathbf{z}_t)$, to generate new data from noise.

The super-resolution objective is derived by maximizing a variational lower bound of the data log-likelihood $p(\mathbf{z}_h|\mathbf{z}_l)$ via approximating the backward denoising process $p(\mathbf{z}_h|\mathbf{z}_t,\mathbf{z}_l)$. Note that, for super-resolution, the denoising process is conditioned on the low-resolution input $\mathbf{z}_l$ as well. This process can be estimated by the function $\hat{\mathbf{z}}_{\theta}(\mathbf{z}_t,\mathbf{z}_l,\lambda_t)$, parametrized by a neural network, which we train via a weighted mean squared error loss:

$$
\underset{\theta}{\operatorname{argmin}}\; \mathbb{E}_{\epsilon, t}\left[\omega(\lambda_{t}) \left\|\hat{\mathbf{z}}_{\theta}(\mathbf{z}_{t}, \mathbf{z}_{l}, \lambda_{t}) - \mathbf{z}_{h}\right\|_{2}^{2}\right] \tag{2}
$$

over uniformly sampled times $t \in [0,1]$ and $\mathbf{z}_t = \alpha_t \mathbf{z}_h + \sigma_t \epsilon$, $\epsilon \sim \mathcal{N}(0,I)$. There are several choices for the weighting function $\omega(\lambda_t)$; we use the so-called v-parameterization [26], $\omega(\lambda_t) = 1 + \frac{\alpha_t^2}{\sigma_t^2}$, throughout this paper.

The inference process from a trained model involves a series of sequential calls, i.e. steps, of $\hat{\mathbf{z}}_{\theta}$, starting from $\mathbf{z}_1 \sim \mathcal{N}(0, I)$, where the quality of the generated image improves monotonically with the number of steps, as shown in the qualitative examples of Fig. 1 and the quantitative results of Fig. 3. Several methods have been proposed to reduce the number of required steps at inference time [18, 19, 28].
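
For concreteness, the conditioned objective in Eq. 2 can be written down in a few lines. The following is a minimal PyTorch-style sketch, not the authors' implementation: `unet`, `encoder`, and the schedule functions `alpha`/`sigma` are hypothetical placeholders for the denoising U-Net, the frozen autoencoder, and the noise schedule.

```python
import torch
import torch.nn.functional as F

def sr_diffusion_loss(unet, encoder, x_h, x_l, alpha, sigma):
    """One training step of the LR-conditioned objective in Eq. 2 (sketch only)."""
    z_h = encoder(x_h)                                    # HR latent (e.g. 4x64x64)
    x_l_up = F.interpolate(x_l, size=x_h.shape[-2:], mode="bicubic")
    z_l = encoder(x_l_up)                                 # LR latent, bicubically upsampled before encoding
    t = torch.rand(z_h.shape[0], device=z_h.device)       # t ~ U[0, 1]
    a_t = alpha(t).view(-1, 1, 1, 1)
    s_t = sigma(t).view(-1, 1, 1, 1)
    eps = torch.randn_like(z_h)
    z_t = a_t * z_h + s_t * eps                           # forward process sample (Eq. 1)
    z_hat = unet(torch.cat([z_t, z_l], dim=1), t)         # condition on z_l by concatenation
    w = 1.0 + a_t.pow(2) / s_t.pow(2)                     # v-parameterization weighting
    return (w * (z_hat - z_h).pow(2)).mean()
```

At test time, the same denoiser has to be called once per sampling step, so the wall-clock cost grows linearly with the number of steps (and with the number of patches a large image is split into).
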
Here, we use the widely used DDIM sampler [28], and yet see that the performance drops drastically with an extremely low number of steps. In the following, we introduce scale distillation to alleviate this shortcoming. + +# 3.2 Scale distillation + +The complexity of the image super-resolution task increases with the scale factor (SF). For example, a model trained for a lower SF ( $e.g. \times 2$ ) takes as input a less degraded image compared to a larger SF ( $e.g. \times 4$ ). Therefore, a diffusion model trained for $\times 2$ magnification should require fewer inference steps to solve the HR image generation task compared to a model trained for the x4 scale factor. + +To alleviate the training complexity for larger scale factors, we build on this observation and propose a progressive scale distillation training strategy. In particular, we start by training a teacher for a lower SF that takes a less degraded image as input. We then use its prediction as a target to train the model for a higher factor as a student. + +Let $N$ be the target SF of interest. Standard training involves making pairs of low and high-resolution images, where the low-resolution image is smaller than the HR image by a factor of $1 / N$ . The common approach for generating the training pairs is to gather a set of high-resolution images, perform synthetic degradation to obtain the corresponding low-resolution image and train a model that directly performs $\times N$ magnification [22, 32, 34] using eq. 2. Instead, we start by training a standard diffusion-based teacher for a lower SF, using a less degraded LR image, e.g. $2 / N$ , as input and use its prediction to train the student. + +More precisely, Let us assume $\hat{\mathbf{z}}_{\phi}, \hat{\mathbf{z}}_{\theta}$ be the teacher and student denoising models parameterized by $\phi, \theta$ respectively. To train the student for a factor of $N$ , we generate two degraded images for a given high-resolution image with factors $1/N, 2/N$ , with latent representations denoted by $\mathbf{z}_l, \mathbf{z}_l'$ respectively. That means $\mathbf{z}_l'$ is less degraded compared to $\mathbf{z}_l$ . Similar to the standard diffusion model training, we sample random noise at $t$ and add it to the high-resolution image to obtain $\mathbf{z}_t$ . The scale distillation loss will be: + +![](images/38a9bae124367f962eb6df6c7f926ca744746af1e5888a17a6c5ed30e9674a15.jpg) +Fig. 2: Training pipeline of proposed scale distillation. For a given HR image (e.g. size $512 \times 512$ ) shown in green, we generate two degraded versions with factors of $2 / N, 1 / N$ (e.g. sizes $256 \times 256$ and $128 \times 128$ ), shown in yellow and red respectively. Both degraded images are resized back via bicubic upsampling to $512 \times 512$ to be used as input to the encoder, which projects them to $4 \times 64 \times 64$ tensors. The less and more degraded LR image is used as input to the teacher and student respectively via concatenation with the noisy version of the HR image, i.e. $\mathbf{z}_t$ . The teacher's output is used as the target for training the student. Note that the teacher is first trained independently for a smaller magnification scale and then frozen during student training. 
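
Fig. 2 translates almost directly into code. The sketch below shows one scale-distillation step under the same hypothetical placeholder names as in the earlier sketch (`student_unet` shares its architecture with, and is initialized from, `teacher_unet`); the corresponding objective is formalized in Eq. 3 right below.

```python
import torch
import torch.nn.functional as F

def scale_distillation_loss(student_unet, teacher_unet, encoder,
                            x_h, x_l, x_l_mild, alpha, sigma):
    """One student update: as in Eq. 2, except the target is the frozen teacher's
    prediction conditioned on the less degraded LR image (sketch only)."""
    hr = x_h.shape[-2:]
    z_h = encoder(x_h)
    z_l = encoder(F.interpolate(x_l, size=hr, mode="bicubic"))            # 1/N degradation (student input)
    z_l_mild = encoder(F.interpolate(x_l_mild, size=hr, mode="bicubic"))  # 2/N degradation (teacher input)
    t = torch.rand(z_h.shape[0], device=z_h.device)
    a_t, s_t = alpha(t).view(-1, 1, 1, 1), sigma(t).view(-1, 1, 1, 1)
    z_t = a_t * z_h + s_t * torch.randn_like(z_h)                         # same noisy HR latent for both
    with torch.no_grad():                                                 # teacher is frozen
        target = teacher_unet(torch.cat([z_t, z_l_mild], dim=1), t)
    pred = student_unet(torch.cat([z_t, z_l], dim=1), t)
    w = 1.0 + a_t.pow(2) / s_t.pow(2)
    return (w * (pred - target).pow(2)).mean()
```

For $\times 8$, the same step is simply repeated with the $\times 4$ student promoted to frozen teacher.
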
+ +$$ +\underset {\theta} {\operatorname {a r g m i n}} \mathbb {E} _ {\epsilon , t} [ \omega (\lambda_ {t}) | | \hat {\mathbf {z}} _ {\theta} (\mathbf {z} _ {t}, \mathbf {z} _ {l}, \lambda_ {t}) - \hat {\mathbf {z}} _ {\phi} (\mathbf {z} _ {t}, \mathbf {z} _ {l} ^ {\prime}, \lambda_ {t}) | | _ {2} ^ {2} ] \tag {3} +$$ + +where the teacher is trained for $N / 2$ magnification and frozen, and the student is initialized with the teacher's weights before the training. Note that we are using the latent diffusion framework that allows exactly the same architecture and input shapes for both the teacher and the student. Although the input low-resolution images for the student and teacher are of different sizes, they are both resized to a fixed size and fed to the encoder, which projects them to a tensor with a fixed size of $4 \times 64 \times 64$ . Fig. 2 illustrates the proposed scale distillation process. + +The idea of scale distillation is in line with that of progressive temporal distillation [26]. While a standard denoising model would only use the final image as the target irrespective of the sampled time step $t$ (see Eq. 2), both scale and progressive temporal distillation rely on the teacher to provide a supervisory signal specific for step $t$ (see Eq. 3). In this way, the supervisory signal is attuned to the specific denoising step, providing stable and consistent supervision at every denoising step. Fig. 3 provides empirical support for our hypothesis. We observe a significant gap between the distilled models from $\times 2$ to $\times 4$ and $\times 2$ to $\times 8$ compared to the models that are directly trained for $\times 4$ and $\times 8$ , respectively. + +![](images/51c5031199a3dfaae7a222b87d175103cd4bb5ac5c1ab0b85a00ec8965ca36d0.jpg) +×4 + +![](images/3c1f8b06bd14586d60a1964f2a841bcf04589e3cd52dc0b422c69e224ca2893e.jpg) +×8 +Fig. 3: FID vs. number of DDIM steps on the DIV2K validation set obtained through bicubic degradation using SD for $\times 4$ and $\times 8$ magnifications trained with scale distillation and standard training. We use $\times 2 \rightarrow \times 4$ scale distillation for $\times 4$ and $\times 2 \rightarrow \times 4 \rightarrow \times 8$ for $\times 8$ , and compare with the standard training directly for $\times 4$ and $\times 8$ respectively. All results are obtained using the original SD decoder. The model trained with scale distillation outperforms the standard training with large margin when using fewer steps for $\times 4$ . The gap between scale distillation and the standard training is significantly higher for small $\times 8$ and remains steady for large numbers steps. + +The gap is especially striking when evaluated with few inference steps and, as expected, shrinks as the number of steps increases and quality saturates. + +Similar to the temporal progressive distillation [26], the proposed scale distillation process can be applied iteratively with higher scale factors at each training step. The first student is initialized from scratch and trained on the raw data, similar to the standard training. Consequently, this student becomes the new teacher for training the next scale factor. In this paper, we consider three distillation steps up to the scale factor of $\times 8$ starting from $\times 2$ , i.e. $\times 2 \rightarrow \times 4 \rightarrow \times 8$ . As it is shown in Fig. 
3, scale distillation is significantly more effective for $\times 8$ magnification where the LR image is of even lower quality, thereby reinforcing the importance of our proposed progressive scale training strategy. + +# 3.3 Decoder fine-tuning + +While scale distillation improves the one-step inference noticeably, there is still a gap between the one-step model and the saturated performance with a larger number of steps, see Fig. 3. To fill this gap, we propose to fine-tune the decoder on top of the frozen one-step diffusion model resulting from scale distillation. That is, after training the diffusion model, we freeze the U-Net, apply one DDIM step for a given LR image, and use it as input to fine-tune the decoder for the SR task. We use the original loss that has been used for training the autoencoder [22]. Importantly, this fine-tuning strategy with the U-Net in place is only possible with a diffusion model that can work properly with one step as enabled by our scale distillation approach; see Table. 3. We empirically show that the + +combination of our scale distillation approach with decoder fine-tuning yields a one-step model that can readily compete with models requiring a large number of inference steps. + +Implementation details. We use Stable diffusion v1.5 as our backbone and initialize our teacher with the text-to-image model. We use our own implementation of the v-parameterization with a cosine schedule. We use 4 A100 GPUs for all our experiments and train with a batch size of 60 with a gradient accumulation factor of 4. + +# 4 Experiments + +In this section, we evaluate our YONOS-SR against other methods targeting real image super-resolution at the standard $\times 4$ scale factor in Sec. 4.1 and demonstrate that our proposed scale distillation approach generalizes to higher scale factors of $\times 8$ in Sec. 4.2. We then provide qualitative results for $\times 4$ and $\times 8$ in Sec. 4.3. Finally, we perform ablation studies to highlight the role of our main contributions in Sec. 4.4. + +# 4.1 Evaluation on real image super resolution + +We begin by evaluating the performance of our proposed YONOS-SR model in the standard real image super-resolution setting targeting $\times 4$ scale factor. + +Datasets. Following previous work (e.g. [2,32,34,41]), we use DIV2K [1], DIV8K [7], Flickr2k [30], OST [36] and a subset of 10K images from FFHQ training set [13] to train our model. We adopt the Real-ESRGAN [34] degradation pipeline to generate synthetic LR-HR pairs. + +We then evaluate our model on both synthetic and real datasets. Similar to [32], we use 3K LR-HR (128 → 512) pairs synthesized from the DIV2K validation set using the Real-ESRGAN degradation pipeline as our synthetic dataset. We also report results on the standard DIV2K validation split with bicubic degradations for completeness. For the real dataset, we use $128 \times 128$ center crops from the RealSR [11], DRealSR [38] and DPED-iphone [10] datasets. + +Evaluation metrics. We evaluate using various perceptual and image quality metrics, including LPIPS [43], FID [9] (where applicable), as well as the no-reference image quality metric, MUSIQ [14]. For the synthetic datasets, we also report standard PSNR and SSIM metrics, for reference. + +Baselines. As the main contribution of our paper targets improving the inference process of diffusion-based super-resolution, our main points of comparison are diffusion-based SR models, including the recent StableSR model [32], ReshShift [40], and the original LDM model [22]. 
For completeness, we also include comparisons to non-diffusion-based baselines, including RealSR [11], BSRGAN [41], RealESRGAN [34], DASR [16], and FeMaSR [2].
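
As a reminder of what the single-step entry for YONOS-SR in the table below corresponds to, inference reduces to one conditioned denoising call from pure noise followed by the fine-tuned decoder. A rough sketch, with the same hypothetical placeholder names as in the Sec. 3 sketches:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def yonos_sr_one_step(student_unet, encoder, decoder, x_l, hr_size):
    """Single-step SR inference (sketch): one denoising call at t = 1, then decode."""
    z_l = encoder(F.interpolate(x_l, size=hr_size, mode="bicubic"))   # LR conditioning latent
    z_1 = torch.randn_like(z_l)                                       # start from pure noise
    t = torch.ones(z_l.shape[0], device=z_l.device)
    z_hat = student_unet(torch.cat([z_1, z_l], dim=1), t)             # predicted HR latent
    return decoder(z_hat)                                             # fine-tuned decoder -> SR image
```
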

| Datasets | Metrics | RealSR | BSRGAN | DASR | Real-ESRGAN | FeMaSR | LDM | ResShift | StableSR | YONOS (ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DIV2K Valid (RealESRGAN degradations) | FID ↓ | 49.49 | 44.22 | 49.16 | 37.64 | 35.87 | 26.47 | 30.45 | 24.44 | 21.86 |
| | LPIPS ↓ | 0.5276 | 0.3351 | 0.3543 | 0.3112 | 0.3199 | 0.2510 | 0.3076 | 0.3114 | 0.2310 |
| | PSNR ↑ | 24.62 | 24.58 | 24.47 | 24.28 | 23.06 | 23.32 | 24.62 | 23.26 | 24.74 |
| | SSIM ↑ | 0.5970 | 0.6269 | 0.6304 | 0.6372 | 0.5887 | 0.5762 | 0.6210 | 0.5726 | 0.6428 |
| | MUSIQ ↑ | 28.57 | 61.19 | 55.19 | 61.05 | 60.83 | 62.27 | 63.58 | 65.92 | 70.30 |
| DIV2K Valid (bicubic degradations) | LPIPS ↓ | - | 0.2364 | 0.1696 | 0.2284 | - | 0.2323 | 0.1775 | 0.2580 | 0.1703 |
| | PSNR ↑ | - | 27.32 | 28.55 | 26.65 | - | 25.49 | 27.24 | 21.90 | 26.26 |
| RealSR | LPIPS ↓ | 0.3570 | 0.2656 | 0.3134 | 0.2709 | 0.2937 | 0.3159 | 0.3279 | 0.3002 | 0.2479 |
| | MUSIQ ↑ | 38.26 | 63.28 | 41.21 | 60.36 | 59.06 | 58.90 | 59.87 | 65.88 | 69.21 |
| DRealSR | LPIPS ↓ | 0.3938 | 0.2858 | 0.3099 | 0.2818 | 0.3157 | 0.3379 | 0.3870 | 0.3284 | 0.2721 |
| | MUSIQ ↑ | 26.93 | 57.16 | 42.41 | 54.26 | 53.71 | 53.72 | 54.13 | 58.51 | 66.26 |
| DPED-iphone | MUSIQ ↑ | 45.60 | 45.89 | 32.68 | 42.42 | 49.95 | 44.23 | 38.59 | 50.48 | 59.45 |
| | # STEPS ↓ | - | - | - | - | - | 200 | 4 | 200 | 1 |
+ +Table 1: Comparison to baselines. Results in Red and Blue correspond to best and second best results, resp. Cells with - indicate that there were no previously reported results using the considered baseline and corresponding metric. + +Results. Results summarized in Tab. 1 show that YONOS-SR outperforms all other diffusion-based SR methods, while using only one inference step, whereas other alternatives use 200 inference steps. These results highlight the efficiency of YONOS-SR in reducing the number of steps to one without compromising performance but indeed improving it further. Also, our model outperforms all considered baselines in 5 out of 7 metrics on the synthetic data and all comparison points on the real datasets. + +# 4.2 Generalization to higher scale factors + +We now evaluate the generalization capability of our proposed scale distillation approach. To this end, we train our YONOS-SR model with one more iteration of scale distillation, thereby going from a model capable of handling $\times 4$ magnifications to $\times 8$ magnifications. We then fine-tune the decoder on top of the one-step $\times 8$ diffusion model. To evaluate this model, we follow recent work [3], and evaluate on the same subset of ImageNet and FFHQ for $\times 8$ magnification, i.e. $64 \times 64 \rightarrow 512 \times 512$ . In particular, we select the same 1k subset of ImageNet test set by first ordering the 10k images by name and then selecting the 1k subset via interleaved sampling, i.e. using images of index 0, 10, 20, etc. To obtain the LR-HR pairs, we use $\times 8$ average pooling degradations. In the case of FFHQ, we use the first 1k images of the validation set. We also evaluate using the same metrics and baselines reported in this recent work [3]. + +The results summarized in Tab. 2 demonstrate that our proposed one-step method generalizes well to higher scale factors, where it is able to achieve good results in terms of FID and LPIPS scores, which are known to better align with human observation, especially at higher magnification factors [24]. Notably, unlike baselines, our model has not been trained on ImageNet data. We use only $10\mathrm{k}$ images of FFHQ in our training set. + +# 4.3 Qualitative evaluation + +In addition to extensive quantitative evaluations, we qualitatively compare one-step YONOS-SR with 200-step StableSR and standard diffusion-based SR (SD- + +![](images/c771e3ae9778fc241b9b90ee0fee4a35e24bd82df655e6a0419a959874d1b029.jpg) +(a) + +![](images/908bb75aaa052e5444c7a7d6f4693f968c94ac6f805741c821bf75acd5fdb5fb.jpg) +(b) + +![](images/2c167900d29cb02d87af2202ec7c2be66e3c7961e6f56ec644423dccb60b58f0.jpg) +(c) + +![](images/9aa69fe72f9a8d70bebe6b5bb9b9f3aff5336855020634833dca2ceedc0d87ee.jpg) +(d) + +![](images/a8cc9a86546513622047ec53b04d2ac89b96a36edd9fe6b1fab1a2d5eade7f05.jpg) +(a) + +![](images/67e1ef51f8bf256f3f80987f81668d90462a5ea3be86686fbc5dab64216b99ed.jpg) +(b) + +![](images/acdf924761d980a659c116994952304cf7fa3f2974ab5e37c54ae4460be1a618.jpg) +(c) + +![](images/3e27db1e4bee637ca789fa88e1c4dd09ec029857d4e5a7c777d54ce579395533.jpg) +(d) + +![](images/0c5cba40bd1c90df8aaff15ce11537aa80574905614ae618298e4a0d91bf988d.jpg) +(a) + +![](images/c61e86ace96644ff4ed22f4c84c64aeaed8093e6b634898e407cec2c5d7c38e3.jpg) +(b) + +![](images/6b75b67edbfd4d38a09947eed4abbceab0f002956fdd1e669d0034f232669cd3.jpg) +(c) + +![](images/76b209a0f120a6784bc8dcb9da09754e330bac7d1b4801e55b8974cb0b3efa99.jpg) +(d) +Fig. 
Fig. 4: Qualitative comparison on the validation set of the DIV2K dataset: (a) 200-step StableSR, (b) 200-step standard SD-SR, (c) 1-step YONOS-SR (ours), (d) ground truth. SD-SR denotes the standard Stable Diffusion-based SR model. The 200-step StableSR and SD-SR tend to over-sharpen, adding artifacts that do not match the ground-truth content. Our SR images match the corresponding ground truth most closely; see the faces, Pepsi, and crocodile textures in the first, second, and third rows, respectively. The images are best seen on a display and zoomed in. + +Our method generates the SR images closest to the ground truth in terms of detailed textures while taking only one step during inference. These observations are in line with the numerical superiority of our method in the quantitative evaluations above. + +As clearly demonstrated in Fig. 3, scale distillation is even more effective for $\times 8$ than for $\times 4$ magnification. As qualitative support, we compare the model trained directly for $\times 8$ magnification without scale distillation to our model trained with three iterations of scale distillation $\times 2\rightarrow \times 4\rightarrow \times 8$ in Fig. 5. Again, we use the validation set of the DIV2K dataset. In line with the numerical analyses in Fig. 3, we observe that the model trained with scale distillation outperforms the standard training in terms of recovering the corresponding content and details. + +![](images/1593eaecf963deb3f2ca889ae6bd00e11a858ceb3d06d5fd5f0a9f652b65d7bf.jpg) +(LR) + +![](images/4bd1fc4401795e4edfcd4e705006a0a6a967c3b794a42e0ae62c96c756c39459.jpg) +(64 steps) + +![](images/c67e480cb8323b088bb764ad4f7ee50accb7984e147e4253ae94c0eedb925271.jpg) +(4 steps) + +![](images/c83f145995c8d7e7db504f1d8c9d855cf4f0c13475a5d877874301187ca874c9.jpg) +(1 step) + +![](images/98e449cae1f0a5894b1a17e3f8e6654bc6a5976f197a660dba5ab10a955c4ba2.jpg) +(HR) + +![](images/60dafed7a0530187c0f075172d34e87e7c33c274bd6e0e29cc367efd02ec18d9.jpg) +(64 steps) + +![](images/48f6340e5ad851e9eb417d20b808938e21affb4f3c7a8030500edd54a68286a5.jpg) +(4 steps) + +![](images/18fbc3a4f44ed175fc74a49abbb31236e4fe684ce6761085383fedce2ea75791.jpg) +(1 step) + +![](images/17d41dc09b3261d9601d38f6881aa572f1c1b2862d5cf3e68ae6ec28d6589c15.jpg) +(LR) + +![](images/6dd96e572e8410ae31f707ec7f7d895970dbc3b6ef1fd587093e982f66422b73.jpg) +(64 steps) + +![](images/471d2a50f6ca06a1626076b9868d9041ab7b9490b3d477bc2b60ea97028454bc.jpg) +(4 steps) + +![](images/b3e30bf510e75e692415248f6a526f56d5a9d347630a644c665c472c78ab6f77.jpg) +(1 step) + +![](images/73b71fd4760d74e9535b5c910440e83be3f6733c46e453fbf084325f412cb714.jpg) +(HR) + +![](images/1b523942e03785179fa398d3e42a6a78ee1dbbc3616de7d420c03e66ff55d182.jpg) +Scale distillation $\times 2\rightarrow \times 4\rightarrow \times 8$ +(64 steps) +Fig. 5: Qualitative comparison on the validation set of the DIV2K dataset for $\times 8$ magnification, when the model is trained directly for $\times 8$ magnification without scale distillation (top row) and with three iterations of scale distillation $\times 2\rightarrow \times 4\rightarrow \times 8$ (bottom row). For both models, we show the input LR image, the results with 1, 4, and 64 steps using the original decoder, and the corresponding HR image. The model trained with scale distillation outperforms the standard training by a large margin. Specifically, due to the poor LR input, the standard training fails to recover the relevant content. The images are best seen on a display and zoomed in.
+ +![](images/74a5aa44e0bd85e98a0dcacdf7383c49daeb3227549af0f5b966c487b1d24a94.jpg) +(4 steps) + +![](images/2ce0dcc22a0d4bcf0b451c9b644357d74b96bb470c8d8cb445e7c90d66ac3ff1.jpg) +(1 step) + +
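For clarity, the $\times 8$ evaluation protocol of Sec. 4.2 can be summarized with a minimal sketch. This is an illustrative sketch rather than our released code: the `imagenet_10k` directory name and the use of torchvision are assumptions, and only the interleaved 1k subset selection and the $\times 8$ average-pooling degradation follow the protocol described above.

```python
import os

import torch.nn.functional as F
from torchvision.io import read_image

# Hypothetical directory holding the 10k ImageNet test images used in [3].
imagenet_10k_dir = "imagenet_10k"

# Interleaved 1k subset: order the 10k file names alphabetically and keep
# every 10th image (indices 0, 10, 20, ...), as described in Sec. 4.2.
names = sorted(os.listdir(imagenet_10k_dir))
subset = names[::10][:1000]

def make_lr_hr_pair(path, scale=8):
    """Build an (LR, HR) pair with x8 average-pooling degradation."""
    hr = read_image(path).float() / 255.0                   # (3, 512, 512) expected
    lr = F.avg_pool2d(hr.unsqueeze(0), kernel_size=scale)   # (1, 3, 64, 64)
    return lr.squeeze(0), hr

pairs = [make_lr_hr_pair(os.path.join(imagenet_10k_dir, n)) for n in subset]
```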
| Method | ImageNet FID ↓ | ImageNet LPIPS ↓ | ImageNet PSNR ↑ | FFHQ FID ↓ | FFHQ LPIPS ↓ | FFHQ PSNR ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| LDPS | 61.09 | 0.475 | 23.21 | 36.81 | 0.292 | 28.78 |
| GML-DPS [23] | 60.36 | 0.456 | 23.21 | 41.65 | 0.318 | 28.50 |
| PSLD [23] | 60.81 | 0.471 | 23.17 | 36.93 | 0.335 | 26.62 |
| LDIR [8] | 63.46 | 0.480 | 22.23 | 36.04 | 0.345 | 25.79 |
| P2L [3] | 51.81 | 0.386 | 23.38 | 31.23 | 0.290 | 28.55 |
| YONOS (ours) | 34.59 | 0.241 | 22.80 | 21.41 | 0.161 | 26.08 |
+ +Table 2: Comparison to baselines on the ImageNet and FFHQ subsets with $\times 8$ magnification factor. The results for the other methods are taken from [3]. + +Note that the problem of $\times 8$ magnification is of significantly higher complexity than $\times 4$ due to the poorer LR input. Notably, for these $\times 8$ qualitative evaluations we use the original decoder (i.e. these results are obtained before the decoder fine-tuning stage) to emphasize the impact of scale distillation. + +# 4.4 Ablation study + +We now study the impact of the various components introduced in our work. To this end, we use the standard DIV2K validation set with $\times 4$ low-resolution images obtained through bicubic degradation [1]. We use the FID metric as it is a standard metric for assessing the quality of generative models; our initial investigation also revealed that FID correlates best with human evaluation of the generated images. The validation set of the DIV2K dataset includes only 100 samples. To obtain more reliable FID scores, we extract 30 random $128 \times 128$ patches and their corresponding $512 \times 512$ HR counterparts from each image in the standard DIV2K bicubic validation set, resulting in a total of 3k LR-HR pairs. For completeness, we also report LPIPS, PSNR, and SSIM scores. + +Impact of scale distillation. We begin by evaluating the impact of our proposed scale distillation on speeding up inference. To this end, we run two stable diffusion (SD) models trained for $\times 4$ super-resolution (SR) with various numbers of inference steps. The first model is a standard SD super-resolution model trained directly for the target $\times 4$ super-resolution (i.e. SD-SR), while the second model is trained with our proposed scale distillation from $\times 2$ magnification to $\times 4$. We use the same model, training set, and degradation pipeline to train both models; the only difference is the use of our scale distillation in the latter model. Specifically, we start by training a teacher for $\times 2$ magnification using the raw data as the denoising target. We then use the $\times 2$ model as a frozen teacher and use its predictions to train a student for $\times 4$ magnification; a simplified sketch of this training step is given below. The results summarized in Fig. 3 speak decisively in favor of our scale distillation approach: the model trained with the proposed scale distillation performs significantly better than direct $\times 4$ training when using only one step. + +Scale distillation outperforms the standard training even more significantly for $\times 8$ magnification, where we perform three training iterations of scale distillation, i.e. $\times 2 \rightarrow \times 4 \rightarrow \times 8$. One reason for the larger gap at $\times 8$ magnification is that the SR task is more ambiguous at this scale due to the lower-quality input. As a result, the model benefits more from the simpler supervisory signal obtained from scale distillation. Note that we use the original SD decoder (i.e. no decoder fine-tuning) for this experiment to analyze the impact of scale distillation independently of decoder fine-tuning.
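To make this training recipe concrete, the following is a minimal sketch of a single scale-distillation update in the spirit of Eq. 3. It is a simplified illustration rather than our exact implementation: `encode`, `teacher_unet`, and `student_unet` are placeholders for the frozen SD encoder and the conditional U-Nets (which in practice are conditioned via channel-wise concatenation), the cosine schedule is only indicative, and the $\omega(\lambda_t)$ weighting of Eq. 3 is omitted.

```python
import torch
import torch.nn.functional as F

def scale_distillation_step(student_unet, teacher_unet, encode,
                            x_hr, x_lr_x4, x_lr_x2, optimizer):
    """One update: the frozen x2 teacher provides the target for the x4 student;
    both see the same noisy HR latent z_t but different LR conditions."""
    z_h  = encode(x_hr)      # latent of the HR image
    z_l  = encode(x_lr_x4)   # x4-degraded LR latent  -> student condition
    z_lp = encode(x_lr_x2)   # x2-degraded LR latent  -> teacher condition

    # Sample a time step and form the noisy latent z_t (indicative schedule).
    t = torch.rand(z_h.shape[0], device=z_h.device)
    alpha_t = torch.cos(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    sigma_t = torch.sin(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    z_t = alpha_t * z_h + sigma_t * torch.randn_like(z_h)

    with torch.no_grad():                    # the teacher is frozen
        target = teacher_unet(z_t, z_lp, t)  # prediction of the x2 teacher

    pred = student_unet(z_t, z_l, t)         # prediction of the x4 student
    loss = F.mse_loss(pred, target)          # distillation loss (cf. Eq. 3)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```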
+ +Impact of decoder fine-tuning. One of the direct consequences of having a diffusion model that yields good results in one denoising step is that the decoder can be fine-tuned with the U-Net in place, as the one-step output directly gives a good starting point to the decoder. To validate the importance of the input given to the decoder prior to fine-tuning, and thereby the importance of YONOS-SR, we experiment with the standard SD-SR model and our scale-distillation model. In both cases, we freeze the U-Net and only allow the models to perform one denoising step. We then feed their output to the decoder and fine-tune it with the same loss used in the original stable diffusion model [22]; a sketch of this procedure is given below. + +The results summarized in Tab. 3 validate the importance of having a good initial input from the diffusion model prior to decoder fine-tuning. The left part of Tab. 3 shows that the model trained with scale distillation outperforms the standard training by a good margin when using the original decoder, indicating that scale distillation results in a U-Net that provides a higher-quality input for the decoder. Moreover, as shown in the right part of Tab. 3, fine-tuning the decoder on top of both 1-step models improves performance; however, the model with scale distillation yields significantly better results than the standard SD-SR trained directly for the target magnification. Once again, the impact of scale distillation is more pronounced for $\times 8$ magnification than for $\times 4$, which highlights the importance of our approach in such difficult settings. Importantly, this fine-tuning strategy is not computationally feasible with diffusion models that require many denoising steps to give a reasonable starting point for the decoder. + +
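The decoder fine-tuning step can likewise be sketched as follows. This is a simplified sketch under stated assumptions: `frozen_unet`, `encoder`, and `decoder` are placeholders for the corresponding SD modules, the single U-Net call stands in for one DDIM step from pure noise, and a plain L1 reconstruction loss replaces the full decoder loss of [22].

```python
import torch
import torch.nn.functional as F

def decoder_finetune_step(frozen_unet, encoder, decoder, x_hr, x_lr, optimizer):
    """Fine-tune only the decoder on top of the frozen 1-step diffusion model."""
    with torch.no_grad():                 # U-Net and encoder stay frozen
        z_l = encoder(x_lr)               # LR conditioning latent
        z_1 = torch.randn_like(z_l)       # start from pure noise (t = 1)
        t = torch.ones(z_l.shape[0], device=z_l.device)
        z_hat = frozen_unet(z_1, z_l, t)  # single denoising step (1-step DDIM)

    x_hat = decoder(z_hat)                # only the decoder receives gradients
    loss = F.l1_loss(x_hat, x_hr)         # stand-in for the decoder loss of [22]

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```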
| Decoder | Original | Original | Fine-tuned | Fine-tuned |
| --- | --- | --- | --- | --- |
| Scale distillation | ✗ | ✓ | ✗ | ✓ |
| FID ↓ ($\times 4$) | 27.93 | 23.96 | 16.26 | 15.54 |
| LPIPS ↓ ($\times 4$) | 0.227 | 0.207 | 0.163 | 0.159 |
| PSNR ↑ ($\times 4$) | 25.94 | 26.24 | 25.73 | 26.30 |
| SSIM ↑ ($\times 4$) | 0.711 | 0.714 | 0.713 | 0.727 |
| FID ↓ ($\times 8$) | 102.92 | 66.90 | 41.54 | 28.47 |
| LPIPS ↓ ($\times 8$) | 0.541 | 0.403 | 0.305 | 0.243 |
| PSNR ↑ ($\times 8$) | 21.08 | 24.46 | 21.53 | 23.96 |
| SSIM ↑ ($\times 8$) | 0.541 | 0.647 | 0.528 | 0.632 |
+ +Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step. + +# 5 Conclusion + +In summary, in this paper we introduced the first fast stable diffusion-based super-resolution method. To achieve this, we proposed scale distillation, an approach that allows us to tackle the SR problem in as few as one step. Having a fast diffusion model allowed us to directly fine-tune the decoder, which we show yields state-of-the-art results, even at high magnification factors and while using only a single step. We hope that the proposed distillation approach can be adapted to other inverse imaging problems (e.g. image inpainting), which we believe is an interesting direction for future research. + +# References + +1. Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)
+2. Chen, C., Shi, X., Qin, Y., Li, X., Han, X., Yang, T., Guo, S.: Real-world blind super-resolution via feature matching with implicit high-resolution priors. In: ACM International Conference on Multimedia (2022)
+3. Chung, H., Ye, J.C., Milanfar, P., Delbracio, M.: Prompt-tuning latent diffusion models for inverse problems. In: arXiv preprint arXiv:2310.01110 (2023)
+4. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision (2014)
+5. Fritsche, M., Gu, S., Timofte, R.: Frequency separation for real-world super-resolution. In: IEEE International Conference on Computer Vision - Workshops (2019)
+6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances on Neural Information Processing Systems (2014)
+7. Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., Timofte, R.: Div8k: Diverse 8k resolution image dataset. In: IEEE International Conference on Computer Vision - Workshops (2019)
+8. He, L., Yan, H., Luo, M., Luo, K., Wang, W., Du, W., Chen, H., Yang, H., Zhang, Y.: Iterative reconstruction based on latent diffusion model for sparse data reconstruction. In: arXiv preprint arXiv:2307.12070 (2023)
+9. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Advances on Neural Information Processing Systems (2017)
+10. Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Gool, L.V.: Dslr-quality photos on mobile devices with deep convolutional networks. In: IEEE International Conference on Computer Vision (2017)
+11. Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., Huang, F.: Real-world super-resolution via kernel estimation and noise injection. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2020)
+12. Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., Mitliagkas, I.: Gotta go fast when generating data with score-based models. In: arXiv preprint arXiv:2105.14080 (2021)
+13. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)
+14. Ke, J., Wang, Q., Wang, Y., Milanfar, P., Yan, F.: Musiq: Multi-scale image quality transformer.
In: IEEE International Conference on Computer Vision (2021)
+15. Liang, J., Zhang, K., Gu, S., Van Gool, L., Timofte, R.: Flow-based kernel prior with application to blind super-resolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)
+16. Liang, J., Zeng, H., Zhang, L.: Efficient and degradation-adaptive network for real-world image super-resolution. In: European Conference on Computer Vision (2022)
+17. Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image super-resolution: A survey and beyond. In: arXiv preprint arXiv:2107.03055 (2021)
+18. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., Zhu, J.: Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In: Advances on Neural Information Processing Systems (2022)
+19. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., Zhu, J.: Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. In: arXiv preprint arXiv:2211.01095 (2023)
+20. Maeda, S.: Unpaired image super-resolution using pseudo-supervision. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)
+21. Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., Salimans, T.: On distillation of guided diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2023)
+22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2022)
+23. Rout, L., Raoof, N., Daras, G., Caramanis, C., Dimakis, A.G., Shakkottai, S.: Solving linear inverse problems provably via posterior sampling with latent diffusion models. In: NeurIPS (2023)
+24. Sahak, H., Watson, D., Saharia, C., Fleet, D.: Denoising diffusion probabilistic models for robust image super-resolution in the wild. In: arXiv preprint arXiv:2302.07864 (2023)
+25. Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D.J., Norouzi, M.: Image super-resolution via iterative refinement. In: arXiv preprint arXiv:2104.07636 (2021)
+26. Salimans, T., Ho, J.: Progressive distillation for fast sampling of diffusion models. In: International Conference on Learning Representations (2022)
+27. Shocher, A., Cohen, N., Irani, M.: "Zero-shot" super-resolution using deep internal learning. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+28. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021)
+29. Song, Y., Dhariwal, P., Chen, M., Sutskever, I.: Consistency models. In: arXiv preprint arXiv:2303.01469 (2023)
+30. Timofte, R., Agustsson, E., Gool, L.V., Yang, M., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)
+31. Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., Wen, F.: Bringing old photos back to life. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)
+32. Wang, J., Yue, Z., Zhou, S., Chan, K.C., Loy, C.C.: Exploiting diffusion prior for real-world image super-resolution. In: arXiv preprint arXiv:2305.07015 (2023)
+33. Wang, L., Wang, Y., Dong, X., Xu, Q., Yang, J., An, W., Guo, Y.: Unsupervised degradation representation learning for blind super-resolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)
+34. Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data.
In: IEEE International Conference on Computer Vision - Workshops (2021)
+35. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image super-resolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+36. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image super-resolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+37. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: Enhanced super-resolution generative adversarial networks. In: European Conference on Computer Vision - Workshops (2018)
+38. Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., Lin, L.: Component divide-and-conquer for real-world image super-resolution. In: European Conference on Computer Vision (2020)
+39. Yan, Y., Liu, C., Chen, C., Sun, X., Jin, L., Peng, X., Zhou, X.: Fine-grained attention and feature-sharing generative adversarial networks for single image super-resolution. In: IEEE Transactions on Multimedia (2021)
+40. Yue, Z., Wang, J., Change Loy, C.: ResShift: Efficient diffusion model for image super-resolution by residual shifting. In: NeurIPS (2023)
+41. Zhang, K., Liang, J., Van Gool, L., Timofte, R.: Designing a practical degradation model for deep blind image super-resolution. In: IEEE International Conference on Computer Vision (2021)
+42. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: IEEE International Conference on Computer Vision (2023)
+43. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)
+44. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks.
In: IEEE International Conference on Computer Vision (2017) \ No newline at end of file diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/images.zip b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..de562e0e94a7847e9d9e5d0a3fdac93c40ec2fc9 --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b9fea7c8142b62f4e1da20b5b3ba21f480784c2ae131dfe18c45a9f8f1920a0 +size 637655 diff --git a/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/layout.json b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2f3984058675d7910a691cde6325d6cfeed9ad2b --- /dev/null +++ b/2024/You Only Need One Step_ Fast Super-Resolution with Stable Diffusion via Scale Distillation/layout.json @@ -0,0 +1,10479 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 137, + 111, + 477, + 147 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 111, + 477, + 147 + ], + "spans": [ + { + "bbox": [ + 137, + 111, + 477, + 147 + ], + "type": "text", + "content": "You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 147, + 169, + 466, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 169, + 466, + 193 + ], + "spans": [ + { + "bbox": [ + 147, + 169, + 466, + 193 + ], + "type": "text", + "content": "Mehdi Noroozi, Isma Hadji, Brais Martinez, Adrian Bulat, and Georgios Tzimiropoulos" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 226, + 202, + 388, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 226, + 202, + 388, + 224 + ], + "spans": [ + { + "bbox": [ + 226, + 202, + 388, + 224 + ], + "type": "text", + "content": "Samsung AI Cambridge {m.noroozi,isma.hadji}@samsung.com" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 160, + 258, + 452, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 258, + 452, + 488 + ], + "spans": [ + { + "bbox": [ + 160, + 258, + 452, + 488 + ], + "type": "text", + "content": "Abstract. In this paper, we introduce YONOS-SR, a novel stable diffusion based approach for image super-resolution that yields state-of-the-art results using only a single DDIM step. Specifically, we propose a novel scale distillation approach to train our SR model. Instead of directly training our SR model on the scale factor of interest, we start by training a teacher model on a smaller magnification scale, thereby making the SR problem simpler for the teacher. We then train a student model for a higher magnification scale, using the predictions of the teacher as a target during the training. This process is repeated iteratively until we reach the target scale factor of the final model. The rationale behind our scale distillation is that the teacher aids the student diffusion model training by i) providing a target adapted to the current noise level rather than using the same target coming from ground truth data for all noise levels and ii) providing an accurate target as the teacher has a simpler task to solve. 
We empirically show that the distilled model significantly outperforms the model trained for high scales directly, especially with few steps during inference. Having a strong diffusion model that requires only one step allows us to freeze the U-Net and fine-tune the decoder on top of it. We show that the combination of spatially distilled U-Net and fine-tuned decoder outperforms state-of-the-art methods requiring 200 steps with only one single step." + }, + { + "bbox": [ + 160, + 258, + 452, + 488 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 510, + 230, + 523 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 510, + 230, + 523 + ], + "spans": [ + { + "bbox": [ + 133, + 510, + 230, + 523 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 538, + 482, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 538, + 482, + 634 + ], + "spans": [ + { + "bbox": [ + 130, + 538, + 482, + 634 + ], + "type": "text", + "content": "Diffusion models have shown impressive performance in various image generation tasks [22, 42], including image super-resolution (SR) [3, 24, 25, 32]. However, the large number of sequential denoising passes required by the sampling strategy results in extreme computational cost, even for stable diffusion-based models (SD) that operate in the latent space of an autoencoder. Recently, several approaches have been proposed to reduce the number of sampling steps [18, 26, 28, 29]. Unfortunately, such approaches usually compromise performance, especially for the lower number of steps." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 642, + 481, + 665 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 642, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 642, + 481, + 665 + ], + "type": "text", + "content": "1 The code will be available here once all approvals are processed: https://github.com/SamsungLabs/yonos" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 140, + 468, + 574 + ], + "blocks": [ + { + "bbox": [ + 133, + 140, + 468, + 574 + ], + "lines": [ + { + "bbox": [ + 133, + 140, + 468, + 574 + ], + "spans": [ + { + "bbox": [ + 133, + 140, + 468, + 574 + ], + "type": "image", + "image_path": "e0877620c059e60467c9cf464af9e74fd37a5664c311d4207ed7b9f81f8b29cd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "lines": [ + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "text", + "content": "Fig. 1: Qualitative comparison for " + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "text", + "content": " magnifications. Each column shows top to bottom LR input image, 1 and 200 step SD-SR, 1-step YONOS-SR(ours). SD-SR represents the standard Stable Diffusion-based SR model. 
The 1-step SD-SR method lacks quality in terms of detailed textures compared to 200-steps of the same model; see building texture in the first column and hairs in the middle column. In contrast, our method outperforms 200-steps SD-SR with only one step, especially for " + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 575, + 482, + 665 + ], + "type": "text", + "content": " magnification where SD-SR fails to recover the details even with 200 steps. Samples are taken from DIV2K validation set. Images are best seen in a display and zoomed in." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "text", + "content": "M. Noroozi et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": "Typically, diffusion-based models yield the best results on image patches of similar sizes to those seen during training (e.g. " + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "inline_equation", + "content": "64 \\times 64" + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": " for SD [22]). On the other hand, super-resolution applications require operating in high-resolution settings, drastically exacerbating the computational issues of diffusion-based models. For example, a SR model that aims for a magnification of " + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": " going from " + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": " requires dividing the input image into 16 patches of " + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "inline_equation", + "content": "64 \\times 64" + }, + { + "bbox": [ + 130, + 116, + 482, + 259 + ], + "type": "text", + "content": " and running the model on each patch individually, making a large number of steps prohibitive for realistic use cases. Using state-of-the-art step-reduction strategy, such as more efficient samplers [18, 19, 28] can partially alleviate this issue but still falls widely short of practical needs. For example, going down to the target of 1 DDIM step results in a significant drop in performance compared to a typical model that does 200 inference steps, as shown in Fig. 1." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 262, + 482, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 262, + 482, + 418 + ], + "spans": [ + { + "bbox": [ + 130, + 262, + 482, + 418 + ], + "type": "text", + "content": "One differentiating characteristic of the super-resolution task is that it is conditioned on the low-resolution (LR) input image to yield the target high-resolution (HR) image. Unlike the task of text-to-image generation, which relies on text conditioning, the LR image provides closer content to the target HR image, especially at lower scale factors. Therefore, conditioning the diffusion model on the LR image at low-scale factors makes the task inherently simpler for the diffusion model. In this paper, we take advantage of this peculiarity and introduce a novel training strategy dubbed scale distillation. While typical diffusion-based SR methods train the model for super-resolution by conditioning directly on the LR image at the target scale factor, we instead propose a progressive training approach, where we start by training a model for lower scale factors (i.e. where the conditioning signal is closer to the target) and progressively increase to the target scale factor using the previously trained model as a teacher." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 421, + 482, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 421, + 482, + 552 + ], + "spans": [ + { + "bbox": [ + 130, + 421, + 482, + 552 + ], + "type": "text", + "content": "More specifically, instead of using the raw data to train a model for large scale factors, scale distillation obtains a rich and accurate supervisory signal from a teacher trained for a smaller scale factor. We first train a teacher that takes a less degraded image as input and, therefore, has an easier task to solve during training. Then, we train a model for a larger scale factor as a student while initializing it with the same weights as the teacher, which is now frozen. For a given time step during the training, we feed both teacher and student with the same noisy version of the HR image. However, we condition the teacher with the less degraded LR image (i.e. using the same scale that was used during teacher training), while we condition the student on the target (more degraded) LR image. We then use the teacher's prediction as a target to train the student." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 555, + 482, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 555, + 482, + 615 + ], + "spans": [ + { + "bbox": [ + 130, + 555, + 482, + 615 + ], + "type": "text", + "content": "This training strategy has two direct advantages: i) Unlike typical training where the supervisory signal is somewhat ambiguous as the target is the same for all noise levels, our student receives its target from the teacher and is therefore adaptive to the noise level. ii) The target is more accurate, especially in terms of the finer detail, because the teacher takes a less degraded LR image as input." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": "The proposed scale distillation approach allows the model to solve the SR task in fewer steps as we have simplified the task for the student. 
In fact, we show that models trained with our approach improve significantly when a few steps are used during the inference, e.g. one step, see Fig. 3. Therefore, a direct" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": "advantage of the proposed approach is that fine-tuning the decoder directly on top of the diffusion model becomes computationally tractable due to the single inference step required. Taking advantage of this fine-tuning, we show that You Only Need One Step (YONOS)-SR outperforms state-of-the-art diffusion-based SR methods that require a large number (e.g. 200) of inference steps." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 176, + 480, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 176, + 480, + 260 + ], + "spans": [ + { + "bbox": [ + 130, + 176, + 480, + 260 + ], + "type": "text", + "content": "In summary, our contributions are threefold: I) We introduce scale distillation to train SD models with a more accurate and fine supervisory signal for image super-resolution tasks. II) We show that our proposed scale distillation strategy yields more efficient SD models that allow for directly fine-tuning the decoder on top of a frozen one-step diffusion model. III) We show that combining scale distillation followed by decoder fine-tuning yields state-of-the-art results on the SR task, even at high magnification factors, while requiring only one step." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 280, + 234, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 280, + 234, + 293 + ], + "spans": [ + { + "bbox": [ + 132, + 280, + 234, + 293 + ], + "type": "text", + "content": "2 Related work" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 306, + 482, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 306, + 482, + 486 + ], + "spans": [ + { + "bbox": [ + 130, + 306, + 482, + 486 + ], + "type": "text", + "content": "Real image super-resolution. Image super-resolution entails restoring a High Resolution (HR) image given its Low Resolution (LR) observation. Solving this task for real images is especially challenging given the dramatic differences in real-world image distributions [10, 11, 17, 38]. These differences arise from varying image degradation processes, different imaging devices, and image signal processing methods, all of which are difficult to properly model and generalize. For this reason, real image super-resolution (or blind super-resolution) has received significant interest among the research community [11, 16, 32-34, 37, 38, 41]. 
While some methods attempt to learn the degradation process [5, 20, 31, 39], their success remains limited due to the lack of proper large scale training data [17], even while using some unsupervised methods [44]. In contrast, more popular approaches tackle the problem by explicitly modeling the degradation pipeline to create synthetic LR-HR pairs to use for training [15, 27, 34, 41]. Given, the wider success of the explicit degradation modeling approach, we elect to rely on the widely used RealESRGAN degradation pipeline [34] in training our model." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "type": "text", + "content": "Diffusion-based super-resolution. Since the early SRCNN [4] method, many deep learning-based solutions for blind super-resolution have been proposed [2, 11, 22, 24, 25, 34, 37, 41, 44]. Early work took advantage of this space by using semantic segmentation probability maps for guiding SR [35]. Most recent methods aim at taking advantage of learned generative priors to simplify the inverse imaging problem of blind image super-resolution. Usually, methods following this paradigm [34, 37, 41] rely on GANs [6]. More recently, diffusion models showed remarkable generative capabilities yielding impressive results across a range of applications [22, 42]. As such, in this paper, we follow several recent works [22, 24, 25, 32] and rely on diffusion-based generative models to tackle the super-resolution problem. While diffusion-based models achieve impressive results, their main shortcoming is the long inference time. Diffusion-based models require several inference steps through the model to yield a final output, thereby limiting their practical use. Therefore, in this paper, we tackle the important" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 236, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 236, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 236, + 100 + ], + "type": "text", + "content": "M. Noroozi et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 450, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 450, + 128 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 450, + 128 + ], + "type": "text", + "content": "problem of speeding up the inference of diffusion-based super-resolution." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 271 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 271 + ], + "type": "text", + "content": "Guided distillation. Recognizing the inference speed shortcoming of diffusion models, several works have been proposed recently to address this issue [18, 19, 21, 26, 28]. These methods can be categorized into two main tacks. 
One approach tackles this problem at inference time by either proposing more efficient samplers [12, 28] or relying on higher-order solvers [18, 19]. More closely related to ours are methods that aim at directly training a diffusion model that can solve the generative problem at hand in fewer steps through temporal distillation [21, 26, 29]. Our method tackles the problem at training time as well but we propose scale distillation. Our main idea is to reduce the inference speed by progressively making the generative problem easier during training. Notably, our approach is orthogonal to temporal distillation and can be used in tandem with it." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 291, + 226, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 291, + 226, + 304 + ], + "spans": [ + { + "bbox": [ + 132, + 291, + 226, + 304 + ], + "type": "text", + "content": "3 YONOS-SR" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 317, + 482, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 317, + 482, + 401 + ], + "spans": [ + { + "bbox": [ + 130, + 317, + 482, + 401 + ], + "type": "text", + "content": "In this section, we describe YONOS-SR, our diffusion-based model for image super-resolution. First, we present an overview of the image super-resolution framework with the latent diffusion models in Sec. 3.1. We then discuss our proposed scale distillation method that allows us to improve the performance with fewer sampling steps, e.g. 1-step, in Sec. 3.2. Finally, in Sec. 3.3, we discuss how the 1-step diffusion model allows for fine-tuning a decoder directly on top of the diffusion model, with a frozen U-Net." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 420, + 388, + 432 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 420, + 388, + 432 + ], + "spans": [ + { + "bbox": [ + 132, + 420, + 388, + 432 + ], + "type": "text", + "content": "3.1 Super-resolution with latent diffusion models" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "spans": [ + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": "Given a training set in the form of pairs of low and high-resolution images " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "(\\mathbf{x}_h,\\mathbf{x}_l)\\sim p(\\mathbf{x}_h,\\mathbf{x}_l)" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": ", the task of image super-resolution involves estimating the probability distribution of " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x}_h|\\mathbf{x}_l)" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": ". The stable diffusion framework uses a probabilistic diffusion model applied on the latent space of a pre-trained and frozen autoendoer. 
Let us assume that " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_h = \\mathcal{E}(\\mathbf{x}_h),\\mathbf{z}_l = \\mathcal{E}(\\mathbf{x}_l)" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": " be the corresponding projection of a given low and high-resolution images " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "(\\mathbf{x}_h,\\mathbf{x}_l)" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "\\mathcal{E}" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": " is the pre-trained encoder. The forward process of the diffusion model, " + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "inline_equation", + "content": "q(\\mathbf{z}|\\mathbf{z}_h)" + }, + { + "bbox": [ + 130, + 441, + 482, + 536 + ], + "type": "text", + "content": " is a Markovian Gaussian process defined as" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 207, + 548, + 480, + 560 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 207, + 548, + 480, + 560 + ], + "spans": [ + { + "bbox": [ + 207, + 548, + 480, + 560 + ], + "type": "interline_equation", + "content": "q \\left(\\mathbf {z} _ {t} \\mid \\mathbf {z} _ {h}\\right) = \\mathcal {N} \\left(\\mathbf {z} _ {t}; \\alpha_ {t} \\mathbf {z} _ {h}, \\sigma_ {t} \\mathbf {I}\\right), \\quad \\mathbf {z} = \\left\\{\\mathbf {z} _ {t} \\mid t \\in [ 0, 1 ] \\right\\} \\tag {1}", + "image_path": "625759968a0253965571e81c64ad678d1550ddc05ae31256e3e99443cf5f98a9.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "spans": [ + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "inline_equation", + "content": "\\mathbf{z}" + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": " denotes the latent variable of the diffusion model and " + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "inline_equation", + "content": "\\alpha_{t},\\sigma_{t}" + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": " define the noise schedule such that the log signal-to-noise ratio, " + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "inline_equation", + "content": "\\lambda_t = \\log [\\alpha_t^2 /\\sigma_t^2 ]" + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": " , decreases with " + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": " monotonically. During training, the model learns to reverse this diffusion process progressively, i.e. estimate " + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "inline_equation", + "content": "p(\\mathbf{z}_{t - 1}|\\mathbf{z}_t)" + }, + { + "bbox": [ + 130, + 569, + 481, + 617 + ], + "type": "text", + "content": " , to generate new data from noise." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "text", + "content": "The super-resolution objective function is derived by maximizing a variational lower bound of the data log-likelihood of " + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "inline_equation", + "content": "p(\\mathbf{z}_h|\\mathbf{z}_l)" + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "text", + "content": " via approximating the backward denoising process of " + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "inline_equation", + "content": "p(\\mathbf{z}_h|\\mathbf{z}_t,\\mathbf{z}_l)" + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "text", + "content": ". Note that, for super-resolution, the denoising process is conditioned on the low-resolution input, " + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_l" + }, + { + "bbox": [ + 130, + 618, + 482, + 666 + ], + "type": "text", + "content": ", as well. This can" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 140 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 140 + ], + "type": "text", + "content": "be estimated by the function " + }, + { + "bbox": [ + 130, + 116, + 482, + 140 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{z}}_{\\theta}(\\mathbf{z}_t,\\mathbf{z}_l,\\lambda_t)" + }, + { + "bbox": [ + 130, + 116, + 482, + 140 + ], + "type": "text", + "content": " parametrized by a neural network. We can train this function via a weighted mean square error loss." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 220, + 162, + 481, + 181 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 162, + 481, + 181 + ], + "spans": [ + { + "bbox": [ + 220, + 162, + 481, + 181 + ], + "type": "interline_equation", + "content": "\\underset {\\theta} {\\operatorname {a r g m i n}} \\mathbb {E} _ {\\epsilon , t} [ \\omega (\\lambda_ {t}) | | \\hat {\\mathbf {z}} _ {\\theta} (\\mathbf {z} _ {t}, \\mathbf {z} _ {l}, \\lambda_ {t}) - \\mathbf {z} _ {h} | | _ {2} ^ {2} ] \\tag {2}", + "image_path": "676b3bfe5f50892e24f79227094c19be6b400196b0a34601a32d885928933b19.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "spans": [ + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": "over uniformly sampled times " + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "inline_equation", + "content": "t \\in [0,1]" + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_t = \\alpha_t \\mathbf{z}_h + \\sigma_t \\epsilon" + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "inline_equation", + "content": "\\epsilon \\sim \\mathcal{N}(0,I)" + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": ". There are several choices of weighting function " + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "inline_equation", + "content": "\\omega(\\lambda_t)" + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": ". We use the so-called v parameterization [26], " + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "inline_equation", + "content": "(1 + \\frac{\\alpha_t^2}{\\sigma_t^2})" + }, + { + "bbox": [ + 130, + 190, + 480, + 231 + ], + "type": "text", + "content": ", throughout this paper." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "spans": [ + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "text", + "content": "The inference process from a trained model involves a series of sequential calls, i.e. steps, of " + }, + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{z}}_{\\theta}" + }, + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "text", + "content": ", starting from " + }, + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_1 \\sim \\mathcal{N}(0, I)" + }, + { + "bbox": [ + 130, + 232, + 482, + 327 + ], + "type": "text", + "content": ", where the quality of the generated image improves monotonically with the number of steps as shown in the qualitative examples of Fig .1 and quantitative results of Fig. 3. Several methods have been proposed to reduce the number of required steps at inference time [18, 19, 28]. Here, we use the widely used DDIM sampler [28], and yet see that the performance drops drastically with an extremely low number of steps. In the following, we introduce scale distillation to alleviate this shortcoming." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 345, + 244, + 357 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 345, + 244, + 357 + ], + "spans": [ + { + "bbox": [ + 132, + 345, + 244, + 357 + ], + "type": "text", + "content": "3.2 Scale distillation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "spans": [ + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "text", + "content": "The complexity of the image super-resolution task increases with the scale factor (SF). For example, a model trained for a lower SF (" + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "inline_equation", + "content": "e.g. \\times 2" + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "text", + "content": ") takes as input a less degraded image compared to a larger SF (" + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "inline_equation", + "content": "e.g. \\times 4" + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "text", + "content": "). Therefore, a diffusion model trained for " + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 365, + 482, + 426 + ], + "type": "text", + "content": " magnification should require fewer inference steps to solve the HR image generation task compared to a model trained for the x4 scale factor." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 426, + 482, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 426, + 482, + 485 + ], + "spans": [ + { + "bbox": [ + 130, + 426, + 482, + 485 + ], + "type": "text", + "content": "To alleviate the training complexity for larger scale factors, we build on this observation and propose a progressive scale distillation training strategy. In particular, we start by training a teacher for a lower SF that takes a less degraded image as input. We then use its prediction as a target to train the model for a higher factor as a student." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "content": " be the target SF of interest. Standard training involves making pairs of low and high-resolution images, where the low-resolution image is smaller than the HR image by a factor of " + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "inline_equation", + "content": "1 / N" + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "content": ". The common approach for generating the training pairs is to gather a set of high-resolution images, perform synthetic degradation to obtain the corresponding low-resolution image and train a model that directly performs " + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "inline_equation", + "content": "\\times N" + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "content": " magnification [22, 32, 34] using eq. 2. 
Instead, we start by training a standard diffusion-based teacher for a lower SF, using a less degraded LR image, e.g. " + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "inline_equation", + "content": "2 / N" + }, + { + "bbox": [ + 130, + 486, + 482, + 582 + ], + "type": "text", + "content": ", as input and use its prediction to train the student." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": "More precisely, Let us assume " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{z}}_{\\phi}, \\hat{\\mathbf{z}}_{\\theta}" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " be the teacher and student denoising models parameterized by " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\phi, \\theta" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " respectively. To train the student for a factor of " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": ", we generate two degraded images for a given high-resolution image with factors " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "1/N, 2/N" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": ", with latent representations denoted by " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_l, \\mathbf{z}_l'" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " respectively. That means " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_l'" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " is less degraded compared to " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_l" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": ". Similar to the standard diffusion model training, we sample random noise at " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " and add it to the high-resolution image to obtain " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_t" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": ". The scale distillation loss will be:" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "text", + "content": "M. Noroozi et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 176, + 114, + 439, + 284 + ], + "blocks": [ + { + "bbox": [ + 176, + 114, + 439, + 284 + ], + "lines": [ + { + "bbox": [ + 176, + 114, + 439, + 284 + ], + "spans": [ + { + "bbox": [ + 176, + 114, + 439, + 284 + ], + "type": "image", + "image_path": "38a9bae124367f962eb6df6c7f926ca744746af1e5888a17a6c5ed30e9674a15.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "lines": [ + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "spans": [ + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": "Fig. 2: Training pipeline of proposed scale distillation. For a given HR image (e.g. size " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": ") shown in green, we generate two degraded versions with factors of " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "2 / N, 1 / N" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": " (e.g. sizes " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": "), shown in yellow and red respectively. Both degraded images are resized back via bicubic upsampling to " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": " to be used as input to the encoder, which projects them to " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "4 \\times 64 \\times 64" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": " tensors. The less and more degraded LR image is used as input to the teacher and student respectively via concatenation with the noisy version of the HR image, i.e. " + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_t" + }, + { + "bbox": [ + 131, + 292, + 482, + 392 + ], + "type": "text", + "content": ". The teacher's output is used as the target for training the student. Note that the teacher is first trained independently for a smaller magnification scale and then frozen during student training." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 198, + 434, + 481, + 454 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 198, + 434, + 481, + 454 + ], + "spans": [ + { + "bbox": [ + 198, + 434, + 481, + 454 + ], + "type": "interline_equation", + "content": "\\underset{\\theta}{\\operatorname{argmin}}\\, \\mathbb{E}_{\\epsilon, t}\\left[\\omega(\\lambda_{t})\\left\\|\\hat{\\mathbf{z}}_{\\theta}(\\mathbf{z}_{t}, \\mathbf{z}_{l}, \\lambda_{t}) - \\hat{\\mathbf{z}}_{\\phi}(\\mathbf{z}_{t}, \\mathbf{z}_{l}^{\\prime}, \\lambda_{t})\\right\\|_{2}^{2}\\right] \\tag{3}", + "image_path": "b0a0a8314c07a1a3192acb509dc403550287bd2002623329096aa5ab13ee73a6.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "spans": [ + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "text", + "content": "where the teacher is trained for " + }, + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "inline_equation", + "content": "N / 2" + }, + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "text", + "content": " magnification and frozen, and the student is initialized with the teacher's weights before the training. Note that we are using the latent diffusion framework that allows exactly the same architecture and input shapes for both the teacher and the student. Although the input low-resolution images for the student and teacher are of different sizes, they are both resized to a fixed size and fed to the encoder, which projects them to a tensor with a fixed size of " + }, + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "inline_equation", + "content": "4 \\times 64 \\times 64" + }, + { + "bbox": [ + 130, + 462, + 482, + 556 + ], + "type": "text", + "content": ". Fig. 2 illustrates the proposed scale distillation process." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": "The idea of scale distillation is in line with that of progressive temporal distillation [26]. While a standard denoising model would only use the final image as the target irrespective of the sampled time step " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " (see Eq. 2), both scale and progressive temporal distillation rely on the teacher to provide a supervisory signal specific for step " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " (see Eq. 3). In this way, the supervisory signal is attuned to the specific denoising step, providing stable and consistent supervision at every denoising step. Fig. 3 provides empirical support for our hypothesis. 
We observe a significant gap between the distilled models from " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " compared to the models that are directly trained for " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": ", respectively." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 173, + 147, + 294, + 240 + ], + "blocks": [ + { + "bbox": [ + 173, + 147, + 294, + 240 + ], + "lines": [ + { + "bbox": [ + 173, + 147, + 294, + 240 + ], + "spans": [ + { + "bbox": [ + 173, + 147, + 294, + 240 + ], + "type": "image", + "image_path": "51c5031199a3dfaae7a222b87d175103cd4bb5ac5c1ab0b85a00ec8965ca36d0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 230, + 245, + 243, + 254 + ], + "lines": [ + { + "bbox": [ + 230, + 245, + 243, + 254 + ], + "spans": [ + { + "bbox": [ + 230, + 245, + 243, + 254 + ], + "type": "text", + "content": "×4" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 313, + 148, + 435, + 240 + ], + "blocks": [ + { + "bbox": [ + 313, + 148, + 435, + 240 + ], + "lines": [ + { + "bbox": [ + 313, + 148, + 435, + 240 + ], + "spans": [ + { + "bbox": [ + 313, + 148, + 435, + 240 + ], + "type": "image", + "image_path": "3c1f8b06bd14586d60a1964f2a841bcf04589e3cd52dc0b422c69e224ca2893e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 372, + 245, + 384, + 254 + ], + "lines": [ + { + "bbox": [ + 372, + 245, + 384, + 254 + ], + "spans": [ + { + "bbox": [ + 372, + 245, + 384, + 254 + ], + "type": "text", + "content": "×8" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "lines": [ + { + "bbox": [ + 130, + 262, + 
482, + 351 + ], + "spans": [ + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": "Fig. 3: FID vs. number of DDIM steps on the DIV2K validation set obtained through bicubic degradation using SD for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " magnifications trained with scale distillation and standard training. We use " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 2 \\rightarrow \\times 4" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " scale distillation for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": ", and compare with the standard training directly for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " respectively. All results are obtained using the original SD decoder. The model trained with scale distillation outperforms the standard training by a large margin when using fewer steps for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": ". The gap between scale distillation and the standard training is significantly higher for small numbers of steps for " + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 262, + 482, + 351 + ], + "type": "text", + "content": " and remains steady for large numbers of steps." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 374, + 480, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 374, + 480, + 399 + ], + "spans": [ + { + "bbox": [ + 130, + 374, + 480, + 399 + ], + "type": "text", + "content": "The gap is especially striking when evaluated with few inference steps and, as expected, shrinks as the number of steps increases and quality saturates."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "spans": [ + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "content": "Similar to the temporal progressive distillation [26], the proposed scale distillation process can be applied iteratively with higher scale factors at each training step. The first student is initialized from scratch and trained on the raw data, similar to the standard training. Consequently, this student becomes the new teacher for training the next scale factor. In this paper, we consider three distillation steps up to the scale factor of " + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "content": " starting from " + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "content": ", i.e. " + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "inline_equation", + "content": "\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8" + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "content": ". As it is shown in Fig. 3, scale distillation is significantly more effective for " + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 399, + 482, + 507 + ], + "type": "text", + "content": " magnification where the LR image is of even lower quality, thereby reinforcing the importance of our proposed progressive scale training strategy." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 525, + 261, + 537 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 525, + 261, + 537 + ], + "spans": [ + { + "bbox": [ + 131, + 525, + 261, + 537 + ], + "type": "text", + "content": "3.3 Decoder fine-tuning" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": "While scale distillation improves the one-step inference noticeably, there is still a gap between the one-step model and the saturated performance with a larger number of steps, see Fig. 3. To fill this gap, we propose to fine-tune the decoder on top of the frozen one-step diffusion model resulting from scale distillation. That is, after training the diffusion model, we freeze the U-Net, apply one DDIM step for a given LR image, and use it as input to fine-tune the decoder for the SR task. We use the original loss that has been used for training the autoencoder [22]. Importantly, this fine-tuning strategy with the U-Net in place is only possible with a diffusion model that can work properly with one step as enabled by our scale distillation approach; see Table. 3. 
We empirically show that the" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "type": "text", + "content": "M. Noroozi et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": "combination of our scale distillation approach with decoder fine-tuning yields a one-step model that can readily compete with models requiring a large number of inference steps." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 163, + 482, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 163, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 163, + 482, + 224 + ], + "type": "text", + "content": "Implementation details. We use Stable diffusion v1.5 as our backbone and initialize our teacher with the text-to-image model. We use our own implementation of the v-parameterization with a cosine schedule. We use 4 A100 GPUs for all our experiments and train with a batch size of 60 with a gradient accumulation factor of 4." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 131, + 240, + 230, + 254 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 240, + 230, + 254 + ], + "spans": [ + { + "bbox": [ + 131, + 240, + 230, + 254 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "spans": [ + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "content": "In this section, we evaluate our YONOS-SR against other methods targeting real image super-resolution at the standard " + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "content": " scale factor in Sec. 4.1 and demonstrate that our proposed scale distillation approach generalizes to higher scale factors of " + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "content": " in Sec. 4.2. We then provide qualitative results for " + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 264, + 482, + 338 + ], + "type": "text", + "content": " in Sec. 4.3. Finally, we perform ablation studies to highlight the role of our main contributions in Sec. 4.4." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 354, + 371, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 354, + 371, + 366 + ], + "spans": [ + { + "bbox": [ + 131, + 354, + 371, + 366 + ], + "type": "text", + "content": "4.1 Evaluation on real image super resolution" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 131, + 372, + 481, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 372, + 481, + 397 + ], + "spans": [ + { + "bbox": [ + 131, + 372, + 481, + 397 + ], + "type": "text", + "content": "We begin by evaluating the performance of our proposed YONOS-SR model in the standard real image super-resolution setting targeting the " + }, + { + "bbox": [ + 131, + 372, + 481, + 397 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 131, + 372, + 481, + 397 + ], + "type": "text", + "content": " scale factor." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 407, + 481, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 407, + 481, + 454 + ], + "spans": [ + { + "bbox": [ + 130, + 407, + 481, + 454 + ], + "type": "text", + "content": "Datasets. Following previous work (e.g. [2,32,34,41]), we use DIV2K [1], DIV8K [7], Flickr2k [30], OST [36] and a subset of 10K images from FFHQ training set [13] to train our model. We adopt the Real-ESRGAN [34] degradation pipeline to generate synthetic LR-HR pairs." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 455, + 482, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 455, + 482, + 529 + ], + "spans": [ + { + "bbox": [ + 130, + 455, + 482, + 529 + ], + "type": "text", + "content": "We then evaluate our model on both synthetic and real datasets. Similar to [32], we use 3K LR-HR (128 → 512) pairs synthesized from the DIV2K validation set using the Real-ESRGAN degradation pipeline as our synthetic dataset. We also report results on the standard DIV2K validation split with bicubic degradations for completeness. For the real dataset, we use " + }, + { + "bbox": [ + 130, + 455, + 482, + 529 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 130, + 455, + 482, + 529 + ], + "type": "text", + "content": " center crops from the RealSR [11], DRealSR [38] and DPED-iphone [10] datasets." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 131, + 538, + 481, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 538, + 481, + 586 + ], + "spans": [ + { + "bbox": [ + 131, + 538, + 481, + 586 + ], + "type": "text", + "content": "Evaluation metrics. We evaluate using various perceptual and image quality metrics, including LPIPS [43], FID [9] (where applicable), as well as the no-reference image quality metric, MUSIQ [14]. For the synthetic datasets, we also report standard PSNR and SSIM metrics, for reference." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 597, + 482, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 597, + 482, + 671 + ], + "spans": [ + { + "bbox": [ + 130, + 597, + 482, + 671 + ], + "type": "text", + "content": "Baselines. As the main contribution of our paper targets improving the inference process of diffusion-based super-resolution, our main points of comparison are diffusion-based SR models, including the recent StableSR model [32], ResShift [40], and the original LDM model [22]. 
For completeness, we also include comparison to other non-diffusion-based baselines, including: RealSR [11], BSRGAN [41], RealESRGAN [34], DASR [16] and FeMaSR [2]." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 134, + 114, + 476, + 213 + ], + "blocks": [ + { + "bbox": [ + 134, + 114, + 476, + 213 + ], + "lines": [ + { + "bbox": [ + 134, + 114, + 476, + 213 + ], + "spans": [ + { + "bbox": [ + 134, + 114, + 476, + 213 + ], + "type": "table", + "html": "
DatasetsMetricsRealSRBSRGANDASRReal-ESRGAN +FeMaSRLDMResShiftStableSRYONOS (ours)
DIV2K Valid RealESRGAN degradationsFID ↓49.4944.2249.1637.6435.8726.4730.4524.4421.86
LPIPS ↓0.52760.33510.35430.31120.31990.25100.30760.31140.2310
PSNR ↑24.6224.5824.4724.2823.0623.3224.6223.2624.74
SSIM ↑0.59700.62690.63040.63720.58870.57620.62100.57260.6428
MUSIQ ↑28.5761.1955.1961.0560.8362.2763.5865.9270.30
DIV2K Valid bicubic degradationsLPIPS ↓-0.23640.16960.2284-0.23230.17750.25800.1703
PSNR ↑-27.3228.5526.65-25.4927.2421.9026.26
RealSRLPIPS ↓0.35700.26560.31340.27090.29370.31590.32790.30020.2479
MUSIQ ↑38.2663.2841.2160.3659.0658.9059.8765.8869.21
DRealSRLPIPS ↓0.39380.28580.30990.28180.31570.33790.38700.32840.2721
MUSIQ ↑26.9357.1642.4154.2653.7153.7254.1358.5166.26
DPED-iphoneMUSIQ ↑45.6045.8932.6842.4249.9544.2338.5950.4859.45
-# STEPS ↓-----20042001
", + "image_path": "77edf0af07cb2406b2fa6b1e728f9e81d03646b18605fae66c2fcb2ea222cfb4.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 214, + 480, + 246 + ], + "lines": [ + { + "bbox": [ + 132, + 214, + 480, + 246 + ], + "spans": [ + { + "bbox": [ + 132, + 214, + 480, + 246 + ], + "type": "text", + "content": "Table 1: Comparison to baselines. Results in Red and Blue correspond to best and second best results, resp. Cells with - indicate that there were no previously reported results using the considered baseline and corresponding metric." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 130, + 273, + 480, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 273, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 130, + 273, + 480, + 357 + ], + "type": "text", + "content": "Results. Results summarized in Tab. 1 show that YONOS-SR outperforms all other diffusion-based SR methods, while using only one inference step, whereas other alternatives use 200 inference steps. These results highlight the efficiency of YONOS-SR in reducing the number of steps to one without compromising performance but indeed improving it further. Also, our model outperforms all considered baselines in 5 out of 7 metrics on the synthetic data and all comparison points on the real datasets." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 373, + 348, + 385 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 373, + 348, + 385 + ], + "spans": [ + { + "bbox": [ + 132, + 373, + 348, + 385 + ], + "type": "text", + "content": "4.2 Generalization to higher scale factors" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "spans": [ + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": "We now evaluate the generalization capability of our proposed scale distillation approach. To this end, we train our YONOS-SR model with one more iteration of scale distillation, thereby going from a model capable of handling " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": " magnifications to " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": " magnifications. We then fine-tune the decoder on top of the one-step " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": " diffusion model. To evaluate this model, we follow recent work [3], and evaluate on the same subset of ImageNet and FFHQ for " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": " magnification, i.e. " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "64 \\times 64 \\rightarrow 512 \\times 512" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": ". 
In particular, we select the same 1k subset of ImageNet test set by first ordering the 10k images by name and then selecting the 1k subset via interleaved sampling, i.e. using images of index 0, 10, 20, etc. To obtain the LR-HR pairs, we use " + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 391, + 481, + 534 + ], + "type": "text", + "content": " average pooling degradations. In the case of FFHQ, we use the first 1k images of the validation set. We also evaluate using the same metrics and baselines reported in this recent work [3]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 535, + 481, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 535, + 481, + 606 + ], + "spans": [ + { + "bbox": [ + 130, + 535, + 481, + 606 + ], + "type": "text", + "content": "The results summarized in Tab. 2 demonstrate that our proposed one-step method generalizes well to higher scale factors, where it is able to achieve good results in terms of FID and LPIPS scores, which are known to better align with human observation, especially at higher magnification factors [24]. Notably, unlike baselines, our model has not been trained on ImageNet data. We use only " + }, + { + "bbox": [ + 130, + 535, + 481, + 606 + ], + "type": "inline_equation", + "content": "10\\mathrm{k}" + }, + { + "bbox": [ + 130, + 535, + 481, + 606 + ], + "type": "text", + "content": " images of FFHQ in our training set." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 623, + 271, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 623, + 271, + 635 + ], + "spans": [ + { + "bbox": [ + 132, + 623, + 271, + 635 + ], + "type": "text", + "content": "4.3 Qualitative evaluation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "content": "In addition to extensive quantitative evaluations, we qualitatively compare one-step YONOS-SR with 200-step StableSR and standard diffusion-based SR (SD-" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "text", + "content": "M. Noroozi et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 149, + 135, + 223, + 209 + ], + "blocks": [ + { + "bbox": [ + 149, + 135, + 223, + 209 + ], + "lines": [ + { + "bbox": [ + 149, + 135, + 223, + 209 + ], + "spans": [ + { + "bbox": [ + 149, + 135, + 223, + 209 + ], + "type": "image", + "image_path": "c771e3ae9778fc241b9b90ee0fee4a35e24bd82df655e6a0419a959874d1b029.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 181, + 212, + 194, + 223 + ], + "lines": [ + { + "bbox": [ + 181, + 212, + 194, + 223 + ], + "spans": [ + { + "bbox": [ + 181, + 212, + 194, + 223 + ], + "type": "text", + "content": "(a)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 225, + 137, + 298, + 209 + ], + "blocks": [ + { + "bbox": [ + 225, + 137, + 298, + 209 + ], + "lines": [ + { + "bbox": [ + 225, + 137, + 298, + 209 + ], + "spans": [ + { + "bbox": [ + 225, + 137, + 298, + 209 + ], + "type": "image", + "image_path": "908bb75aaa052e5444c7a7d6f4693f968c94ac6f805741c821bf75acd5fdb5fb.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 255, + 212, + 268, + 223 + ], + "lines": [ + { + "bbox": [ + 255, + 212, + 268, + 223 + ], + "spans": [ + { + "bbox": [ + 255, + 212, + 268, + 223 + ], + "type": "text", + "content": "(b)" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 299, + 137, + 373, + 209 + ], + "blocks": [ + { + "bbox": [ + 299, + 137, + 373, + 209 + ], + "lines": [ + { + "bbox": [ + 299, + 137, + 373, + 209 + ], + "spans": [ + { + "bbox": [ + 299, + 137, + 373, + 209 + ], + "type": "image", + "image_path": "2c167900d29cb02d87af2202ec7c2be66e3c7961e6f56ec644423dccb60b58f0.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 331, + 212, + 342, + 223 + ], + "lines": [ + { + "bbox": [ + 331, + 212, + 342, + 223 + ], + "spans": [ + { + "bbox": [ + 331, + 212, + 342, + 223 + ], + "type": "text", + "content": "(c)" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 374, + 137, + 447, + 210 + ], + "blocks": [ + { + "bbox": [ + 374, + 137, + 447, + 210 + ], + "lines": [ + { + "bbox": [ + 374, + 137, + 447, + 210 + ], + "spans": [ + { + "bbox": [ + 374, + 137, + 447, + 210 + ], + "type": "image", + "image_path": "9aa69fe72f9a8d70bebe6b5bb9b9f3aff5336855020634833dca2ceedc0d87ee.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 405, + 212, + 417, + 223 + ], + "lines": [ + { + "bbox": [ + 405, + 212, + 417, + 223 + ], + "spans": [ + { + "bbox": [ + 405, + 212, + 417, + 223 + ], + "type": "text", + "content": "(d)" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 149, + 245, + 223, + 318 + ], + "blocks": [ + { + "bbox": [ + 149, + 245, + 223, + 318 + ], + "lines": [ + { + "bbox": [ + 149, + 245, + 223, + 318 + ], + "spans": [ + { + "bbox": [ + 149, + 245, + 223, + 318 + ], + "type": "image", + "image_path": "a8cc9a86546513622047ec53b04d2ac89b96a36edd9fe6b1fab1a2d5eade7f05.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 181, + 321, + 193, + 331 + ], + "lines": [ + { 
+ "bbox": [ + 181, + 321, + 193, + 331 + ], + "spans": [ + { + "bbox": [ + 181, + 321, + 193, + 331 + ], + "type": "text", + "content": "(a)" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 225, + 245, + 298, + 318 + ], + "blocks": [ + { + "bbox": [ + 225, + 245, + 298, + 318 + ], + "lines": [ + { + "bbox": [ + 225, + 245, + 298, + 318 + ], + "spans": [ + { + "bbox": [ + 225, + 245, + 298, + 318 + ], + "type": "image", + "image_path": "67e1ef51f8bf256f3f80987f81668d90462a5ea3be86686fbc5dab64216b99ed.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 255, + 321, + 268, + 331 + ], + "lines": [ + { + "bbox": [ + 255, + 321, + 268, + 331 + ], + "spans": [ + { + "bbox": [ + 255, + 321, + 268, + 331 + ], + "type": "text", + "content": "(b)" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 299, + 245, + 372, + 318 + ], + "blocks": [ + { + "bbox": [ + 299, + 245, + 372, + 318 + ], + "lines": [ + { + "bbox": [ + 299, + 245, + 372, + 318 + ], + "spans": [ + { + "bbox": [ + 299, + 245, + 372, + 318 + ], + "type": "image", + "image_path": "acdf924761d980a659c116994952304cf7fa3f2974ab5e37c54ae4460be1a618.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 331, + 321, + 342, + 331 + ], + "lines": [ + { + "bbox": [ + 331, + 321, + 342, + 331 + ], + "spans": [ + { + "bbox": [ + 331, + 321, + 342, + 331 + ], + "type": "text", + "content": "(c)" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 374, + 245, + 447, + 318 + ], + "blocks": [ + { + "bbox": [ + 374, + 245, + 447, + 318 + ], + "lines": [ + { + "bbox": [ + 374, + 245, + 447, + 318 + ], + "spans": [ + { + "bbox": [ + 374, + 245, + 447, + 318 + ], + "type": "image", + "image_path": "3e27db1e4bee637ca789fa88e1c4dd09ec029857d4e5a7c777d54ce579395533.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 405, + 321, + 417, + 331 + ], + "lines": [ + { + "bbox": [ + 405, + 321, + 417, + 331 + ], + "spans": [ + { + "bbox": [ + 405, + 321, + 417, + 331 + ], + "type": "text", + "content": "(d)" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 149, + 354, + 223, + 427 + ], + "blocks": [ + { + "bbox": [ + 149, + 354, + 223, + 427 + ], + "lines": [ + { + "bbox": [ + 149, + 354, + 223, + 427 + ], + "spans": [ + { + "bbox": [ + 149, + 354, + 223, + 427 + ], + "type": "image", + "image_path": "0c5cba40bd1c90df8aaff15ce11537aa80574905614ae618298e4a0d91bf988d.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 181, + 430, + 193, + 441 + ], + "lines": [ + { + "bbox": [ + 181, + 430, + 193, + 441 + ], + "spans": [ + { + "bbox": [ + 181, + 430, + 193, + 441 + ], + "type": "text", + "content": "(a)" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 225, + 354, + 298, + 427 + ], + "blocks": [ + { + "bbox": [ + 225, + 354, + 298, + 427 + ], + "lines": [ + { + "bbox": [ + 225, + 354, + 298, + 427 + ], + "spans": [ + { + "bbox": [ + 225, + 354, + 298, + 427 + ], + "type": "image", + "image_path": "c61e86ace96644ff4ed22f4c84c64aeaed8093e6b634898e407cec2c5d7c38e3.jpg" + } 
+ ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 255, + 430, + 268, + 441 + ], + "lines": [ + { + "bbox": [ + 255, + 430, + 268, + 441 + ], + "spans": [ + { + "bbox": [ + 255, + 430, + 268, + 441 + ], + "type": "text", + "content": "(b)" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 299, + 354, + 373, + 427 + ], + "blocks": [ + { + "bbox": [ + 299, + 354, + 373, + 427 + ], + "lines": [ + { + "bbox": [ + 299, + 354, + 373, + 427 + ], + "spans": [ + { + "bbox": [ + 299, + 354, + 373, + 427 + ], + "type": "image", + "image_path": "6b75b67edbfd4d38a09947eed4abbceab0f002956fdd1e669d0034f232669cd3.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 331, + 430, + 342, + 441 + ], + "lines": [ + { + "bbox": [ + 331, + 430, + 342, + 441 + ], + "spans": [ + { + "bbox": [ + 331, + 430, + 342, + 441 + ], + "type": "text", + "content": "(c)" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 374, + 354, + 447, + 427 + ], + "blocks": [ + { + "bbox": [ + 374, + 354, + 447, + 427 + ], + "lines": [ + { + "bbox": [ + 374, + 354, + 447, + 427 + ], + "spans": [ + { + "bbox": [ + 374, + 354, + 447, + 427 + ], + "type": "image", + "image_path": "76b209a0f120a6784bc8dcb9da09754e330bac7d1b4801e55b8974cb0b3efa99.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 405, + 430, + 417, + 441 + ], + "lines": [ + { + "bbox": [ + 405, + 430, + 417, + 441 + ], + "spans": [ + { + "bbox": [ + 405, + 430, + 417, + 441 + ], + "type": "text", + "content": "(d)" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 130, + 449, + 482, + 525 + ], + "lines": [ + { + "bbox": [ + 130, + 449, + 482, + 525 + ], + "spans": [ + { + "bbox": [ + 130, + 449, + 482, + 525 + ], + "type": "text", + "content": "Fig. 4: Qualitative comparison on the validation set of DIV2K dataset: (a) 200-step StableSR (b) 200-step standard SD-SR (c) 1-step YONOS(ours) (d) the ground truth. SD-SR represents the standard Stable Diffusion-based SR model. 200-step StableSR and SD-SR tend to over-sharpen, adding artifacts that do not match the ground truth content. Our SR images match the most with the corresponding ground truth image; see the faces, Pepsi, and crocodile textures in the first, second, and third rows, respectively. The images are best seen in a display and zoomed in." + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "bbox": [ + 130, + 542, + 482, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 542, + 482, + 590 + ], + "spans": [ + { + "bbox": [ + 130, + 542, + 482, + 590 + ], + "type": "text", + "content": "SR) in Fig. 4. Our method generates the closest SR images to the ground truth in terms of detailed textures while taking only 1-step during the inference. These observations are in line with the numerical superiority of our method in the quantitative evaluations above." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "content": "As it is clearly demonstrated in Fig. 
3, scale distillation is even more effective for " + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "content": " compared to " + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "content": " magnification. As a qualitative support, we compare the model trained directly for " + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "content": " magnification without scale distillation to our model trained with three iterations of scale distillation " + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\times 2\\rightarrow \\times 4\\rightarrow \\times 8" + }, + { + "bbox": [ + 130, + 594, + 482, + 665 + ], + "type": "text", + "content": " in Fig. 5. Again, we use the validation set of DIV2K dataset. In line with the numerical analyses in Fig. 3, we observe that the model trained with scale distillation out-" + } + ] + } + ], + "index": 28 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 143, + 206, + 218 + ], + "blocks": [ + { + "bbox": [ + 133, + 143, + 206, + 218 + ], + "lines": [ + { + "bbox": [ + 133, + 143, + 206, + 218 + ], + "spans": [ + { + "bbox": [ + 133, + 143, + 206, + 218 + ], + "type": "image", + "image_path": "1593eaecf963deb3f2ca889ae6bd00e11a858ceb3d06d5fd5f0a9f652b65d7bf.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 159, + 220, + 180, + 232 + ], + "lines": [ + { + "bbox": [ + 159, + 220, + 180, + 232 + ], + "spans": [ + { + "bbox": [ + 159, + 220, + 180, + 232 + ], + "type": "text", + "content": "(LR)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 225, + 143, + 299, + 218 + ], + "blocks": [ + { + "bbox": [ + 210, + 141, + 220, + 216 + ], + "lines": [ + { + "bbox": [ + 210, + 141, + 220, + 216 + ], + "spans": [ + { + "bbox": [ + 210, + 141, + 220, + 216 + ], + "type": "text", + "content": "8 8" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 225, + 143, + 299, + 218 + ], + "lines": [ + { + "bbox": [ + 225, + 143, + 299, + 218 + ], + "spans": [ + { + "bbox": [ + 225, + 143, + 299, + 218 + ], + "type": "image", + "image_path": "4bd1fc4401795e4edfcd4e705006a0a6a967c3b794a42e0ae62c96c756c39459.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 220, + 282, + 232 + ], + "lines": [ + { + "bbox": [ + 242, + 220, + 282, + 232 + ], + "spans": [ + { + "bbox": [ + 242, + 220, + 
282, + 232 + ], + "type": "text", + "content": "(64 steps)" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 299, + 144, + 373, + 218 + ], + "blocks": [ + { + "bbox": [ + 299, + 144, + 373, + 218 + ], + "lines": [ + { + "bbox": [ + 299, + 144, + 373, + 218 + ], + "spans": [ + { + "bbox": [ + 299, + 144, + 373, + 218 + ], + "type": "image", + "image_path": "c67e480cb8323b088bb764ad4f7ee50accb7984e147e4253ae94c0eedb925271.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 220, + 353, + 232 + ], + "lines": [ + { + "bbox": [ + 317, + 220, + 353, + 232 + ], + "spans": [ + { + "bbox": [ + 317, + 220, + 353, + 232 + ], + "type": "text", + "content": "(4 steps)" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 374, + 144, + 446, + 218 + ], + "blocks": [ + { + "bbox": [ + 374, + 144, + 446, + 218 + ], + "lines": [ + { + "bbox": [ + 374, + 144, + 446, + 218 + ], + "spans": [ + { + "bbox": [ + 374, + 144, + 446, + 218 + ], + "type": "image", + "image_path": "c83f145995c8d7e7db504f1d8c9d855cf4f0c13475a5d877874301187ca874c9.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 220, + 425, + 232 + ], + "lines": [ + { + "bbox": [ + 394, + 220, + 425, + 232 + ], + "spans": [ + { + "bbox": [ + 394, + 220, + 425, + 232 + ], + "type": "text", + "content": "(1 step)" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 132, + 252, + 207, + 327 + ], + "blocks": [ + { + "bbox": [ + 132, + 252, + 207, + 327 + ], + "lines": [ + { + "bbox": [ + 132, + 252, + 207, + 327 + ], + "spans": [ + { + "bbox": [ + 132, + 252, + 207, + 327 + ], + "type": "image", + "image_path": "98e449cae1f0a5894b1a17e3f8e6654bc6a5976f197a660dba5ab10a955c4ba2.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 159, + 329, + 181, + 340 + ], + "lines": [ + { + "bbox": [ + 159, + 329, + 181, + 340 + ], + "spans": [ + { + "bbox": [ + 159, + 329, + 181, + 340 + ], + "type": "text", + "content": "(HR)" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 225, + 253, + 299, + 327 + ], + "blocks": [ + { + "bbox": [ + 208, + 252, + 225, + 319 + ], + "lines": [ + { + "bbox": [ + 208, + 252, + 225, + 319 + ], + "spans": [ + { + "bbox": [ + 208, + 252, + 225, + 319 + ], + "type": "text", + "content": "eannnnnne" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 225, + 253, + 299, + 327 + ], + "lines": [ + { + "bbox": [ + 225, + 253, + 299, + 327 + ], + "spans": [ + { + "bbox": [ + 225, + 253, + 299, + 327 + ], + "type": "image", + "image_path": "60dafed7a0530187c0f075172d34e87e7c33c274bd6e0e29cc367efd02ec18d9.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 329, + 282, + 340 + ], + "lines": [ + { + "bbox": [ + 242, + 329, + 282, + 340 + ], + "spans": [ + { + "bbox": [ + 242, + 329, + 282, + 340 + ], + "type": "text", + "content": "(64 steps)" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 299, + 253, + 372, + 327 + ], + "blocks": [ + { + "bbox": [ + 299, + 253, + 372, + 327 + ], + "lines": 
[ + { + "bbox": [ + 299, + 253, + 372, + 327 + ], + "spans": [ + { + "bbox": [ + 299, + 253, + 372, + 327 + ], + "type": "image", + "image_path": "48f6340e5ad851e9eb417d20b808938e21affb4f3c7a8030500edd54a68286a5.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 329, + 353, + 340 + ], + "lines": [ + { + "bbox": [ + 317, + 329, + 353, + 340 + ], + "spans": [ + { + "bbox": [ + 317, + 329, + 353, + 340 + ], + "type": "text", + "content": "(4 steps)" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 373, + 253, + 446, + 327 + ], + "blocks": [ + { + "bbox": [ + 373, + 253, + 446, + 327 + ], + "lines": [ + { + "bbox": [ + 373, + 253, + 446, + 327 + ], + "spans": [ + { + "bbox": [ + 373, + 253, + 446, + 327 + ], + "type": "image", + "image_path": "18fbc3a4f44ed175fc74a49abbb31236e4fe684ce6761085383fedce2ea75791.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 329, + 425, + 340 + ], + "lines": [ + { + "bbox": [ + 394, + 329, + 425, + 340 + ], + "spans": [ + { + "bbox": [ + 394, + 329, + 425, + 340 + ], + "type": "text", + "content": "(1 step)" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 132, + 360, + 207, + 435 + ], + "blocks": [ + { + "bbox": [ + 132, + 360, + 207, + 435 + ], + "lines": [ + { + "bbox": [ + 132, + 360, + 207, + 435 + ], + "spans": [ + { + "bbox": [ + 132, + 360, + 207, + 435 + ], + "type": "image", + "image_path": "17d41dc09b3261d9601d38f6881aa572f1c1b2862d5cf3e68ae6ec28d6589c15.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 159, + 437, + 180, + 449 + ], + "lines": [ + { + "bbox": [ + 159, + 437, + 180, + 449 + ], + "spans": [ + { + "bbox": [ + 159, + 437, + 180, + 449 + ], + "type": "text", + "content": "(LR)" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 210, + 359, + 221, + 434 + ], + "lines": [ + { + "bbox": [ + 210, + 359, + 221, + 434 + ], + "spans": [ + { + "bbox": [ + 210, + 359, + 221, + 434 + ], + "type": "text", + "content": "aee" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 225, + 361, + 299, + 435 + ], + "blocks": [ + { + "bbox": [ + 225, + 361, + 299, + 435 + ], + "lines": [ + { + "bbox": [ + 225, + 361, + 299, + 435 + ], + "spans": [ + { + "bbox": [ + 225, + 361, + 299, + 435 + ], + "type": "image", + "image_path": "6dd96e572e8410ae31f707ec7f7d895970dbc3b6ef1fd587093e982f66422b73.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 437, + 282, + 449 + ], + "lines": [ + { + "bbox": [ + 242, + 437, + 282, + 449 + ], + "spans": [ + { + "bbox": [ + 242, + 437, + 282, + 449 + ], + "type": "text", + "content": "(64 steps)" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 299, + 361, + 372, + 435 + ], + "blocks": [ + { + "bbox": [ + 299, + 361, + 372, + 435 + ], + "lines": [ + { + "bbox": [ + 299, + 361, + 372, + 435 + ], + "spans": [ + { + "bbox": [ + 299, + 361, + 372, + 435 + ], + "type": "image", + "image_path": "471d2a50f6ca06a1626076b9868d9041ab7b9490b3d477bc2b60ea97028454bc.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + 
"bbox": [ + 317, + 437, + 353, + 449 + ], + "lines": [ + { + "bbox": [ + 317, + 437, + 353, + 449 + ], + "spans": [ + { + "bbox": [ + 317, + 437, + 353, + 449 + ], + "type": "text", + "content": "(4 steps)" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 373, + 361, + 446, + 435 + ], + "blocks": [ + { + "bbox": [ + 373, + 361, + 446, + 435 + ], + "lines": [ + { + "bbox": [ + 373, + 361, + 446, + 435 + ], + "spans": [ + { + "bbox": [ + 373, + 361, + 446, + 435 + ], + "type": "image", + "image_path": "b3e30bf510e75e692415248f6a526f56d5a9d347630a644c665c472c78ab6f77.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 437, + 425, + 449 + ], + "lines": [ + { + "bbox": [ + 394, + 437, + 425, + 449 + ], + "spans": [ + { + "bbox": [ + 394, + 437, + 425, + 449 + ], + "type": "text", + "content": "(1 step)" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 132, + 469, + 207, + 543 + ], + "blocks": [ + { + "bbox": [ + 132, + 469, + 207, + 543 + ], + "lines": [ + { + "bbox": [ + 132, + 469, + 207, + 543 + ], + "spans": [ + { + "bbox": [ + 132, + 469, + 207, + 543 + ], + "type": "image", + "image_path": "73b71fd4760d74e9535b5c910440e83be3f6733c46e453fbf084325f412cb714.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 159, + 546, + 180, + 557 + ], + "lines": [ + { + "bbox": [ + 159, + 546, + 180, + 557 + ], + "spans": [ + { + "bbox": [ + 159, + 546, + 180, + 557 + ], + "type": "text", + "content": "(HR)" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_caption" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 225, + 470, + 299, + 544 + ], + "blocks": [ + { + "bbox": [ + 208, + 469, + 225, + 537 + ], + "lines": [ + { + "bbox": [ + 208, + 469, + 225, + 537 + ], + "spans": [ + { + "bbox": [ + 208, + 469, + 225, + 537 + ], + "type": "text", + "content": "Scale distillation " + }, + { + "bbox": [ + 208, + 469, + 225, + 537 + ], + "type": "inline_equation", + "content": "\\times 2\\uparrow \\uparrow \\times 4\\times 8" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 225, + 470, + 299, + 544 + ], + "lines": [ + { + "bbox": [ + 225, + 470, + 299, + 544 + ], + "spans": [ + { + "bbox": [ + 225, + 470, + 299, + 544 + ], + "type": "image", + "image_path": "1b523942e03785179fa398d3e42a6a78ee1dbbc3616de7d420c03e66ff55d182.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 546, + 282, + 557 + ], + "lines": [ + { + "bbox": [ + 242, + 546, + 282, + 557 + ], + "spans": [ + { + "bbox": [ + 242, + 546, + 282, + 557 + ], + "type": "text", + "content": "(64 steps)" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "lines": [ + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "spans": [ + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "text", + "content": "Fig. 
5: Qualitative comparison on the validation set of DIV2K dataset for " + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "text", + "content": " magnification when the model is trained directly for " + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "text", + "content": " magnification without scale distillation (top row) and with three iterations of scale distillation " + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "inline_equation", + "content": "\\times 2\\rightarrow \\times 4\\rightarrow \\times 8" + }, + { + "bbox": [ + 131, + 565, + 482, + 654 + ], + "type": "text", + "content": " (bottom row). We show the input LR image results with 1, 4, and 64 steps using the original decoder and the corresponding HR image for both models. The model trained with scale distillation outperforms the standard training with high margins. Specifically, due to poor LR input, the standard training fails to recover the relevant content. The images are best seen in a display and zoomed in." + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_caption" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 299, + 470, + 372, + 544 + ], + "blocks": [ + { + "bbox": [ + 299, + 470, + 372, + 544 + ], + "lines": [ + { + "bbox": [ + 299, + 470, + 372, + 544 + ], + "spans": [ + { + "bbox": [ + 299, + 470, + 372, + 544 + ], + "type": "image", + "image_path": "74a5aa44e0bd85e98a0dcacdf7383c49daeb3227549af0f5b966c487b1d24a94.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 546, + 353, + 557 + ], + "lines": [ + { + "bbox": [ + 317, + 546, + 353, + 557 + ], + "spans": [ + { + "bbox": [ + 317, + 546, + 353, + 557 + ], + "type": "text", + "content": "(4 steps)" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 373, + 470, + 446, + 544 + ], + "blocks": [ + { + "bbox": [ + 373, + 470, + 446, + 544 + ], + "lines": [ + { + "bbox": [ + 373, + 470, + 446, + 544 + ], + "spans": [ + { + "bbox": [ + 373, + 470, + 446, + 544 + ], + "type": "image", + "image_path": "2ce0dcc22a0d4bcf0b451c9b644357d74b96bb470c8d8cb445e7c90d66ac3ff1.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 546, + 425, + 557 + ], + "lines": [ + { + "bbox": [ + 394, + 546, + 425, + 557 + ], + "spans": [ + { + "bbox": [ + 394, + 546, + 425, + 557 + ], + "type": "text", + "content": "(1 step)" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 100 + ], + "type": "text", + "content": "M. Noroozi et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 178, + 114, + 434, + 199 + ], + "blocks": [ + { + "bbox": [ + 178, + 114, + 434, + 199 + ], + "lines": [ + { + "bbox": [ + 178, + 114, + 434, + 199 + ], + "spans": [ + { + "bbox": [ + 178, + 114, + 434, + 199 + ], + "type": "table", + "html": "
<table><tr><td></td><td colspan="3">ImageNet</td><td colspan="3">FFHQ</td></tr>
<tr><td></td><td>FID ↓</td><td>LPIPS ↓</td><td>PSNR ↑</td><td>FID ↓</td><td>LPIPS ↓</td><td>PSNR ↑</td></tr>
<tr><td>LDPS</td><td>61.09</td><td>0.475</td><td>23.21</td><td>36.81</td><td>0.292</td><td>28.78</td></tr>
<tr><td>GML-DPS [23]</td><td>60.36</td><td>0.456</td><td>23.21</td><td>41.65</td><td>0.318</td><td>28.50</td></tr>
<tr><td>PSLD [23]</td><td>60.81</td><td>0.471</td><td>23.17</td><td>36.93</td><td>0.335</td><td>26.62</td></tr>
<tr><td>LDIR [8]</td><td>63.46</td><td>0.480</td><td>22.23</td><td>36.04</td><td>0.345</td><td>25.79</td></tr>
<tr><td>P2L [3]</td><td>51.81</td><td>0.386</td><td>23.38</td><td>31.23</td><td>0.290</td><td>28.55</td></tr>
<tr><td>YONOS (ours)</td><td>34.59</td><td>0.241</td><td>22.80</td><td>21.41</td><td>0.161</td><td>26.08</td></tr>
</table>
", + "image_path": "e73d077503b500998ad5e8446455621bdc1ceb203c0ee54640edc36f86995099.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 200, + 479, + 222 + ], + "lines": [ + { + "bbox": [ + 132, + 200, + 479, + 222 + ], + "spans": [ + { + "bbox": [ + 132, + 200, + 479, + 222 + ], + "type": "text", + "content": "Table 2: Comparison to baselines on ImageNet subset with x8 magnification factor. The results for other methods are taken from [3]." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "spans": [ + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "text", + "content": "performs the standard training in terms of recovering the corresponding content and details. Note that, the problem of " + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "text", + "content": " magnification is of significantly higher complexity compared to " + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "text", + "content": " due to poor LR input. Notable for these " + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 228, + 480, + 287 + ], + "type": "text", + "content": " qualitative evaluations we use the original decoder (i.e. these results are obtained before the decoder finetuning stage) to emphasize the impact of scale distillation." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 300, + 236, + 313 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 300, + 236, + 313 + ], + "spans": [ + { + "bbox": [ + 132, + 300, + 236, + 313 + ], + "type": "text", + "content": "4.4 Ablation study" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "spans": [ + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "text", + "content": "We now study the impact of the various components introduced in our work. To this end, we use the standard DIV2K validation set with " + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "text", + "content": " low-resolution images obtained through bicubic degradation [1]. We use the FID metric as it is a standard metric for assessing the quality of generative models. Our initial investigation also revealed that FID correlates the most with the human evaluation of the generated images. The validation set of the DIV2K dataset includes only 100 samples. 
To obtain more reliable FID scores, we extract 30 random " + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "text", + "content": " patches and their corresponding " + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 130, + 319, + 480, + 438 + ], + "type": "text", + "content": " HR counterparts from each image in the standard DIV2K bicubic validation set, resulting in a total of 3k LR-HR pairs. For completeness, we also report LPIPS, PSNR, and SSIM scores." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "spans": [ + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": "Impact of scale distillation. We begin by evaluating the impact of our proposed scale distillation on speeding up inference time. To this end, we run two stable diffusion (SD) models trained for " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " super-resolution (SR), with various numbers of inference steps. The first model is a standard SD super-resolution model trained directly for target " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " super-resolution (i.e. SD-SR), while the second model is trained with our proposed scale distillation from " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " magnification to " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": ". We use the same model, training set, and degradation pipeline in training both models. The only difference is the use of our scale distillation in the later model. Specifically, we start with training a teacher for " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " magnification using raw data as a denoising target. We use the " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 2" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " model as a frozen teacher and use its prediction to train a student for " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " magnification. The results summarized in Fig. 3 speaks decisively in favor of our scale distillation approach. 
We can see that the model trained with the proposed scale distillation performs significantly better than direct " + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 450, + 480, + 617 + ], + "type": "text", + "content": " training when using only one step." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "content": "Scale distillation outperforms the standard training more significantly for " + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "content": " magnification where we perform three training iterations for scale distillation, i.e. " + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "inline_equation", + "content": "\\times 2 \\rightarrow \\times 4 \\rightarrow \\times 8" + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "content": ". One reason for the larger gap for " + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "content": " magnification is that the SR task is more ambiguous for " + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 617, + 480, + 665 + ], + "type": "text", + "content": " magnification due to lower quality input." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "content": "As a result, the model benefits more from the more simplified supervisory signal obtained from scale distillation. Note that we use the original SD decoder (i.e. no decoder finetuning) for this experiment to analyze the impact of the scale distillation independently of decoder fine-tuning." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 178, + 303, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 178, + 303, + 346 + ], + "spans": [ + { + "bbox": [ + 130, + 178, + 303, + 346 + ], + "type": "text", + "content": "Impact of decoder fine-tuning. One of the direct consequences of having a diffusion model that can yield good results in one denoising step is that it allows for decoder fine-tuning with the U-Net in place, as it will directly give a good starting point to the decoder. 
To validate the importance of the input given to the decoder prior to fine-tuning and, thereby, the importance of YONOS-SR, we experiment with the standard SD-SR model and our scale distillation model. In both cases, we freeze the U-Net and only allow the" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 346, + 481, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 346, + 481, + 370 + ], + "spans": [ + { + "bbox": [ + 130, + 346, + 481, + 370 + ], + "type": "text", + "content": "models to do 1 denoising step. We then feed their output to the decoder and fine-tune it following the same loss used in the original stable diffusion model [22]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "spans": [ + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "text", + "content": "The results summarized in Tab. 3 validate the importance of having a good initial input from the diffusion model prior to decoder fine-tuning. The left chunk shows that the model trained with scale distillation outperforms the standard training with a good margin when using the original decoder, indicating that the scale distillation results in a U-Net that provides a higher quality input for the decoder. Moreover, as we can see in the right chunk of Tab. 3, fine-tuning the decoder on top of both 1-step models improves the performance. However, the model with scale distillation yields significantly better results than the standard SD-SR directly trained for the target magnification. Once again, the impact of scale distillation is more sensible for " + }, + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "text", + "content": " magnification than " + }, + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 371, + 482, + 526 + ], + "type": "text", + "content": ", which highlights the importance of our approach in such difficult settings. Importantly, this fine-tuning strategy is not computationally feasible with diffusion models that require many denoising steps to give a reasonable starting point for the decoder." + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 326, + 188, + 463, + 278 + ], + "blocks": [ + { + "bbox": [ + 326, + 188, + 463, + 278 + ], + "lines": [ + { + "bbox": [ + 326, + 188, + 463, + 278 + ], + "spans": [ + { + "bbox": [ + 326, + 188, + 463, + 278 + ], + "type": "table", + "html": "
<table><tr><td>Decoder</td><td colspan="2">Original</td><td colspan="2">Fine-tuned</td></tr>
<tr><td>Scale distillation</td><td>✗</td><td>✓</td><td>✗</td><td>✓</td></tr>
<tr><td colspan="5">×4</td></tr>
<tr><td>FID ↓</td><td>27.93</td><td>23.96</td><td>16.26</td><td>15.54</td></tr>
<tr><td>LPIPS ↓</td><td>0.227</td><td>0.207</td><td>0.163</td><td>0.159</td></tr>
<tr><td>PSNR ↑</td><td>25.94</td><td>26.24</td><td>25.73</td><td>26.30</td></tr>
<tr><td>SSIM ↑</td><td>0.711</td><td>0.714</td><td>0.713</td><td>0.727</td></tr>
<tr><td colspan="5">×8</td></tr>
<tr><td>FID ↓</td><td>102.92</td><td>66.90</td><td>41.54</td><td>28.47</td></tr>
<tr><td>LPIPS ↓</td><td>0.541</td><td>0.403</td><td>0.305</td><td>0.243</td></tr>
<tr><td>PSNR ↑</td><td>21.08</td><td>24.46</td><td>21.53</td><td>23.96</td></tr>
<tr><td>SSIM ↑</td><td>0.541</td><td>0.647</td><td>0.528</td><td>0.632</td></tr>
</table>
", + "image_path": "dae694401bb3c342b3af01503134343711c5059f370cfecf849b04fcfa71f032.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "spans": [ + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "type": "text", + "content": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step." + } + ] + } + ], + "index": 7, + "type": "text" + }, + { + "bbox": [ + 131, + 544, + 220, + 558 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 544, + 220, + 558 + ], + "spans": [ + { + "bbox": [ + 131, + 544, + 220, + 558 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "content": "In summary, in this paper, we introduced the first fast stable diffusion-based super-resolution method. To achieve this, we introduced scale distillation, an approach that allows us to tackle the SR problem in as little as one step. Having a fast diffusion model allowed us to directly fine-tune the decoder, which we show yields state-of-the-art results, even at high magnification factors and only using a single step. We hope that the proposed distillation approach could be adapted for other inverse imaging problems (e.g. image inpainting), which we believe is an interesting direction for future research." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "spans": [ + { + "bbox": [ + 308, + 287, + 482, + 321 + ], + "type": "text", + "content": "Table 3: Role of scale distillation and decoder fine-tuning. All results reported here are obtained with 1 inference step." + } + ] + } + ], + "index": 10, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "text", + "content": "M. Noroozi et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 141, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 138, + 141, + 480, + 175 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 141, + 480, + 175 + ], + "spans": [ + { + "bbox": [ + 138, + 141, + 480, + 175 + ], + "type": "text", + "content": "1. Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. 
In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 176, + 480, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 176, + 480, + 208 + ], + "spans": [ + { + "bbox": [ + 138, + 176, + 480, + 208 + ], + "type": "text", + "content": "2. Chen, C., Shi, X., Qin, Y., Li, X., Han, X., Yang, T., Guo, S.: Real-world blind super-resolution via feature matching with implicit high-resolution priors. In: ACM International Conference on Multimedia (2022)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 209, + 480, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 209, + 480, + 231 + ], + "spans": [ + { + "bbox": [ + 138, + 209, + 480, + 231 + ], + "type": "text", + "content": "3. Chung, H., Ye, J.C., Milanfar, P., Delbracio, M.: Prompt-tuning latent diffusion models for inverse problems. In: arXiv preprint arXiv: 2310.01110 (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 232, + 480, + 253 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 232, + 480, + 253 + ], + "spans": [ + { + "bbox": [ + 138, + 232, + 480, + 253 + ], + "type": "text", + "content": "4. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision (2014)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 255, + 480, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 255, + 480, + 286 + ], + "spans": [ + { + "bbox": [ + 138, + 255, + 480, + 286 + ], + "type": "text", + "content": "5. Fritsche, M., Gu, S., Timofte, R.: Frequency separation for real-world superresolution. In: IEEE International Conference on Computer Vision - Workshops (2019)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 288, + 480, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 288, + 480, + 319 + ], + "spans": [ + { + "bbox": [ + 138, + 288, + 480, + 319 + ], + "type": "text", + "content": "6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances on Neural Information Processing Systems (2014)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 321, + 480, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 321, + 480, + 353 + ], + "spans": [ + { + "bbox": [ + 138, + 321, + 480, + 353 + ], + "type": "text", + "content": "7. Gu, S., Lugmayr, A., Danelljan, M., Fritsche, M., Lamour, J., Timofte, R.: Div8k: Diverse 8k resolution image dataset. In: IEEE International Conference on Computer Vision - Workshops (2019)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 354, + 480, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 354, + 480, + 386 + ], + "spans": [ + { + "bbox": [ + 138, + 354, + 480, + 386 + ], + "type": "text", + "content": "8. He, L., Yan, H., Luo, M., Luo, K., Wang, W., Du, W., Chen, H., Yang, H., Zhang, Y.: Iterative reconstruction based on latent diffusion model for sparse data reconstruction. 
In: arXiv preprint arXiv:2307.12070 (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 388, + 480, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 388, + 480, + 420 + ], + "spans": [ + { + "bbox": [ + 138, + 388, + 480, + 420 + ], + "type": "text", + "content": "9. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Advances on Neural Information Processing Systems (2017)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 422, + 480, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 422, + 480, + 453 + ], + "spans": [ + { + "bbox": [ + 138, + 422, + 480, + 453 + ], + "type": "text", + "content": "0. Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., Gool, L.V.: Dslr-quality photos on mobile devices with deep convolutional networks. In: IEEE International Conference on Computer Vision (2017)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 455, + 480, + 486 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 455, + 480, + 486 + ], + "spans": [ + { + "bbox": [ + 138, + 455, + 480, + 486 + ], + "type": "text", + "content": "1. Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., Huang, F.: Real-world super-resolution via kernel estimation and noise injection. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2020)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 487, + 480, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 487, + 480, + 520 + ], + "spans": [ + { + "bbox": [ + 138, + 487, + 480, + 520 + ], + "type": "text", + "content": "2. Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., Mitliagkas, I.: Gotta go fast when generating data with score-based models. In: arXiv preprint arXiv:2105.14080 (2021)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 522, + 480, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 522, + 480, + 553 + ], + "spans": [ + { + "bbox": [ + 138, + 522, + 480, + 553 + ], + "type": "text", + "content": "3. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE Conference on Computer Vision and Pattern Recognition (2019)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 555, + 480, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 555, + 480, + 576 + ], + "spans": [ + { + "bbox": [ + 138, + 555, + 480, + 576 + ], + "type": "text", + "content": "4. Ke, J., Wang, Q., Wang, Y., Milanfar, P., Yan, F.: Musiq: Multi-scale image quality transformer. In: IEEE International Conference on Computer Vision (2021)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "spans": [ + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "type": "text", + "content": "5. Liang, J., Zhang, K., Gu, S., Van Gool, L., Timofte, R.: Flow-based kernel prior with application to blind superresolution. 
In: IEEE Conference on Computer Vision and Pattern Recognition (2021)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 611, + 480, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 611, + 480, + 642 + ], + "spans": [ + { + "bbox": [ + 138, + 611, + 480, + 642 + ], + "type": "text", + "content": "6. Liang, J., Zeng, H., Zhang, L.: Efficient and degradation-adaptive network for real-world image super-resolution. In: European Conference on Computer Vision (2022)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "text", + "content": "7. Liu, A., Liu, Y., Gu, J., Qiao, Y., Dong, C.: Blind image superresolution: A survey and beyond. In: arXiv preprint arXiv:2107.03055 (2021)" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 133, + 116, + 480, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 480, + 149 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 480, + 149 + ], + "type": "text", + "content": "18. Lu, C., Zhou, Y., Bao, F., Chen, J., LI, C., Zhu, J.: Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In: Advances on Neural Information Processing Systems (2022)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 480, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 480, + 183 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 480, + 183 + ], + "type": "text", + "content": "19. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., Zhu, J.: Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. In: arxiv prepring arxiv: 2211.01095 (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "type": "text", + "content": "20. Maeda, S.: Unpaired image super-resolution using pseudo-supervision. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 205, + 480, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 205, + 480, + 237 + ], + "spans": [ + { + "bbox": [ + 132, + 205, + 480, + 237 + ], + "type": "text", + "content": "21. Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J., Salimans, T.: On distillation of guided diffusion models. 
In: IEEE Conference on Computer Vision and Pattern Recognition (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 238, + 480, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 238, + 480, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 238, + 480, + 270 + ], + "type": "text", + "content": "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: IEEE Conference on Computer Vision and Pattern Recognition (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 271, + 480, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 271, + 480, + 303 + ], + "spans": [ + { + "bbox": [ + 132, + 271, + 480, + 303 + ], + "type": "text", + "content": "23. Rout, L., Raoof, N., Daras, G., Caramanis, C., and Sanjay Shakkottai, A.G.D.: Solving linear inverse problems provably via posterior sampling with latent diffusion models. In: NeurIPS (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "type": "text", + "content": "24. Sahak, H., Watson, D., Sahara, C., Fleet, D.: Denoising diffusion probabilistic models for robust image super-resolution in the wild. In: arXiv preprint arXiv: 2302.07864 (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "type": "text", + "content": "25. Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D.J., Norouzi, M.: Image superresolution via iterative refinement. preprint arXiv: 2104.07636 (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 358, + 480, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 358, + 480, + 380 + ], + "spans": [ + { + "bbox": [ + 132, + 358, + 480, + 380 + ], + "type": "text", + "content": "26. Salimans, T., Ho, J.: Progressive distillation for fast sampling of diffusion models. In: International Conference on Learning Representations (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 380, + 480, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 380, + 480, + 402 + ], + "spans": [ + { + "bbox": [ + 132, + 380, + 480, + 402 + ], + "type": "text", + "content": "27. Shocher, A., Cohen, N., Irani, M.: \"zero-shot\" superresolution using deep internal learning. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 403, + 480, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 403, + 480, + 424 + ], + "spans": [ + { + "bbox": [ + 132, + 403, + 480, + 424 + ], + "type": "text", + "content": "28. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: International Conference on Learning Representations (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 425, + 480, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 425, + 480, + 445 + ], + "spans": [ + { + "bbox": [ + 132, + 425, + 480, + 445 + ], + "type": "text", + "content": "29. Song, Y., Dhariwal, P., Chen, M., Sutskever, I.: Consistency models. 
arXiv preprint arXiv:2303.01469 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 447, + 480, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 447, + 480, + 479 + ], + "spans": [ + { + "bbox": [ + 132, + 447, + 480, + 479 + ], + "type": "text", + "content": "30. Timofte, R., Agustsson, E., Gool, L.V., Yang, M., Zhang, L.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: IEEE Conference on Computer Vision and Pattern Recognition - Workshops (2017)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 479, + 480, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 479, + 480, + 511 + ], + "spans": [ + { + "bbox": [ + 132, + 479, + 480, + 511 + ], + "type": "text", + "content": "31. Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., Wen, F.: Bringing old photos back to life. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 512, + 480, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 512, + 480, + 533 + ], + "spans": [ + { + "bbox": [ + 132, + 512, + 480, + 533 + ], + "type": "text", + "content": "32. Wang, J., Yue, Z., Zhou, S., Chan, K.C., Loy, C.C.: Exploiting diffusion prior for real-world image super-resolution. In: arXiv preprint arXiv:2305.07015 (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 534, + 480, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 534, + 480, + 566 + ], + "spans": [ + { + "bbox": [ + 132, + 534, + 480, + 566 + ], + "type": "text", + "content": "33. Wang, L., Wang, Y., Dong, X., Xu, Q., Yang, J., An, W., Guo, Y.: Unsupervised degradation representation learning for blind superresolution. In: IEEE Conference on Computer Vision and Pattern Recognition (2021)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 567, + 480, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 567, + 480, + 599 + ], + "spans": [ + { + "bbox": [ + 132, + 567, + 480, + 599 + ], + "type": "text", + "content": "34. Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: IEEE International Conference on Computer Vision - Workshops (2021)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 600, + 480, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 600, + 480, + 632 + ], + "spans": [ + { + "bbox": [ + 132, + 600, + 480, + 632 + ], + "type": "text", + "content": "35. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "type": "text", + "content": "36. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image superresolution by deep spatial feature transform. 
In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 237, + 101 + ], + "type": "text", + "content": "M. Noroozi et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 358 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "text", + "content": "37. Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: Enhanced super-resolution generative adversarial networks. In: European Conference on Computer Vision - Workshops (2018)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "spans": [ + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "type": "text", + "content": "38. Wei, P., Xie, Z., Lu, H., Zhan, Z., Ye, Q., Zuo, W., Lin, L.: Component divide-and-conquer for real-world image super-resolution. In: European Conference on Computer Vision (2020)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 182, + 482, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 182, + 482, + 215 + ], + "spans": [ + { + "bbox": [ + 130, + 182, + 482, + 215 + ], + "type": "text", + "content": "39. Yan, Y., Liu, C., Chen, C., Sun, X., Jin, L., Peng, X., Zhou, X.: Fine-grained attention and feature-sharing generative adversarial networks for single image superresolution. In: IEEE Transactions on Multimedia (2021)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 215, + 482, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 215, + 482, + 237 + ], + "spans": [ + { + "bbox": [ + 130, + 215, + 482, + 237 + ], + "type": "text", + "content": "40. Yue, Z., Wang, J., Change Loy, C.: Ressift: Efficient diffusion model for image super-resolution by residual shifting. In: NeurIPS (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 237, + 482, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 237, + 482, + 270 + ], + "spans": [ + { + "bbox": [ + 130, + 237, + 482, + 270 + ], + "type": "text", + "content": "41. Zhang, K., Liang, J., Van Gool, L., Timofte, R.: Designing a practical degradation model for deep blind image super-resolution. In: IEEE International Conference on Computer Vision (2021)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 270, + 482, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 270, + 482, + 293 + ], + "spans": [ + { + "bbox": [ + 130, + 270, + 482, + 293 + ], + "type": "text", + "content": "42. 
Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: IEEE International Conference on Computer Vision (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 293, + 482, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 293, + 482, + 325 + ], + "spans": [ + { + "bbox": [ + 130, + 293, + 482, + 325 + ], + "type": "text", + "content": "43. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition (2018)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 325, + 482, + 358 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 325, + 482, + 358 + ], + "spans": [ + { + "bbox": [ + 130, + 325, + 482, + 358 + ], + "type": "text", + "content": "44. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (2017)" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 397, + 91, + 447, + 100 + ], + "type": "text", + "content": "YONOS-SR" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_content_list.json b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..63e96f057a6237faa14e0370a250abfa2bba0072 --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_content_list.json @@ -0,0 +1,1956 @@ +[ + { + "type": "text", + "text": "ZeST: Zero-Shot Material Transfer from a Single Image", + "text_level": 1, + "bbox": [ + 300, + 141, + 702, + 186 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ta-Ying Cheng $^{1,2}$ , Prafull Sharma $^{3}$ , Andrew Markham $^{1}$ , Niki Trigoni $^{1}$ , and Varun Jampani $^{2}$", + "bbox": [ + 295, + 210, + 705, + 244 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1University of Oxford", + "bbox": [ + 305, + 253, + 452, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ Stability AI", + "bbox": [ + 483, + 253, + 573, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{3}$ MIT CSAIL", + "bbox": [ + 604, + 253, + 694, + 268 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/0ccf3b5fbb8ae568fe5a4d284565a4ac400feb4430884775af6f00cd5777436e.jpg", + "image_caption": [ + "Fig. 1: Overview. We present ZeST, a zero-shot single-image approach to (a) transfer material from an exemplar image to an object in the input image. 
(b) ZeST can easily be extended to perform multiple material edits in an single image, and (c) perform implicit lighting-aware edits on rendering of a textured mesh." + ], + "image_footnote": [], + "bbox": [ + 240, + 303, + 488, + 547 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/13158a1908a34fa45622aa16676426695e899ff5f634eccbada0e3b622f65912.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 303, + 759, + 547 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. We propose ZeST, a method for zero-shot material transfer to an object in the input image given a material exemplar image. ZeST leverages existing diffusion adapters to extract implicit material representation from the exemplar image. This representation is used to transfer the material using pre-trained inpainting diffusion model on the object in the input image using depth estimates as geometry cue and grayscale object shading as illumination cues. The method works on real images without any training resulting a zero-shot approach. Both qualitative and quantitative results on real and synthetic datasets demonstrate that ZeST outputs photorealistic images with transferred materials. We also show the application of ZeST to perform multiple edits and robust material assignment under different illuminations.", + "bbox": [ + 259, + 660, + 740, + 825 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Project Page: https://ttchengab.github.io/zest", + "bbox": [ + 261, + 825, + 573, + 839 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 143, + 374, + 160 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Editing object materials in images (e.g., changing a marble statue into a steel statue) is useful for several graphics and design applications such as game design, e-commerce, etc. It is a highly challenging and time-consuming task even for expert artists and graphic designers - typically requires explicit 3D geometry and illumination estimation followed by careful tuning of the target material properties (e.g., metallic, roughness, transparency). Previous works try to alleviate the tedious material specification by synthesizing textures given input text prompts [39,50]. However, they are focused on texturing 3D meshes, which overlooks some of the unique challenges for material editing in 2D images, such as illumination. Another work [41] proposes fine-grained material editing on images, but it cannot directly transfer materials from a given exemplar.", + "bbox": [ + 212, + 175, + 785, + 340 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we aim to make 2D-to-2D material editing practical by eliminating the need for any 3D objects as well as explicit specification of material properties. Given a single image of an object and another material exemplar image, our goal is to transfer the material appearance from the exemplar to the target object directly in 2D. See Fig. 1 for some sample input and material exemplar images. We do not assume any access to the ground-truth 3D shapes, illumination, or even the material properties, making this problem setting practical and widely applicable for material editing.", + "bbox": [ + 212, + 340, + 785, + 462 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "This setup is particularly challenging from two perspectives. 
First, an explicit approach to material transfer requires an understanding of many object-level properties in both the exemplar and the input image, such as geometry and illumination. Subsequently, we have to disentangle the material information from these properties and apply it to the new image; the entire process has several unsolved components. Second, there currently exists no real-world datasets for supervising this task. Collecting high-quality datasets presenting the same object with multiple materials and exemplars may be quite tedious.", + "bbox": [ + 212, + 462, + 785, + 583 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "One of the main contributions of this work in alleviating these challenges is a zero-shot approach that can implicitly transfer arbitrary material appearances from a given 2D exemplar image onto a target 2D object image, without explicitly estimating any 3D or material properties from either image. We call our approach 'ZeST', as it does not require multiple exemplars or any training like previous works, making it easy to generalize to any images in the wild.", + "bbox": [ + 212, + 583, + 785, + 672 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "With ZeST, we propose a carefully designed pipeline that repurposes several recent advances in 2D image generation and editing for our problem setting. At a high level, we adapt the geometry-guided generation (e.g., ControlNet [51]) and also exemplar-guided generation (e.g., IP-Adapter [49]) to implicitly isolate and transfer material appearance from a source exemplar to the target image while applying a foreground decolored image and inpainting for illumination cues. Our key contribution is presenting a simple pipeline with careful design choices that can be used to tackle a highly challenging problem of 2D-to-2D material transfer.", + "bbox": [ + 212, + 672, + 785, + 794 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Since this is a new problem setting, we created both synthetic and real-world evaluation datasets with material exemplars and object images. Extensive qualitative and quantitative evaluations demonstrate that ZeST excels in photo-", + "bbox": [ + 212, + 794, + 785, + 839 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "realism and material accuracy in the output images when compared against various baselines while being completely training-free. See Fig. 1(a) for sample results of ZeST. With our pipeline, artists can grab pre-designed materials as material exemplars and directly transfer them to real-world images. By using different object masks, we can also use ZeST to cast different materials to multiple objects present in a single image (Fig. 1 (b)). In addition, with slight alteration of the inputs, ZeST can perform light-aware material transfer by changing the reflections while keeping textural patterns consistent (Fig. 
1 (c)); this method can have potential application when used in conjunction with 3D texture generation methods [10].", + "bbox": [ + 212, + 146, + 787, + 297 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In summary, $ZeST$ has several favorable properties for material editing:", + "bbox": [ + 238, + 297, + 756, + 313 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- Zero-shot, training free, single-image material transfer. By leveraging 2D generative priors, ZeST works in a zero-shot manner without needing dataset finetuning. Unlike some contemporary works [50] that implicitly capture material properties using several material images, ZeST only needs a single material exemplar image to transfer the material in pixel space.", + "bbox": [ + 225, + 321, + 785, + 397 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- No explicit 3D, illumination or materials. With 2D depth and segmentation estimation (which are readily available these days) and implicit material transfer, we eliminate the need for explicit specification of 3D meshes, illumination or material properties (say, in terms of BRDF).", + "bbox": [ + 227, + 397, + 785, + 455 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- Several downstream applications. Given the simplistic and practical nature of our approach, ZeST can be used for several downstream graphics applications such as applying pre-designed materials to real-world images, editing multiple object materials in a single image, and perform lighting-aware material transfer given untextured mesh renderings.", + "bbox": [ + 227, + 457, + 785, + 532 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 553, + 387, + 569 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Diffusion Models. Denoising Diffusion Probabilistic models have emerged as the state-of-the-art for class-conditional and text-prompt conditioned image generation [18, 23-27, 43]. These models generate photorealistic images with exemplary geometry, materials, illumination, and scene composition. The models have been extended to be conditioned on input images for computational photography tasks such as super-resolution, style transfer, and inpainting.", + "bbox": [ + 212, + 583, + 787, + 672 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Further work demonstrate controllable generation conditioned on text-based instructions [8,20,22,46], semantic segmentation [4], bounding box [11,30,47,48], depth [6,53], sketch [34,51], and image prompt [49]. Prompt-to-prompt and Prompt+ edit the input image by performing inversion followed by the introduction of new terms and reweighting the effect of terms in the input prompt [22,46]. InstructPix2Pix performs edits an input image conditioned on an instruction [7]. Ge et al. proposed rich text based image editing allowing for style assignment and specific description to specific terms in the prompt [20]. 
While these methods edit the image semantically and high-level descriptions, assigning specific materials using text-based approach is challenging since text acts as a limiting modality for describing textures.", + "bbox": [ + 212, + 674, + 787, + 840 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 114, + 785, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "A collection of reference images can be used to learn concepts which can be further included in text prompts to generate images with the learned concepts [12, 29, 40]. Spatial modalities such as depth and sketches have been used for controlling the generated images [34, 49, 51]. Pre-trained text-to-image models can be leveraged for 3D-aware image editing using language and depth cues [13, 33, 35]. The use of ControlNet has been extended by Bhat et al. to use depth for controlling the scene composition while maintaining other scene attributes [6]. Object orientation, illumination, and other object attributes can be controlled in a continuous manner using ControlNet and learned continuous tokens embedding the 3D properties [13].", + "bbox": [ + 212, + 146, + 787, + 297 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Material acquisition and editing. Material acquisition and editing is an active field of research taking into account illumination and object geometry. Previous work has demonstrated material acquisition under known illumination conditions and camera [2,3,17]. Such acquisition in the wild requires localizing objects with similar materials, which has been facilitated by supervised material segmentation and leveraging pre-trained vision representation backbones [5,31,42,45]. Khan et al. introduced in-image material editing using estimates of depth [28]. Recent works have employed generative adversarial networks [21] for perceptual material editing [16, 44] and physical shader-based editing using text-to-image models [41]. The use of generative models has been extended to explicitly learning materials [32] and texturing 3D meshes [9, 10, 39, 50].", + "bbox": [ + 212, + 297, + 787, + 464 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In our work, we aim to use pre-trained image generation diffusion models to perform exemplar-based material transfer from a single image. We aim to use ControlNet and IP-adapter to perform material transfer in a zero-shot way without any training.", + "bbox": [ + 212, + 464, + 787, + 525 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 215, + 547, + 330, + 563 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, we describe our method ZeST that performs exemplar-based material transfer. Recent methods perform the related problem of texture synthesis on meshes [39,50] by finetuning a diffusion model on 3-5 material exemplar images to capture the texture/material in the latent space. 
On the contrary, ZeST only requires a single material exemplar image and a single input image, accomplishing material transfer in a zero-shot, training-free manner.", + "bbox": [ + 212, + 579, + 787, + 672 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Problem Setting", + "text_level": 1, + "bbox": [ + 215, + 693, + 398, + 709 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given a material exemplar image $M$ and an input image $I$ , we aim to output an edited image $I_{gen}$ from $I$ by transferring the material from the material exemplar to the object in the input image while preserving other object and scene properties (e.g. object geometry, background, lighting etc.). Performing this task requires understanding the material, geometry, and illumination from both the exemplar and the input image.", + "bbox": [ + 212, + 719, + 787, + 809 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In practice, estimating all the aforementioned object-level properties and further isolating material information explicitly from $M$ is challenging since these", + "bbox": [ + 212, + 809, + 787, + 839 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/c9ffb1be6ad5f4a0031561bda8ba79984da35379db8ace660683b4fa68fc9eaa.jpg", + "image_caption": [ + "Fig. 2: ZeST Architecture. Given a material exemplar $M$ and an input image $I$ , we first encode material exemplar with an image encoder (e.g., IP-Adaptor). Concurrently, we convert the input image into a depth map $D_I$ and a foreground-grayscale image $I_{init}$ to feed into the geometry and latent illumination guidance branch, respectively. By combining the two sources of guidance with the latent features from the material encoding, ZeST can transfer the material properties onto the object in input image while preserving all other attributes." + ], + "image_footnote": [], + "bbox": [ + 220, + 146, + 787, + 327 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "properties are entangled in the pixel space. Therefore, we propose to tackle this problem in the latent space of diffusion models. Specifically, we aim to extract a latent representation $z_{M}$ containing the material and texture information that we can then inject into a generative diffusion model $S$ to generate $I_{gen}$ .", + "bbox": [ + 212, + 468, + 787, + 531 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 ZeST Overview", + "text_level": 1, + "bbox": [ + 215, + 553, + 392, + 568 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Since there exists no synthetic/real image dataset to supervise the learning of a 2D-to-2D material transfer, we perform the material transfer in a zero-shot training-free manner. We first break down this complex task into sub-problems of (1) encoding the material exemplar, (2) geometry-guided image editing, and (3) making the generation process illumination-aware. 
Given the recent advances in high-fidelity diffusion models and complementary adapters for image generation, we leverage existing pre-trained modules to tackle each of the sub-problems that together compose our pipeline to perform image-prompted material editing.", + "bbox": [ + 212, + 582, + 787, + 703 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Figure 2 presents an overview of our pipeline, which comprises three branches to guide the material, geometry, and lighting information, respectively. The Material Encoding branch takes the material exemplar image $M$ as input, which is processed by the image encoder to obtain a material latent representation $z_{M}$ .", + "bbox": [ + 212, + 704, + 787, + 763 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Concurrently, we feed the input image $I$ into Geometry Guidance and Latent Illumination Guidance Branch. The Geometry Guidance branch computes the depth map $D_I$ for the image $I$ , which is used as the input to ControlNet. The Latent Illumination Guidance branch computes a foreground mask $F$ using $I$ and creates a foreground-grayscale image $I_{init}$ , which we use as input to the", + "bbox": [ + 212, + 763, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 114, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/dc6935d2a561cbfe10c6dad61839dae2bac28938bf7d8f47760b768d9c3628a7.jpg", + "image_caption": [ + "Material Exemplar" + ], + "image_footnote": [], + "bbox": [ + 217, + 154, + 321, + 234 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/1b944240321436b305e779b9ec8c5e4d64a1e884bc7216c713f967edd8fd8585.jpg", + "image_caption": [ + "Input Image" + ], + "image_footnote": [], + "bbox": [ + 325, + 145, + 436, + 234 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/6bd85701878cebfeba320524d9e9e75a29c3f0d34527ae6f86e5039f98770de3.jpg", + "image_caption": [ + "Estimated Depth (Optional)", + "Fig. 3: The design choice of IP-Adaptor with ControlNet. Given the material exemplar and the input image, we dive into the different choices of utilizing the IP-Adaptor. In particular we realize that an $\\mathrm{Img2Img + }$ text module (a) wouldn't properly transfer the materials properly to the main object. On the other hand, ControlNet (b) will preserve the geometry information of the given input. We thus utilize this as the starting point for geometry guidance to further explore the best illumination cues." + ], + "image_footnote": [], + "bbox": [ + 439, + 154, + 545, + 234 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/22b4268e020d42b8c2fbee4e6eb6af9622fe89df468937ea572597296f609790.jpg", + "image_caption": [ + "IP-Adaptor Combinations", + "(a) $\\mathrm{Img2Img + Text}$" + ], + "image_footnote": [], + "bbox": [ + 570, + 155, + 674, + 234 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/3607dc967fec1149fcf655e858161e3f7399f396cd0b21c31fdf4f0527739638.jpg", + "image_caption": [ + "(b) ControlNet Model" + ], + "image_footnote": [], + "bbox": [ + 679, + 155, + 785, + 234 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Diffusion Inpainting pipeline. We concatenate the embeddings from ControlNet with the inpainting diffusion model at the corresponding and inject the material embedding $z_{M}$ through the cross-attention. 
The output of the inpainting diffusion model, $I_{gen}$ , with the edited image containing the object in $I$ cast with material from exemplar image $M$ .", + "bbox": [ + 212, + 372, + 784, + 446 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our design choices to facilitate computation of material embedding, geometry guidance, and illumination cues are discussed in the following sections.", + "bbox": [ + 212, + 448, + 784, + 478 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 Encoding Material Exemplar", + "text_level": 1, + "bbox": [ + 214, + 500, + 501, + 513 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Given the material exemplar image $M$ , this branch encodes the image into a latent representation while preserving its material properties. Previous works [39, 50] address this by finetuning a text-to-image diffusion model to encode the image into a rare token, implicitly treating the rare token as a latent representation that can be used in conjunction with other texts for image generation. However, this approach of optimizing for the material token requires the time-consuming step for every new material exemplar and usually requires 3-5 images to prevent overfitting.", + "bbox": [ + 212, + 522, + 787, + 643 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We draw inspiration from the recently introduced IP-Adapter [49]. The IP adapter uses a CLIP image encoder to extract image features that can be injected into a diffusion model via the cross-attention layers. These features can be used as an additional condition to guide text prompts or other mediums for the generation. For example, one can input an image of a person and then describe \"on the mountain\" with text to obtain an image of the person in the mountains.", + "bbox": [ + 212, + 643, + 787, + 734 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "However, we realize that IP-Adaptor does not work well when combined with an Img2Img pipeline, as shown in Figure 3 (a) for our task. Moreover, adding text guidances like \"changing the apple texture to golden bowl\" does not produce photorealistic output and does not preserve other scene information (i.e. background). This problem of geometry and material entanglement within material embedding $z_{M}$ remains unsolved, thus motivating the need for geometry and illumination guidance.", + "bbox": [ + 212, + 734, + 787, + 839 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.4 Geometry Guidance via Depth Estimation", + "text_level": 1, + "bbox": [ + 215, + 146, + 609, + 161 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Since decoupling geometry and material properties in images is challenging and requires additional training data, we provide an alternative solution where we enforce a stronger geometry prior to the diffusion model to overwrite the structural information present in $z_{M}$ . To this end, we adopt a depth-based ControlNet to provide geometry guidance from the input image $I$ . We observe that the geometry information from the depth map $D_{I}$ overwrites the geometry information encoded in the $z_{M}$ (see Figure 3 (b)). 
Note that with the geometry enforced by using depth-based ControlNet, we can successfully transfer the golden material of the bowl to the apple.", + "bbox": [ + 212, + 169, + 787, + 305 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "While the use of ControlNet with IP-Adaptor is introduced in the original IP-Adaptor paper [49], we employ it for a different purpose contrary to applying new structural control over an object in the image (e.g., changing a person's pose). After extensively comparing various components for encoding the material exemplar and input image (analysis in Section 4.2), we find the depth-based guidance from pre-trained ControlNet helps us preserve the original geometry of the object for the task of material transfer.", + "bbox": [ + 212, + 305, + 787, + 410 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "While the addition of ControlNet helps preserve the geometry, we observe that the results suffer from inconsistency in preserving the illumination and background from the input image. This is evident in Figure 3, where the background and the lighting changes differ from the input.", + "bbox": [ + 212, + 411, + 787, + 472 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3.5 Latent-space Illumination Guidance", + "text_level": 1, + "bbox": [ + 214, + 493, + 555, + 508 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Our final branch is primarily responsible for preserving the illumination and background in the input image. We propose two-fold guidance for illumination in the latent space during generation - an inpainting module and a foreground decoloring process. In addition to the attached IP-Adaptor and ControlNet, we adopt an inpainting diffusion model $S$ instead of a standard generator. Specifically, our ControlNet-inpainting procedure takes in four conditions for image generation:", + "bbox": [ + 212, + 516, + 787, + 619 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\nI _ {g e n} = \\mathcal {S} \\left(z _ {M}, D _ {I}, I _ {\\text {i n i t}}, F\\right), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 406, + 623, + 784, + 638 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $z_{M}$ is the material encoding, $D_{I}$ is the depth map computed for input image $I$ , $I_{init}$ is the initial image to denoise from, and $F$ is the foreground mask of target object in $I$ which we are editing.", + "bbox": [ + 212, + 643, + 787, + 688 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conduct an ablation on the various versions of $I_{init}$ , as shown in Figure 4. Specifically, we test out the following settings: (1) using the original input image, (2) initializing the foreground with random noise, and (3) using the foreground grayscale image. Intuitively, directly letting $I_{init} = I$ (Setting (1)) would be a preferable option as $I$ encompasses implicit lighting information (from the object's shading and the surrounding environment) while conveniently enforces all other parts of the image other than the object to remain the same. In practice, however, we found that using the original image inevitably introduces a strong prior of the base color from the input object (e.g. 
orange color of pumpkin), which would be entangled with the material base color from $M$ in the output", + "bbox": [ + 212, + 688, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/dfb0ba38f010079d23d3006d67be07164813a0db110694a17879009e1741743a.jpg", + "image_caption": [ + "Fig. 4: Ablating input for illumination guidance. To validate our design choice of the foreground-grayscale image for initializing inpainting, we compare the generated results against using the original image and random noise as inputs. The original image presents a strong base color prior that perturbs the generation, while the random image neglects shading information, leading to wrong lighting in both examples." + ], + "image_footnote": [], + "bbox": [ + 217, + 142, + 491, + 258 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/bd3ab1897a25bc22014e0737d3919a802c5920777df6996f1bfca69166cddb85.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 513, + 142, + 787, + 258 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "image. This artifact is sustained even when we significantly extend the number of denoising steps. On the other hand, when initializing $I_{init}$ with random noise, the method indeed removes the base color prior but also removes the shading information causing incorrect illuminations in the synthesized object (e.g., the left side of the synthesized pumpkin is darker, but light is coming from the left). In our proposed pipeline, we perform grayscale operations in the pixel space for the object region (3). This provides a balanced solution of removing the strong color priors from the input image while keeping the shading cues for the inpainting diffusion model.", + "bbox": [ + 212, + 366, + 787, + 502 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Thus, we propose to initialize $I_{init}$ as:", + "bbox": [ + 240, + 503, + 517, + 518 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\nI _ {\\text {i n i t}} = F \\odot I _ {\\text {g r a y}} + (1 - F) \\odot I, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 383, + 527, + 784, + 546 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "which converts the color of foreground object in the image to grayscale. $(1 - F)\\odot I$ implicitly preserves the lighting direction, intensity, and color information, and $F\\odot I_{gray}$ preserves the object's shading information without base color prior.", + "bbox": [ + 214, + 554, + 787, + 602 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.6 Implementation Details", + "text_level": 1, + "bbox": [ + 214, + 619, + 455, + 636 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We implement our method using Stable Diffusion XL Inpainting [36] with the corresponding version of depth-based ControlNet [51] and IP-Adaptor [49]. We use Dense Prediction Transformers for depth estimation [38] and $\\mathrm{Rembg}^1$ for foreground extraction. Our method is implemented in PyTorch and runs on a single Nvidia A-10 GPU with 24 GB of RAM. 
For all Dreambooth approaches, we use the official LoRA-Dreambooth provided by Diffusers.", + "bbox": [ + 212, + 643, + 787, + 736 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 215, + 757, + 375, + 773 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We evaluate the efficacy of our method against various baselines. We also present several examples of downstream applications using our method.", + "bbox": [ + 212, + 786, + 785, + 818 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "1 https://github.com/danielgatis/rembg", + "bbox": [ + 217, + 823, + 514, + 840 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/6178885b58e0aebe464438f43e9c3250ffa0e9a3880bac37227b27afca8c6b0c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 143, + 356, + 215 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/2d0fc42514cc3fbc8bd7bf0be413ae7c1d2c538cc0318303c56a676653ac0d22.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 361, + 143, + 498, + 215 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/76b79bb3a122c5a49ecc392b43b15a82d113f1b4cccbc1c3be77833fafad970b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 143, + 642, + 215 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/1e8c4e41f24bd08f92dc4069b7f70da0ae9e1c95f725a6e98bea54d1d91cd1b7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 651, + 143, + 785, + 215 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/2e143f320104a5c8a57d4e2dc3ed1e482e8eb5da770c0cda4f4268012aea2ffa.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 217, + 354, + 282 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/0445dbb71d03599314859e6f8a6c286195d1164eca21031cd037659f19aa8afe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 361, + 217, + 496, + 282 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/6f3c47923f0b4dafc07d9fc88a650e75ea78daeedf83e51d43c259a325a22dc0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 217, + 640, + 282 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/93803b5b6995bc32b9adab9fbb113e7a290073a76843ee23e5dba0eb6786fe92.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 651, + 217, + 785, + 282 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/186649ce998e725587dfee773e43632daebb768cd49311e183e748da0cca013f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 284, + 354, + 349 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/531ebe7a8d83b8995883edd1b18b40520c6628c0171f9169a097658adbc2bf17.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 361, + 284, + 496, + 349 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/0dbe4f708f07dfe47e469f0d00252c9e37ba9cd3a46eab4ca69611c995eae68c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 284, + 640, + 349 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/b6a8f5b397dcf37daceefe3534854ae8048b8a4051b13ed7c04b52130db20368.jpg", + "image_caption": [], + 
"image_footnote": [], + "bbox": [ + 651, + 284, + 785, + 349 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/8c32fd0fca2c10a4c927731af791fb25aa213d84db7559f2b05df532f66af22b.jpg", + "image_caption": [ + "Fig. 5: Qualitative results on diverse materials. We present results of material transfer from a diverse set of material exemplar images. Even when perturbed by lighting and complex geometry, ZeST can still isolate the material information from the exemplar image and transfer to various objects while preserving the original geometry and illumination conditions. Note the change in specular regions as shinier materials are chosen in the case of the car made of brass and the dinosaur made of shiny steel." + ], + "image_footnote": [], + "bbox": [ + 218, + 351, + 354, + 416 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/6df8986d7a2c797643fc9f431d4ea05abe77b8551f0173a5957fbd6cafa9aabf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 361, + 351, + 496, + 416 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7fd1b6458b6c16d311d0465d6b99f3421dcf8389233a81fa292fa46e79f937e3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 351, + 642, + 416 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/5ef8b5d0b27fa8b2b4474859f925f23eedf603952baab612f5de50bff3fc532e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 651, + 351, + 785, + 416 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Datasets", + "text_level": 1, + "bbox": [ + 215, + 546, + 333, + 559 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "As the first to propose this problem, we create two datasets for comparison and evaluation. The real-world datasets provide us an understanding of our model's robustness, while the synthetic dataset is used for standard quantitative metrics.", + "bbox": [ + 212, + 578, + 785, + 625 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Real-World Dataset. We curate a dataset comprising of 30 diverse material exemplars and 30 input images, collected from copyright-free image sources (i.e. Unsplash) and images generated by DALLE-3. All of these images are object-centric, where there exists a main object in the foreground to which we are extracting the material from or applying the material onto.", + "bbox": [ + 212, + 625, + 785, + 700 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Synthetic Dataset. To perform quantitative evaluation, we use Blender to create a synthesized dataset of 9 materials randomly initialized by adjusting the base color, metallic, and roughness, and 20 meshes of different categories from Objaverse [15] rendered at three random viewpoints each, generating 540 ground-truth renderings. We render spheres assigned with each material individually and use the rendered image the material exemplar and pre-textured mesh rendering as input for all methods.", + "bbox": [ + 212, + 702, + 785, + 808 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "While $ZeST$ is completely training-free, other methods of learning materials (e.g., Dreambooth) require further fine-tuning for every exemplar given. 
This", + "bbox": [ + 214, + 809, + 785, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/866749292d78fd8febbf034a984acceac97a6c8424f47742c294fa09e6cadf35.jpg", + "image_caption": [ + "Fig. 6: Qualitative comparisons against baselines. Given the material exemplar and input image in the first column, we compare our method to five different baselines. Without any geometry guidance, all image editing baselines fail to impose the correct geometry of the input image. On the other hand, using Dreambooth with our geometry and illumination guidance often contains albedo shifts, potentially due to information loss when encoding material properties into a word token." + ], + "image_footnote": [], + "bbox": [ + 215, + 143, + 785, + 412 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "makes it infeasible to scale up the two datasets. Both our datasets are of comparable sizes to previous works on finetuning diffusion models [40, 50].", + "bbox": [ + 212, + 536, + 785, + 568 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.2 Qualitative Results", + "text_level": 1, + "bbox": [ + 215, + 589, + 419, + 603 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Material transfer results on real images. To demonstrate the application of ZeST on a wide range of materials and objects, we present examples of material transfer in Figure 5. The first three rows present results on real-world images, while the fourth row shows results using PBR materials [1]. Based on the examples, we observe that the material is properly disentangled from the geometry in the material exemplar and follows the shape of the object in the input image. This is particularly evident in the results of the orange, frog, and Groot toy figure, where the material is completely flat. We also notice accurate shadings in the bust and table examples when comparing them against their inputs. In the car and toy dinosaur examples, the reflections from the exemplars are isolated from the textural patterns and cast reasonably based on the illumination cues.", + "bbox": [ + 212, + 613, + 787, + 777 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Qualitative comparisons. Since our work is the first to perform material transfer in latent space, we modified existing methods to compare against. Specifically, since existing image-guided texture synthesis methods utilize Dreambooth for their first step to encode the textures from images into word tokens [14,39,50],", + "bbox": [ + 212, + 779, + 787, + 840 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "we set Dreambooth as the backbone for learning material properties and combine with text-guided image editing techniques for comparison, including MasaCtrl and Instruct-Pix2Pix, and using ZeST but swapping out the IP-Adaptor with text. While our method is training-free, Dreambooth requires finetuning for every material exemplar given. 
We also explore alternative options to combine with IP-Adaptor, including text-guided inpainting and Instruct-Pix2Pix with the prompt \"Change the texture of the object\".", + "bbox": [ + 212, + 146, + 782, + 252 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "We present qualitative comparisons against the baselines on four exemplar and input images in Figure 6. By using Inpainting with Text prompt instead of ControlNet, the model ignores the geometry of the original input when casting the materials. In both cases when using Instruct-Pix2Pix (with IP-Adaptor or Dreambooth), the geometry of all objects is better preserved, but the model fails to capture the material property from the material exemplar image. The combination of Dreambooth and MasaCtrl fails to preserve the geometry of the object in the input image and misattributes the material. The closest baseline to ours is Dreambooth with our proposed geometry and illumination guidance; however, we observe that the word encoding process results in some information loss as evident in the color shifts of the backpack and the astronaut figure. Furthermore, the method requires additional training for every material exemplar, whereas ZeST takes roughly 15 seconds to generate the image.", + "bbox": [ + 212, + 252, + 784, + 448 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Our method, ZeST, performs the task effectively by retaining the object geometry, scene illumination, and attributing the material correctly. Additionally, note that ZeST adapts to more challenging material exemplar images, such as transparent materials (glass cup in Figure 6 Row 3) and images with other minor objects (additional hand in Figure 6 Row 4).", + "bbox": [ + 212, + 449, + 782, + 525 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.3 Quantitative Comparisons", + "text_level": 1, + "bbox": [ + 214, + 545, + 478, + 560 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "We follow previous work [41, 50] and use the synthetic images to compare all methods in terms of PSNR, LPIPS [52], and CLIP similarity score [37] against ground truth renderings. We also incorporate another DreamSim [19], a more recent metric that is more similar to human references. We grab IP-Adaptor + Instruct-Pix2Pix and Dreambooth + our geometry and illumination guidance as baselines, as they are the strongest (and only) performers from our qualitative comparisons that can roughly edit the material based on the geometry.", + "bbox": [ + 212, + 568, + 782, + 672 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 1 (left) presents our results. We see a dramatic improvement when shifting from the instruct-pix2pix pipeline to our geometry and illumination guidance. While using Dreambooth performs similarly to our IP-Adaptor in the synthetic dataset, it requires a fine-tuned model for each material exemplar, making it unfeasible to scale up. In addition, we show in the next section that our method excels in real-world datasets.", + "bbox": [ + 212, + 672, + 782, + 763 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "- **User Study.** We also create a user study with 16 participants to understand the capability of our model given real-world materials tested on real images. Each subject is shown 5 random samples from the 900 combinations generated from the dataset with our method and against the two strongest baselines: Dreambooth + ControlNet-Inpainting and IP-Adaptor + Instruct-Pix2Pix. 
We ask", + "bbox": [ + 212, + 763, + 782, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/dd6b14bc40f3355080ce6f1408f2f2afd9a06cbb0270e0b191549ea64431d764.jpg", + "table_caption": [ + "Table 1: Quantitative Comparisons and User Study. We grab the strongest baselines in our qualitative comparisons for additional studies. Left: We measure the PSNR, LPIPS [52], CLIP similarity score [37], and DreamSim [19] in a quantitative study on the synthetic dataset of 540 exemplar-input combinations. Right: We perform a user study to evaluate the material fidelity and photorealism of the edited images from each method. We randomly sample 5 out of 900 real-world exemplar-input combinations for each of the 16 participants." + ], + "table_footnote": [], + "table_body": "
Method | PSNR↑ | LPIPS↓ | CLIP↑ | DreamSim↓ | Method | Fidelity↑ | Photorealism↑
IP-Adaptor + Instruct-Pix2Pix | 17.08 | 0.099 | 0.740 | 0.390 | IP-Adaptor + Instruct-Pix2Pix | 1.48
DB + Our Geo/illum. Guidance | 25.52 | 0.058 | 0.874 | 0.238 | DB + Our Geo/illum. Guidance | 3.25
Ours | 25.59 | 0.053 | 0.883 | 0.198 | Ours | 4.05
", + "bbox": [ + 218, + 253, + 784, + 303 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/03d604531205a5909388ad7508cf0ff37bac1bb7093e4759877b8ae88b282597.jpg", + "image_caption": [ + "Fig. 7: Robustness to lighting and object pose. We present two types of robustness testing. (a): Robustness to changing the material exemplar lighting and pose. (b): Zooming into the material exemplar. Our model yields highly similar results in both, showing the capability to adapt to these external changes." + ], + "image_footnote": [], + "bbox": [ + 217, + 316, + 491, + 422 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/cb49cb9c9605a9a92ac337bb38c19cc241d7c49656a3731416fda08b5411ae9a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 524, + 318, + 785, + 422 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "each subject to rate each image from 1 to 5 based on (1) material fidelity: how close the material in the generated image is compared to the original exemplar and (2) photorealism: how realistic the generated image is. Our results are summarized in Table 1 (right).", + "bbox": [ + 212, + 521, + 784, + 580 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Our results show significant improvements from the two baselines in both material fidelity and photorealism of the edited image. The score improvements are also greater in real-world scenarios compared to synthetic ones. This could be the result of information loss during finetuning and overfitting to the exemplar background, which is less significant under controlled synthetic scenarios.", + "bbox": [ + 212, + 582, + 787, + 657 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "4.4 Robustness of the Model", + "text_level": 1, + "bbox": [ + 215, + 679, + 468, + 694 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "In addition to the diverse set of results presented in Figure 5, we extensively test out the behavior of ZeST with special cases of material exemplar images.", + "bbox": [ + 212, + 704, + 784, + 733 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Relighting and rotating the object in the material exemplar image. A good material extractor should be agnostic to small lighting and rotation changes of the same object used as the material exemplar. To evaluate this, we render a random material and cast it onto an irregular-shaped pumpkin (another example is in the Appendix). We then render three samples of the pumpkin, a default lighting orientation, a change in lighting direction pitch by 120 degrees, and a random rotation, as shown in 7 (a). The transferred materials onto the dolphin", + "bbox": [ + 212, + 734, + 787, + 840 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/f39176c7e08a26ed040d85404e355880ce6e9ad7c4f78e0c132486ae8f94f358.jpg", + "image_caption": [ + "Fig. 8: Multiple Material Transfers in a Single Image. By replacing the foreground extraction with an open-vocabulary segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be applied iteratively to cast different material properties to different objects in a single RGB image." 
+ ], + "image_footnote": [], + "bbox": [ + 215, + 143, + 496, + 268 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/3480b3bf84ce0fab480ff7cf033fb974d797600a4e79a1f3d9370c347579539d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 143, + 787, + 268 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/ac3d6242c2d56fd8cff2a791c5dd6db9e50c30964736d667ad2e566cc244c101.jpg", + "image_caption": [ + "Fig.9: Lighting-aware Image Editing. Given a rendering of a textured mesh, we can alter $ZeST$ slightly to achieve lighting-aware material edit. It can be seen from both examples where the reflection can be disentangled from the object texture." + ], + "image_footnote": [], + "bbox": [ + 215, + 349, + 784, + 435 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "remain roughly consistent across all samples, showing that our method is fairly resistant to these changes at a small scale.", + "bbox": [ + 212, + 521, + 784, + 550 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effect of image scale of material exemplar image. To examine the effect of the scale of the material exemplar, we first use an image of a woolen cloth material with a distinctive repeating pattern and apply our method to an image of a chair. Then, we zoom into the exemplar image manually to the edge only very few repeated patterns are left. Our results in Figure 7 (b) show that while the scale of the material is drastically different, the model automatically re-adjusts the patterns into a reasonable size to be cast onto the input image.", + "bbox": [ + 212, + 551, + 787, + 657 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4.5 Applications", + "text_level": 1, + "bbox": [ + 215, + 679, + 366, + 694 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Applying multiple materials to multiple objects. By replacing the foreground extraction with a segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be used to iteratively change multiple materials in a single image. Figure 8 presents two examples of editing multiple objects in a single image. As evident in the transparent glass chair where the wooden table behind is roughly visible, ZeST generalizes to complex scenes with multiple objects.", + "bbox": [ + 212, + 704, + 787, + 794 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Lighting-aware Material Transfer. Given a material exemplar image and an untextured mesh rendered under multiple illumination conditions, $ZeST$ can also perform lighting-aware material transfer. Specifically, we first generate the", + "bbox": [ + 212, + 795, + 787, + 840 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/2bf2c1a730e8725f327018ba985c9482e8a8ecc413c023c737abf0387ae84527.jpg", + "image_caption": [ + "Fig. 10: Limitations. Our method primarily fails in two modes. (a) The model sometimes picks the most \"probable\" areas to transfer the material, instead of casting the material on the entire object. (b) If two textures are present in the exemplar image (e.g., foreground and background of the tennis ball, the glazed top and bottom logo of the cup), the model sometimes combine both materials when performing the edit." 
+ ], + "image_footnote": [], + "bbox": [ + 217, + 143, + 491, + 228 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/853490493141d119b79cb7ae57133a2f1ebedb10f4b4f16f3f15045c2665a7b8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 511, + 143, + 787, + 228 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "materials and textures of the image under Lighting 1 using ZeST. Then, by fixing the same seed during generation and using the generating image given the first lighting as the input to the second, we can enforce consistency in the material and texture generated (details of implementation in Appendix) while changing the reflections. We show examples of transferring the glazed cup material to two mesh renders in Figure 9. ZeST successfully disentangles the reflections while keeping most textural patterns consistent between the two images. This technique could potentially be applied jointly with other 3D texture synthesis works [10] and be helpful to applications such as e-commerce design.", + "bbox": [ + 212, + 335, + 787, + 473 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "4.6 Limitations", + "text_level": 1, + "bbox": [ + 215, + 493, + 356, + 507 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Since $ZeST$ operates majorly in the latent space, the model sometimes exhibits uncontrollable behaviors based on its image understanding. Figure 10 presents two forms of more frequent failure cases: (a) Partial material transfer: the material is only transferred to parts instead of the entirety of the object. We hypothesize that the failure stems from the entanglement of material properties and the exemplar's identity, as the material is only applied to where it seems the most probable (e.g., only apply the jacket material to the statue's body). (b) Blending multiple materials: since the current IP-Adaptor does not have a module to extract regions of an image for material transfer, $ZeST$ sometimes mixes up multiple materials in the exemplar image during transfer.", + "bbox": [ + 212, + 516, + 787, + 667 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 215, + 689, + 359, + 705 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We present ZeST, a zero-shot, training-free method for exemplar-based material-editing. ZeST is built completely using readily available pre-trained models and demonstrates generalizable and robust results on real images. We curate synthetic and real image datasets to evaluate the performance of our approach. We also demonstrate downstream applications like multiple edits in a single image and material-aware relighting. ZeST serves as a strong starting point for future research in image-to-image material transfer, implying opportunities of leveraging pre-trained image diffusion models for complex graphic designing tasks.", + "bbox": [ + 212, + 719, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 143, + 321, + 159 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. https://wwwtexts.com/browse/pbr-materials/114558", + "2. 
Aittala, M., Weyrich, T., Lehtinen, J.: Practical svbrdf capture in the frequency domain. ACM Trans. Graph. 32(4), 110-1 (2013)", + "3. Aittala, M., Weyrich, T., Lehtinen, J., et al.: Two-shot svbrdf capture for stationary materials. ACM Trans. Graph. 34(4), 110-1 (2015)", + "4. Bar-Tal, O., Yariv, L., Lipman, Y., Dekel, T.: Multidiffusion: Fusing diffusion paths for controlled image generation (2023)", + "5. Bell, S., Upchurch, P., Snavely, N., Bala, K.: Material recognition in the wild with the materials in context database. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3479-3487 (2015)", + "6. Bhat, S.F., Mitra, N.J., Wonka, P.: Loosecontrol: Lifting controlnet for generalized depth conditioning. arXiv preprint arXiv:2312.03079 (2023)", + "7. Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18392-18402 (2023)", + "8. Cao, M., Wang, X., Qi, Z., Shan, Y., Qie, X., Zheng, Y.: Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. arXiv preprint arXiv:2304.08465 (2023)", + "9. Cao, T., Kreis, K., Fidler, S., Sharp, N., Yin, K.: Texfusion: Synthesizing 3d textures with text-guided image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4169-4181 (2023)", + "0. Chen, D.Z., Siddiqui, Y., Lee, H.Y., Tulyakov, S., Nießner, M.: Text2tex: Text-driven texture synthesis via diffusion models. arXiv preprint arXiv:2303.11396 (2023)", + "1. Chen, M., Laina, I., Vedaldi, A.: Training-free layout control with cross-attention guidance. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 5343-5353 (2024)", + "2. Chen, W., Hu, H., Li, Y., Ruiz, N., Jia, X., Chang, M.W., Cohen, W.W.: Subject-driven text-to-image generation via apprenticeship learning. Advances in Neural Information Processing Systems 36 (2024)", + "3. Cheng, T.Y., Gadelha, M., Groueix, T., Fisher, M., Mech, R., Markham, A., Trigoni, N.: Learning continuous 3d words for text-to-image generation. arXiv preprint arXiv:2402.08654 (2024)", + "4. Corneanu, C., Gadde, R., Martinez, A.M.: Latentpaint: Image inpainting in latent space with diffusion models. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 4334-4343 (2024)", + "5. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A universe of annotated 3d objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13142-13153 (2023)", + "6. Delanoy, J., Lagunas, M., Condor, J., Gutierrez, D., Masia, B.: A generative framework for image-based editing of material appearance using perceptual attributes. In: Computer Graphics Forum. vol. 41, pp. 453-464. Wiley Online Library (2022)", + "7. Deschaintre, V., Aittala, M., Durand, F., Drettakis, G., Bousseau, A.: Flexible svbrdf capture with a multi-image deep network. In: Computer graphics forum. vol. 38, pp. 1-13. Wiley Online Library (2019)", + "8. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in neural information processing systems 34, 8780-8794 (2021)" + ], + "bbox": [ + 225, + 176, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 785, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "19. Fu*, S., Tamir*, N., Sundaram*, S., Chai, L., Zhang, R., Dekel, T., Isola, P.: Dreamsim: Learning new dimensions of human visual similarity using synthetic data. NeurIPS (2023)", + "20. Ge, S., Park, T., Zhu, J.Y., Huang, J.B.: Expressive text-to-image generation with rich text. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7545-7556 (2023)", + "21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139-144 (2020)", + "22. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022)", + "23. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)", + "24. Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M., Salimans, T.: Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research 23(1), 2249-2281 (2022)", + "25. Ho, J., Salimans, T.: Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598 (2022)", + "26. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10124-10134 (2023)", + "27. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)", + "28. Khan, E.A., Reinhard, E., Fleming, R.W., Bülthoff, H.H.: Image-based material editing. ACM Transactions on Graphics (TOG) 25(3), 654-663 (2006)", + "29. Kumari, N., Zhang, B., Zhang, R., Shechtman, E., Zhu, J.Y.: Multi-concept customization of text-to-image diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1931-1941 (2023)", + "30. Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22511-22521 (2023)", + "31. Liang, Y., Wakaki, R., Nobuhara, S., Nishino, K.: Multimodal material segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 19800-19808 (2022)", + "32. Lopes, I., Pizzati, F., de Charette, R.: Material palette: Extraction of materials from a single image. arXiv preprint arXiv:2311.17060 (2023)", + "33. Michel, O., Bhattad, A., VanderBilt, E., Krishna, R., Kembhavi, A., Gupta, T.: Object 3dit: Language-guided 3d-aware image editing. Advances in Neural Information Processing Systems 36 (2024)", + "34. Mou, C., Wang, X., Xie, L., Zhang, J., Qi, Z., Shan, Y., Qie, X.: T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453 (2023)", + "35. 
Pandey, K., Guerrero, P., Gadelha, M., Hold-Geoffroy, Y., Singh, K., Mitra, N.: Diffusion handles: Enabling 3d edits for diffusion models by lifting activations to 3d. arXiv preprint arXiv:2312.02190 (2023)", + "36. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)" + ], + "bbox": [ + 215, + 146, + 785, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "37. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)", + "38. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 12179-12188 (2021)", + "39. Richardson, E., Metzer, G., Alaluf, Y., Giryes, R., Cohen-Or, D.: Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721 (2023)", + "40. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242 (2022)", + "41. Sharma, P., Jampani, V., Li, Y., Jia, X., Lagun, D., Durand, F., Freeman, W.T., Matthews, M.: Alchemist: Parametric control of material properties with diffusion models. arXiv preprint arXiv:2312.02970 (2023)", + "42. Sharma, P., Philip, J., Gharbi, M., Freeman, B., Durand, F., Deschaintre, V.: Materialistic: Selecting similar materials in images. ACM Transactions on Graphics (TOG) 42(4), 1-14 (2023)", + "43. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)", + "44. Subias, J.D., Lagunas, M.: In-the-wild material appearance editing using perceptual attributes. In: Computer Graphics Forum. vol. 42, pp. 333-345. Wiley Online Library (2023)", + "45. Upchurch, P., Niu, R.: A dense material segmentation dataset for indoor and outdoor scene parsing. In: European Conference on Computer Vision. pp. 450-466. Springer (2022)", + "46. Voynov, A., Chu, Q., Cohen-Or, D., Aberman, K.: $p+$ : Extended textual conditioning in text-to-image generation. arXiv preprint arXiv:2303.09522 (2023)", + "47. Wang, X., Darrell, T., Rambhatla, S.S., Girdhar, R., Misra, I.: Instance-diffusion: Instance-level control for image generation. arXiv preprint arXiv:2402.03290 (2024)", + "48. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., et al.: Reco: Region-controlled text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14246-14255 (2023)", + "49. Ye, H., Zhang, J., Liu, S., Han, X., Yang, W.: Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721 (2023)", + "50. 
Yeh, Y.Y., Huang, J.B., Kim, C., Xiao, L., Nguyen-Phuoc, T., Khan, N., Zhang, C., Chandraker, M., Marshall, C.S., Dong, Z., et al.: Texturedreamer: Image-guided texture synthesis through geometry-aware diffusion. arXiv preprint arXiv:2401.09416 (2024)", + "51. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023)", + "52. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586-595 (2018)", + "53. Zhao, S., Chen, D., Chen, Y.C., Bao, J., Hao, S., Yuan, L., Wong, K.Y.K.: Unictrlnet: All-in-one control to text-to-image diffusion models. Advances in Neural Information Processing Systems 36 (2024)" + ], + "bbox": [ + 212, + 146, + 787, + 829 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "ZeST", + "bbox": [ + 692, + 114, + 730, + 126 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_model.json b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b723f1639af22bb3252abee605caa1d494f05295 --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_model.json @@ -0,0 +1,2577 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.302, + 0.142, + 0.703, + 0.187 + ], + "angle": 0, + "content": "ZeST: Zero-Shot Material Transfer from a Single Image" + }, + { + "type": "text", + "bbox": [ + 0.297, + 0.212, + 0.706, + 0.245 + ], + "angle": 0, + "content": "Ta-Ying Cheng\\(^{1,2}\\), Prafull Sharma\\(^{3}\\), Andrew Markham\\(^{1}\\), Niki Trigoni\\(^{1}\\), and Varun Jampani\\(^{2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.307, + 0.254, + 0.454, + 0.27 + ], + "angle": 0, + "content": "1University of Oxford" + }, + { + "type": "text", + "bbox": [ + 0.485, + 0.254, + 0.574, + 0.27 + ], + "angle": 0, + "content": "\\(^{2}\\)Stability AI" + }, + { + "type": "text", + "bbox": [ + 0.605, + 0.254, + 0.696, + 0.269 + ], + "angle": 0, + "content": "\\(^{3}\\)MIT CSAIL" + }, + { + "type": "image", + "bbox": [ + 0.241, + 0.304, + 0.489, + 0.548 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.507, + 0.304, + 0.761, + 0.548 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.559, + 0.788, + 0.616 + ], + "angle": 0, + "content": "Fig. 1: Overview. We present ZeST, a zero-shot single-image approach to (a) transfer material from an exemplar image to an object in the input image. (b) ZeST can easily be extended to perform multiple material edits in an single image, and (c) perform implicit lighting-aware edits on rendering of a textured mesh." + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.661, + 0.741, + 0.826 + ], + "angle": 0, + "content": "Abstract. We propose ZeST, a method for zero-shot material transfer to an object in the input image given a material exemplar image. ZeST leverages existing diffusion adapters to extract implicit material representation from the exemplar image. 
This representation is used to transfer the material using pre-trained inpainting diffusion model on the object in the input image using depth estimates as geometry cue and grayscale object shading as illumination cues. The method works on real images without any training resulting a zero-shot approach. Both qualitative and quantitative results on real and synthetic datasets demonstrate that ZeST outputs photorealistic images with transferred materials. We also show the application of ZeST to perform multiple edits and robust material assignment under different illuminations." + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.827, + 0.574, + 0.84 + ], + "angle": 0, + "content": "Project Page: https://ttchengab.github.io/zest" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.375, + 0.161 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.176, + 0.787, + 0.341 + ], + "angle": 0, + "content": "Editing object materials in images (e.g., changing a marble statue into a steel statue) is useful for several graphics and design applications such as game design, e-commerce, etc. It is a highly challenging and time-consuming task even for expert artists and graphic designers - typically requires explicit 3D geometry and illumination estimation followed by careful tuning of the target material properties (e.g., metallic, roughness, transparency). Previous works try to alleviate the tedious material specification by synthesizing textures given input text prompts [39,50]. However, they are focused on texturing 3D meshes, which overlooks some of the unique challenges for material editing in 2D images, such as illumination. Another work [41] proposes fine-grained material editing on images, but it cannot directly transfer materials from a given exemplar." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.342, + 0.787, + 0.463 + ], + "angle": 0, + "content": "In this work, we aim to make 2D-to-2D material editing practical by eliminating the need for any 3D objects as well as explicit specification of material properties. Given a single image of an object and another material exemplar image, our goal is to transfer the material appearance from the exemplar to the target object directly in 2D. See Fig. 1 for some sample input and material exemplar images. We do not assume any access to the ground-truth 3D shapes, illumination, or even the material properties, making this problem setting practical and widely applicable for material editing." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.463, + 0.787, + 0.584 + ], + "angle": 0, + "content": "This setup is particularly challenging from two perspectives. First, an explicit approach to material transfer requires an understanding of many object-level properties in both the exemplar and the input image, such as geometry and illumination. Subsequently, we have to disentangle the material information from these properties and apply it to the new image; the entire process has several unsolved components. Second, there currently exists no real-world datasets for supervising this task. Collecting high-quality datasets presenting the same object with multiple materials and exemplars may be quite tedious." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.584, + 0.787, + 0.674 + ], + "angle": 0, + "content": "One of the main contributions of this work in alleviating these challenges is a zero-shot approach that can implicitly transfer arbitrary material appearances from a given 2D exemplar image onto a target 2D object image, without explicitly estimating any 3D or material properties from either image. We call our approach 'ZeST', as it does not require multiple exemplars or any training like previous works, making it easy to generalize to any images in the wild." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.787, + 0.795 + ], + "angle": 0, + "content": "With ZeST, we propose a carefully designed pipeline that repurposes several recent advances in 2D image generation and editing for our problem setting. At a high level, we adapt the geometry-guided generation (e.g., ControlNet [51]) and also exemplar-guided generation (e.g., IP-Adapter [49]) to implicitly isolate and transfer material appearance from a source exemplar to the target image while applying a foreground decolored image and inpainting for illumination cues. Our key contribution is presenting a simple pipeline with careful design choices that can be used to tackle a highly challenging problem of 2D-to-2D material transfer." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.795, + 0.787, + 0.84 + ], + "angle": 0, + "content": "Since this is a new problem setting, we created both synthetic and real-world evaluation datasets with material exemplars and object images. Extensive qualitative and quantitative evaluations demonstrate that ZeST excels in photo-" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.787, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.298 + ], + "angle": 0, + "content": "realism and material accuracy in the output images when compared against various baselines while being completely training-free. See Fig. 1(a) for sample results of ZeST. With our pipeline, artists can grab pre-designed materials as material exemplars and directly transfer them to real-world images. By using different object masks, we can also use ZeST to cast different materials to multiple objects present in a single image (Fig. 1 (b)). In addition, with slight alteration of the inputs, ZeST can perform light-aware material transfer by changing the reflections while keeping textural patterns consistent (Fig. 1 (c)); this method can have potential application when used in conjunction with 3D texture generation methods [10]." + }, + { + "type": "text", + "bbox": [ + 0.239, + 0.299, + 0.757, + 0.314 + ], + "angle": 0, + "content": "In summary, \\(ZeST\\) has several favorable properties for material editing:" + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.322, + 0.786, + 0.398 + ], + "angle": 0, + "content": "- Zero-shot, training free, single-image material transfer. By leveraging 2D generative priors, ZeST works in a zero-shot manner without needing dataset finetuning. Unlike some contemporary works [50] that implicitly capture material properties using several material images, ZeST only needs a single material exemplar image to transfer the material in pixel space." + }, + { + "type": "text", + "bbox": [ + 0.228, + 0.398, + 0.786, + 0.457 + ], + "angle": 0, + "content": "- No explicit 3D, illumination or materials. 
With 2D depth and segmentation estimation (which are readily available these days) and implicit material transfer, we eliminate the need for explicit specification of 3D meshes, illumination or material properties (say, in terms of BRDF)." + }, + { + "type": "text", + "bbox": [ + 0.228, + 0.458, + 0.787, + 0.533 + ], + "angle": 0, + "content": "- Several downstream applications. Given the simplistic and practical nature of our approach, ZeST can be used for several downstream graphics applications such as applying pre-designed materials to real-world images, editing multiple object materials in a single image, and perform lighting-aware material transfer given untextured mesh renderings." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.554, + 0.388, + 0.57 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.584, + 0.788, + 0.674 + ], + "angle": 0, + "content": "Diffusion Models. Denoising Diffusion Probabilistic models have emerged as the state-of-the-art for class-conditional and text-prompt conditioned image generation [18, 23-27, 43]. These models generate photorealistic images with exemplary geometry, materials, illumination, and scene composition. The models have been extended to be conditioned on input images for computational photography tasks such as super-resolution, style transfer, and inpainting." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.675, + 0.789, + 0.841 + ], + "angle": 0, + "content": "Further work demonstrate controllable generation conditioned on text-based instructions [8,20,22,46], semantic segmentation [4], bounding box [11,30,47,48], depth [6,53], sketch [34,51], and image prompt [49]. Prompt-to-prompt and Prompt+ edit the input image by performing inversion followed by the introduction of new terms and reweighting the effect of terms in the input prompt [22,46]. InstructPix2Pix performs edits an input image conditioned on an instruction [7]. Ge et al. proposed rich text based image editing allowing for style assignment and specific description to specific terms in the prompt [20]. While these methods edit the image semantically and high-level descriptions, assigning specific materials using text-based approach is challenging since text acts as a limiting modality for describing textures." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.298 + ], + "angle": 0, + "content": "A collection of reference images can be used to learn concepts which can be further included in text prompts to generate images with the learned concepts [12, 29, 40]. Spatial modalities such as depth and sketches have been used for controlling the generated images [34, 49, 51]. Pre-trained text-to-image models can be leveraged for 3D-aware image editing using language and depth cues [13, 33, 35]. The use of ControlNet has been extended by Bhat et al. to use depth for controlling the scene composition while maintaining other scene attributes [6]. Object orientation, illumination, and other object attributes can be controlled in a continuous manner using ControlNet and learned continuous tokens embedding the 3D properties [13]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.298, + 0.788, + 0.465 + ], + "angle": 0, + "content": "Material acquisition and editing. 
Material acquisition and editing is an active field of research taking into account illumination and object geometry. Previous work has demonstrated material acquisition under known illumination conditions and camera [2,3,17]. Such acquisition in the wild requires localizing objects with similar materials, which has been facilitated by supervised material segmentation and leveraging pre-trained vision representation backbones [5,31,42,45]. Khan et al. introduced in-image material editing using estimates of depth [28]. Recent works have employed generative adversarial networks [21] for perceptual material editing [16, 44] and physical shader-based editing using text-to-image models [41]. The use of generative models has been extended to explicitly learning materials [32] and texturing 3D meshes [9, 10, 39, 50]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.465, + 0.788, + 0.526 + ], + "angle": 0, + "content": "In our work, we aim to use pre-trained image generation diffusion models to perform exemplar-based material transfer from a single image. We aim to use ControlNet and IP-adapter to perform material transfer in a zero-shot way without any training." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.548, + 0.331, + 0.564 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.58, + 0.788, + 0.673 + ], + "angle": 0, + "content": "In this section, we describe our method ZeST that performs exemplar-based material transfer. Recent methods perform the related problem of texture synthesis on meshes [39,50] by finetuning a diffusion model on 3-5 material exemplar images to capture the texture/material in the latent space. On the contrary, ZeST only requires a single material exemplar image and a single input image, accomplishing material transfer in a zero-shot, training-free manner." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.694, + 0.4, + 0.71 + ], + "angle": 0, + "content": "3.1 Problem Setting" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.81 + ], + "angle": 0, + "content": "Given a material exemplar image \\(M\\) and an input image \\(I\\), we aim to output an edited image \\(I_{gen}\\) from \\(I\\) by transferring the material from the material exemplar to the object in the input image while preserving other object and scene properties (e.g. object geometry, background, lighting etc.). Performing this task requires understanding the material, geometry, and illumination from both the exemplar and the input image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.788, + 0.84 + ], + "angle": 0, + "content": "In practice, estimating all the aforementioned object-level properties and further isolating material information explicitly from \\(M\\) is challenging since these" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.221, + 0.147, + 0.788, + 0.328 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.34, + 0.788, + 0.438 + ], + "angle": 0, + "content": "Fig. 2: ZeST Architecture. Given a material exemplar \\( M \\) and an input image \\( I \\), we first encode material exemplar with an image encoder (e.g., IP-Adaptor). 
Concurrently, we convert the input image into a depth map \\( D_I \\) and a foreground-grayscale image \\( I_{init} \\) to feed into the geometry and latent illumination guidance branch, respectively. By combining the two sources of guidance with the latent features from the material encoding, ZeST can transfer the material properties onto the object in input image while preserving all other attributes." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.469, + 0.788, + 0.532 + ], + "angle": 0, + "content": "properties are entangled in the pixel space. Therefore, we propose to tackle this problem in the latent space of diffusion models. Specifically, we aim to extract a latent representation \\( z_{M} \\) containing the material and texture information that we can then inject into a generative diffusion model \\( S \\) to generate \\( I_{gen} \\)." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.554, + 0.393, + 0.569 + ], + "angle": 0, + "content": "3.2 ZeST Overview" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.583, + 0.788, + 0.704 + ], + "angle": 0, + "content": "Since there exists no synthetic/real image dataset to supervise the learning of a 2D-to-2D material transfer, we perform the material transfer in a zero-shot training-free manner. We first break down this complex task into sub-problems of (1) encoding the material exemplar, (2) geometry-guided image editing, and (3) making the generation process illumination-aware. Given the recent advances in high-fidelity diffusion models and complementary adapters for image generation, we leverage existing pre-trained modules to tackle each of the sub-problems that together compose our pipeline to perform image-prompted material editing." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.788, + 0.765 + ], + "angle": 0, + "content": "Figure 2 presents an overview of our pipeline, which comprises three branches to guide the material, geometry, and lighting information, respectively. The Material Encoding branch takes the material exemplar image \\( M \\) as input, which is processed by the image encoder to obtain a material latent representation \\( z_{M} \\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.788, + 0.842 + ], + "angle": 0, + "content": "Concurrently, we feed the input image \\(I\\) into Geometry Guidance and Latent Illumination Guidance Branch. The Geometry Guidance branch computes the depth map \\(D_I\\) for the image \\(I\\), which is used as the input to ControlNet. The Latent Illumination Guidance branch computes a foreground mask \\(F\\) using \\(I\\) and creates a foreground-grayscale image \\(I_{init}\\), which we use as input to the" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." 
+ }, + { + "type": "image", + "bbox": [ + 0.218, + 0.155, + 0.323, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.237, + 0.237, + 0.304, + 0.246 + ], + "angle": 0, + "content": "Material Exemplar" + }, + { + "type": "image", + "bbox": [ + 0.326, + 0.146, + 0.437, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.36, + 0.238, + 0.404, + 0.246 + ], + "angle": 0, + "content": "Input Image" + }, + { + "type": "image", + "bbox": [ + 0.44, + 0.155, + 0.546, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.443, + 0.237, + 0.541, + 0.246 + ], + "angle": 0, + "content": "Estimated Depth (Optional)" + }, + { + "type": "image_caption", + "bbox": [ + 0.629, + 0.146, + 0.727, + 0.155 + ], + "angle": 0, + "content": "IP-Adaptor Combinations" + }, + { + "type": "image", + "bbox": [ + 0.571, + 0.156, + 0.676, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.588, + 0.237, + 0.658, + 0.246 + ], + "angle": 0, + "content": "(a) \\(\\mathrm{Img2Img + Text}\\)" + }, + { + "type": "image", + "bbox": [ + 0.681, + 0.156, + 0.786, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.695, + 0.237, + 0.773, + 0.245 + ], + "angle": 0, + "content": "(b) ControlNet Model" + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.26, + 0.788, + 0.346 + ], + "angle": 0, + "content": "Fig. 3: The design choice of IP-Adaptor with ControlNet. Given the material exemplar and the input image, we dive into the different choices of utilizing the IP-Adaptor. In particular we realize that an \\(\\mathrm{Img2Img + }\\) text module (a) wouldn't properly transfer the materials properly to the main object. On the other hand, ControlNet (b) will preserve the geometry information of the given input. We thus utilize this as the starting point for geometry guidance to further explore the best illumination cues." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.373, + 0.785, + 0.448 + ], + "angle": 0, + "content": "Diffusion Inpainting pipeline. We concatenate the embeddings from ControlNet with the inpainting diffusion model at the corresponding and inject the material embedding \\( z_{M} \\) through the cross-attention. The output of the inpainting diffusion model, \\( I_{gen} \\), with the edited image containing the object in \\( I \\) cast with material from exemplar image \\( M \\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.449, + 0.785, + 0.479 + ], + "angle": 0, + "content": "Our design choices to facilitate computation of material embedding, geometry guidance, and illumination cues are discussed in the following sections." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.5, + 0.502, + 0.515 + ], + "angle": 0, + "content": "3.3 Encoding Material Exemplar" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.523, + 0.788, + 0.644 + ], + "angle": 0, + "content": "Given the material exemplar image \\( M \\), this branch encodes the image into a latent representation while preserving its material properties. Previous works [39, 50] address this by finetuning a text-to-image diffusion model to encode the image into a rare token, implicitly treating the rare token as a latent representation that can be used in conjunction with other texts for image generation. 
However, this approach of optimizing for the material token requires the time-consuming step for every new material exemplar and usually requires 3-5 images to prevent overfitting." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.788, + 0.735 + ], + "angle": 0, + "content": "We draw inspiration from the recently introduced IP-Adapter [49]. The IP adapter uses a CLIP image encoder to extract image features that can be injected into a diffusion model via the cross-attention layers. These features can be used as an additional condition to guide text prompts or other mediums for the generation. For example, one can input an image of a person and then describe \"on the mountain\" with text to obtain an image of the person in the mountains." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.788, + 0.84 + ], + "angle": 0, + "content": "However, we realize that IP-Adaptor does not work well when combined with an Img2Img pipeline, as shown in Figure 3 (a) for our task. Moreover, adding text guidances like \"changing the apple texture to golden bowl\" does not produce photorealistic output and does not preserve other scene information (i.e. background). This problem of geometry and material entanglement within material embedding \\( z_{M} \\) remains unsolved, thus motivating the need for geometry and illumination guidance." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.147, + 0.611, + 0.162 + ], + "angle": 0, + "content": "3.4 Geometry Guidance via Depth Estimation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.17, + 0.788, + 0.306 + ], + "angle": 0, + "content": "Since decoupling geometry and material properties in images is challenging and requires additional training data, we provide an alternative solution where we enforce a stronger geometry prior to the diffusion model to overwrite the structural information present in \\( z_{M} \\). To this end, we adopt a depth-based ControlNet to provide geometry guidance from the input image \\( I \\). We observe that the geometry information from the depth map \\( D_{I} \\) overwrites the geometry information encoded in the \\( z_{M} \\) (see Figure 3 (b)). Note that with the geometry enforced by using depth-based ControlNet, we can successfully transfer the golden material of the bowl to the apple." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.306, + 0.788, + 0.411 + ], + "angle": 0, + "content": "While the use of ControlNet with IP-Adaptor is introduced in the original IP-Adaptor paper [49], we employ it for a different purpose contrary to applying new structural control over an object in the image (e.g., changing a person's pose). After extensively comparing various components for encoding the material exemplar and input image (analysis in Section 4.2), we find the depth-based guidance from pre-trained ControlNet helps us preserve the original geometry of the object for the task of material transfer." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.412, + 0.788, + 0.473 + ], + "angle": 0, + "content": "While the addition of ControlNet helps preserve the geometry, we observe that the results suffer from inconsistency in preserving the illumination and background from the input image. 
This is evident in Figure 3, where the background and the lighting changes differ from the input." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.494, + 0.557, + 0.509 + ], + "angle": 0, + "content": "3.5 Latent-space Illumination Guidance" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.517, + 0.788, + 0.621 + ], + "angle": 0, + "content": "Our final branch is primarily responsible for preserving the illumination and background in the input image. We propose two-fold guidance for illumination in the latent space during generation - an inpainting module and a foreground decoloring process. In addition to the attached IP-Adaptor and ControlNet, we adopt an inpainting diffusion model \\( S \\) instead of a standard generator. Specifically, our ControlNet-inpainting procedure takes in four conditions for image generation:" + }, + { + "type": "equation", + "bbox": [ + 0.408, + 0.624, + 0.785, + 0.64 + ], + "angle": 0, + "content": "\\[\nI _ {g e n} = \\mathcal {S} \\left(z _ {M}, D _ {I}, I _ {\\text {i n i t}}, F\\right), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.788, + 0.689 + ], + "angle": 0, + "content": "where \\( z_{M} \\) is the material encoding, \\( D_{I} \\) is the depth map computed for input image \\( I \\), \\( I_{init} \\) is the initial image to denoise from, and \\( F \\) is the foreground mask of target object in \\( I \\) which we are editing." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.689, + 0.788, + 0.841 + ], + "angle": 0, + "content": "We conduct an ablation on the various versions of \\( I_{init} \\), as shown in Figure 4. Specifically, we test out the following settings: (1) using the original input image, (2) initializing the foreground with random noise, and (3) using the foreground grayscale image. Intuitively, directly letting \\( I_{init} = I \\) (Setting (1)) would be a preferable option as \\( I \\) encompasses implicit lighting information (from the object's shading and the surrounding environment) while conveniently enforces all other parts of the image other than the object to remain the same. In practice, however, we found that using the original image inevitably introduces a strong prior of the base color from the input object (e.g. orange color of pumpkin), which would be entangled with the material base color from \\( M \\) in the output" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.356, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.143, + 0.492, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.514, + 0.143, + 0.788, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.27, + 0.788, + 0.34 + ], + "angle": 0, + "content": "Fig. 4: Ablating input for illumination guidance. To validate our design choice of the foreground-grayscale image for initializing inpainting, we compare the generated results against using the original image and random noise as inputs. The original image presents a strong base color prior that perturbs the generation, while the random image neglects shading information, leading to wrong lighting in both examples." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.367, + 0.789, + 0.503 + ], + "angle": 0, + "content": "image. 
This artifact is sustained even when we significantly extend the number of denoising steps. On the other hand, when initializing \\( I_{init} \\) with random noise, the method indeed removes the base color prior but also removes the shading information causing incorrect illuminations in the synthesized object (e.g., the left side of the synthesized pumpkin is darker, but light is coming from the left). In our proposed pipeline, we perform grayscale operations in the pixel space for the object region (3). This provides a balanced solution of removing the strong color priors from the input image while keeping the shading cues for the inpainting diffusion model." + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.504, + 0.518, + 0.52 + ], + "angle": 0, + "content": "Thus, we propose to initialize \\( I_{init} \\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.384, + 0.529, + 0.785, + 0.547 + ], + "angle": 0, + "content": "\\[\nI _ {\\text {i n i t}} = F \\odot I _ {\\text {g r a y}} + (1 - F) \\odot I, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.555, + 0.788, + 0.603 + ], + "angle": 0, + "content": "which converts the color of foreground object in the image to grayscale. \\((1 - F)\\odot I\\) implicitly preserves the lighting direction, intensity, and color information, and \\(F\\odot I_{gray}\\) preserves the object's shading information without base color prior." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.621, + 0.457, + 0.637 + ], + "angle": 0, + "content": "3.6 Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.645, + 0.788, + 0.737 + ], + "angle": 0, + "content": "We implement our method using Stable Diffusion XL Inpainting [36] with the corresponding version of depth-based ControlNet [51] and IP-Adaptor [49]. We use Dense Prediction Transformers for depth estimation [38] and \\(\\mathrm{Rembg}^1\\) for foreground extraction. Our method is implemented in PyTorch and runs on a single Nvidia A-10 GPU with 24 GB of RAM. For all Dreambooth approaches, we use the official LoRA-Dreambooth provided by Diffusers." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.758, + 0.376, + 0.775 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.787, + 0.787, + 0.819 + ], + "angle": 0, + "content": "We evaluate the efficacy of our method against various baselines. We also present several examples of downstream applications using our method." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.824, + 0.516, + 0.841 + ], + "angle": 0, + "content": "1 https://github.com/danielgatis/rembg" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.144, + 0.357, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.362, + 0.144, + 0.499, + 0.217 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.507, + 0.144, + 0.643, + 0.217 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.145, + 0.787, + 0.216 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.218, + 0.356, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.362, + 0.218, + 0.498, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.506, + 0.218, + 0.642, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.218, + 0.787, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.285, + 0.356, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.362, + 0.285, + 0.498, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.506, + 0.285, + 0.642, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.285, + 0.787, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.352, + 0.356, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.362, + 0.352, + 0.498, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.506, + 0.352, + 0.643, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.352, + 0.787, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.43, + 0.788, + 0.514 + ], + "angle": 0, + "content": "Fig. 5: Qualitative results on diverse materials. We present results of material transfer from a diverse set of material exemplar images. Even when perturbed by lighting and complex geometry, ZeST can still isolate the material information from the exemplar image and transfer to various objects while preserving the original geometry and illumination conditions. Note the change in specular regions as shinier materials are chosen in the case of the car made of brass and the dinosaur made of shiny steel." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.547, + 0.334, + 0.56 + ], + "angle": 0, + "content": "4.1 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.579, + 0.787, + 0.625 + ], + "angle": 0, + "content": "As the first to propose this problem, we create two datasets for comparison and evaluation. The real-world datasets provide us an understanding of our model's robustness, while the synthetic dataset is used for standard quantitative metrics." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.626, + 0.787, + 0.702 + ], + "angle": 0, + "content": "Real-World Dataset. We curate a dataset comprising of 30 diverse material exemplars and 30 input images, collected from copyright-free image sources (i.e. 
Unsplash) and images generated by DALLE-3. All of these images are object-centric, where there exists a main object in the foreground to which we are extracting the material from or applying the material onto." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.703, + 0.787, + 0.809 + ], + "angle": 0, + "content": "Synthetic Dataset. To perform quantitative evaluation, we use Blender to create a synthesized dataset of 9 materials randomly initialized by adjusting the base color, metallic, and roughness, and 20 meshes of different categories from Objaverse [15] rendered at three random viewpoints each, generating 540 ground-truth renderings. We render spheres assigned with each material individually and use the rendered image the material exemplar and pre-textured mesh rendering as input for all methods." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.81, + 0.787, + 0.842 + ], + "angle": 0, + "content": "While \\(ZeST\\) is completely training-free, other methods of learning materials (e.g., Dreambooth) require further fine-tuning for every exemplar given. This" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "image", + "bbox": [ + 0.216, + 0.145, + 0.787, + 0.413 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.424, + 0.788, + 0.509 + ], + "angle": 0, + "content": "Fig. 6: Qualitative comparisons against baselines. Given the material exemplar and input image in the first column, we compare our method to five different baselines. Without any geometry guidance, all image editing baselines fail to impose the correct geometry of the input image. On the other hand, using Dreambooth with our geometry and illumination guidance often contains albedo shifts, potentially due to information loss when encoding material properties into a word token." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.537, + 0.786, + 0.569 + ], + "angle": 0, + "content": "makes it infeasible to scale up the two datasets. Both our datasets are of comparable sizes to previous works on finetuning diffusion models [40, 50]." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.59, + 0.421, + 0.604 + ], + "angle": 0, + "content": "4.2 Qualitative Results" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.788, + 0.779 + ], + "angle": 0, + "content": "Material transfer results on real images. To demonstrate the application of ZeST on a wide range of materials and objects, we present examples of material transfer in Figure 5. The first three rows present results on real-world images, while the fourth row shows results using PBR materials [1]. Based on the examples, we observe that the material is properly disentangled from the geometry in the material exemplar and follows the shape of the object in the input image. This is particularly evident in the results of the orange, frog, and Groot toy figure, where the material is completely flat. We also notice accurate shadings in the bust and table examples when comparing them against their inputs. In the car and toy dinosaur examples, the reflections from the exemplars are isolated from the textural patterns and cast reasonably based on the illumination cues." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Qualitative comparisons. 
Since our work is the first to perform material transfer in latent space, we modified existing methods to compare against. Specifically, since existing image-guided texture synthesis methods utilize Dreambooth for their first step to encode the textures from images into word tokens [14,39,50]," + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.253 + ], + "angle": 0, + "content": "we set Dreambooth as the backbone for learning material properties and combine with text-guided image editing techniques for comparison, including MasaCtrl and Instruct-Pix2Pix, and using ZeST but swapping out the IP-Adaptor with text. While our method is training-free, Dreambooth requires finetuning for every material exemplar given. We also explore alternative options to combine with IP-Adaptor, including text-guided inpainting and Instruct-Pix2Pix with the prompt \"Change the texture of the object\"." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.253, + 0.785, + 0.449 + ], + "angle": 0, + "content": "We present qualitative comparisons against the baselines on four exemplar and input images in Figure 6. By using Inpainting with Text prompt instead of ControlNet, the model ignores the geometry of the original input when casting the materials. In both cases when using Instruct-Pix2Pix (with IP-Adaptor or Dreambooth), the geometry of all objects is better preserved, but the model fails to capture the material property from the material exemplar image. The combination of Dreambooth and MasaCtrl fails to preserve the geometry of the object in the input image and misattributes the material. The closest baseline to ours is Dreambooth with our proposed geometry and illumination guidance; however, we observe that the word encoding process results in some information loss as evident in the color shifts of the backpack and the astronaut figure. Furthermore, the method requires additional training for every material exemplar, whereas ZeST takes roughly 15 seconds to generate the image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.45, + 0.784, + 0.526 + ], + "angle": 0, + "content": "Our method, ZeST, performs the task effectively by retaining the object geometry, scene illumination, and attributing the material correctly. Additionally, note that ZeST adapts to more challenging material exemplar images, such as transparent materials (glass cup in Figure 6 Row 3) and images with other minor objects (additional hand in Figure 6 Row 4)." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.546, + 0.479, + 0.561 + ], + "angle": 0, + "content": "4.3 Quantitative Comparisons" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.569, + 0.784, + 0.674 + ], + "angle": 0, + "content": "We follow previous work [41, 50] and use the synthetic images to compare all methods in terms of PSNR, LPIPS [52], and CLIP similarity score [37] against ground truth renderings. We also incorporate another DreamSim [19], a more recent metric that is more similar to human references. We grab IP-Adaptor + Instruct-Pix2Pix and Dreambooth + our geometry and illumination guidance as baselines, as they are the strongest (and only) performers from our qualitative comparisons that can roughly edit the material based on the geometry." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.784, + 0.764 + ], + "angle": 0, + "content": "Table 1 (left) presents our results. We see a dramatic improvement when shifting from the instruct-pix2pix pipeline to our geometry and illumination guidance. While using Dreambooth performs similarly to our IP-Adaptor in the synthetic dataset, it requires a fine-tuned model for each material exemplar, making it unfeasible to scale up. In addition, we show in the next section that our method excels in real-world datasets." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.784, + 0.84 + ], + "angle": 0, + "content": "- **User Study.** We also create a user study with 16 participants to understand the capability of our model given real-world materials tested on real images. Each subject is shown 5 random samples from the 900 combinations generated from the dataset with our method and against the two strongest baselines: Dreambooth + ControlNet-Inpainting and IP-Adaptor + Instruct-Pix2Pix. We ask" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.243 + ], + "angle": 0, + "content": "Table 1: Quantitative Comparisons and User Study. We grab the strongest baselines in our qualitative comparisons for additional studies. Left: We measure the PSNR, LPIPS [52], CLIP similarity score [37], and DreamSim [19] in a quantitative study on the synthetic dataset of 540 exemplar-input combinations. Right: We perform a user study to evaluate the material fidelity and photorealism of the edited images from each method. We randomly sample 5 out of 900 real-world exemplar-input combinations for each of the 16 participants." + }, + { + "type": "table", + "bbox": [ + 0.219, + 0.254, + 0.785, + 0.304 + ], + "angle": 0, + "content": "
PSNR↑LPIPS↓CLIP↑DreamSim↓Fidelity↑Photorealism↑
IP-Adaptor + Instruct-Pix2Pix17.080.0990.7400.390IP-Adaptor + Instruct-Pix2Pix1.48
DB + Our Geo/illum. Guidance25.520.0580.8740.238DB + Our Geo/illum. Guidance3.25
Ours25.590.0530.8830.198Ours4.05
" + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.318, + 0.492, + 0.424 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.525, + 0.319, + 0.787, + 0.423 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.436, + 0.788, + 0.493 + ], + "angle": 0, + "content": "Fig. 7: Robustness to lighting and object pose. We present two types of robustness testing. (a): Robustness to changing the material exemplar lighting and pose. (b): Zooming into the material exemplar. Our model yields highly similar results in both, showing the capability to adapt to these external changes." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.522, + 0.785, + 0.581 + ], + "angle": 0, + "content": "each subject to rate each image from 1 to 5 based on (1) material fidelity: how close the material in the generated image is compared to the original exemplar and (2) photorealism: how realistic the generated image is. Our results are summarized in Table 1 (right)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.583, + 0.788, + 0.658 + ], + "angle": 0, + "content": "Our results show significant improvements from the two baselines in both material fidelity and photorealism of the edited image. The score improvements are also greater in real-world scenarios compared to synthetic ones. This could be the result of information loss during finetuning and overfitting to the exemplar background, which is less significant under controlled synthetic scenarios." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.68, + 0.469, + 0.695 + ], + "angle": 0, + "content": "4.4 Robustness of the Model" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.785, + 0.734 + ], + "angle": 0, + "content": "In addition to the diverse set of results presented in Figure 5, we extensively test out the behavior of ZeST with special cases of material exemplar images." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Relighting and rotating the object in the material exemplar image. A good material extractor should be agnostic to small lighting and rotation changes of the same object used as the material exemplar. To evaluate this, we render a random material and cast it onto an irregular-shaped pumpkin (another example is in the Appendix). We then render three samples of the pumpkin, a default lighting orientation, a change in lighting direction pitch by 120 degrees, and a random rotation, as shown in 7 (a). The transferred materials onto the dolphin" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "image", + "bbox": [ + 0.216, + 0.145, + 0.497, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.145, + 0.788, + 0.27 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.28, + 0.789, + 0.337 + ], + "angle": 0, + "content": "Fig. 8: Multiple Material Transfers in a Single Image. By replacing the foreground extraction with an open-vocabulary segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be applied iteratively to cast different material properties to different objects in a single RGB image." 
+ }, + { + "type": "image", + "bbox": [ + 0.216, + 0.351, + 0.785, + 0.436 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.45, + 0.788, + 0.492 + ], + "angle": 0, + "content": "Fig.9: Lighting-aware Image Editing. Given a rendering of a textured mesh, we can alter \\(ZeST\\) slightly to achieve lighting-aware material edit. It can be seen from both examples where the reflection can be disentangled from the object texture." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.522, + 0.785, + 0.551 + ], + "angle": 0, + "content": "remain roughly consistent across all samples, showing that our method is fairly resistant to these changes at a small scale." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.552, + 0.788, + 0.658 + ], + "angle": 0, + "content": "Effect of image scale of material exemplar image. To examine the effect of the scale of the material exemplar, we first use an image of a woolen cloth material with a distinctive repeating pattern and apply our method to an image of a chair. Then, we zoom into the exemplar image manually to the edge only very few repeated patterns are left. Our results in Figure 7 (b) show that while the scale of the material is drastically different, the model automatically re-adjusts the patterns into a reasonable size to be cast onto the input image." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.68, + 0.367, + 0.695 + ], + "angle": 0, + "content": "4.5 Applications" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.788, + 0.795 + ], + "angle": 0, + "content": "Applying multiple materials to multiple objects. By replacing the foreground extraction with a segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be used to iteratively change multiple materials in a single image. Figure 8 presents two examples of editing multiple objects in a single image. As evident in the transparent glass chair where the wooden table behind is roughly visible, ZeST generalizes to complex scenes with multiple objects." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.796, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Lighting-aware Material Transfer. Given a material exemplar image and an untextured mesh rendered under multiple illumination conditions, \\(ZeST\\) can also perform lighting-aware material transfer. Specifically, we first generate the" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.145, + 0.493, + 0.229 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.145, + 0.788, + 0.229 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.239, + 0.788, + 0.31 + ], + "angle": 0, + "content": "Fig. 10: Limitations. Our method primarily fails in two modes. (a) The model sometimes picks the most \"probable\" areas to transfer the material, instead of casting the material on the entire object. (b) If two textures are present in the exemplar image (e.g., foreground and background of the tennis ball, the glazed top and bottom logo of the cup), the model sometimes combine both materials when performing the edit." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.337, + 0.789, + 0.474 + ], + "angle": 0, + "content": "materials and textures of the image under Lighting 1 using ZeST. Then, by fixing the same seed during generation and using the generating image given the first lighting as the input to the second, we can enforce consistency in the material and texture generated (details of implementation in Appendix) while changing the reflections. We show examples of transferring the glazed cup material to two mesh renders in Figure 9. ZeST successfully disentangles the reflections while keeping most textural patterns consistent between the two images. This technique could potentially be applied jointly with other 3D texture synthesis works [10] and be helpful to applications such as e-commerce design." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.494, + 0.357, + 0.508 + ], + "angle": 0, + "content": "4.6 Limitations" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.517, + 0.789, + 0.669 + ], + "angle": 0, + "content": "Since \\(ZeST\\) operates majorly in the latent space, the model sometimes exhibits uncontrollable behaviors based on its image understanding. Figure 10 presents two forms of more frequent failure cases: (a) Partial material transfer: the material is only transferred to parts instead of the entirety of the object. We hypothesize that the failure stems from the entanglement of material properties and the exemplar's identity, as the material is only applied to where it seems the most probable (e.g., only apply the jacket material to the statue's body). (b) Blending multiple materials: since the current IP-Adaptor does not have a module to extract regions of an image for material transfer, \\(ZeST\\) sometimes mixes up multiple materials in the exemplar image during transfer." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.69, + 0.36, + 0.706 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.789, + 0.841 + ], + "angle": 0, + "content": "We present ZeST, a zero-shot, training-free method for exemplar-based material-editing. ZeST is built completely using readily available pre-trained models and demonstrates generalizable and robust results on real images. We curate synthetic and real image datasets to evaluate the performance of our approach. We also demonstrate downstream applications like multiple edits in a single image and material-aware relighting. ZeST serves as a strong starting point for future research in image-to-image material transfer, implying opportunities of leveraging pre-trained image diffusion models for complex graphic designing tasks." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.323, + 0.16 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.177, + 0.65, + 0.19 + ], + "angle": 0, + "content": "1. https://wwwtexts.com/browse/pbr-materials/114558" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.191, + 0.786, + 0.218 + ], + "angle": 0, + "content": "2. Aittala, M., Weyrich, T., Lehtinen, J.: Practical svbrdf capture in the frequency domain. ACM Trans. Graph. 
32(4), 110-1 (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.219, + 0.786, + 0.245 + ], + "angle": 0, + "content": "3. Aittala, M., Weyrich, T., Lehtinen, J., et al.: Two-shot svbrdf capture for stationary materials. ACM Trans. Graph. 34(4), 110-1 (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.246, + 0.786, + 0.273 + ], + "angle": 0, + "content": "4. Bar-Tal, O., Yariv, L., Lipman, Y., Dekel, T.: Multidiffusion: Fusing diffusion paths for controlled image generation (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.274, + 0.786, + 0.314 + ], + "angle": 0, + "content": "5. Bell, S., Upchurch, P., Snavely, N., Bala, K.: Material recognition in the wild with the materials in context database. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3479-3487 (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.315, + 0.786, + 0.343 + ], + "angle": 0, + "content": "6. Bhat, S.F., Mitra, N.J., Wonka, P.: Loosecontrol: Lifting controlnet for generalized depth conditioning. arXiv preprint arXiv:2312.03079 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.343, + 0.786, + 0.384 + ], + "angle": 0, + "content": "7. Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18392-18402 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.384, + 0.786, + 0.424 + ], + "angle": 0, + "content": "8. Cao, M., Wang, X., Qi, Z., Shan, Y., Qie, X., Zheng, Y.: Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. arXiv preprint arXiv:2304.08465 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.426, + 0.786, + 0.467 + ], + "angle": 0, + "content": "9. Cao, T., Kreis, K., Fidler, S., Sharp, N., Yin, K.: Texfusion: Synthesizing 3d textures with text-guided image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4169-4181 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.468, + 0.786, + 0.508 + ], + "angle": 0, + "content": "0. Chen, D.Z., Siddiqui, Y., Lee, H.Y., Tulyakov, S., Nießner, M.: Text2tex: Text-driven texture synthesis via diffusion models. arXiv preprint arXiv:2303.11396 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.509, + 0.786, + 0.55 + ], + "angle": 0, + "content": "1. Chen, M., Laina, I., Vedaldi, A.: Training-free layout control with cross-attention guidance. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 5343-5353 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.55, + 0.786, + 0.591 + ], + "angle": 0, + "content": "2. Chen, W., Hu, H., Li, Y., Ruiz, N., Jia, X., Chang, M.W., Cohen, W.W.: Subject-driven text-to-image generation via apprenticeship learning. Advances in Neural Information Processing Systems 36 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.591, + 0.786, + 0.633 + ], + "angle": 0, + "content": "3. Cheng, T.Y., Gadelha, M., Groueix, T., Fisher, M., Mech, R., Markham, A., Trigoni, N.: Learning continuous 3d words for text-to-image generation. arXiv preprint arXiv:2402.08654 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.633, + 0.786, + 0.674 + ], + "angle": 0, + "content": "4. Corneanu, C., Gadde, R., Martinez, A.M.: Latentpaint: Image inpainting in latent space with diffusion models. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 4334-4343 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.675, + 0.786, + 0.73 + ], + "angle": 0, + "content": "5. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A universe of annotated 3d objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13142-13153 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.73, + 0.786, + 0.771 + ], + "angle": 0, + "content": "6. Delanoy, J., Lagunas, M., Condor, J., Gutierrez, D., Masia, B.: A generative framework for image-based editing of material appearance using perceptual attributes. In: Computer Graphics Forum. vol. 41, pp. 453-464. Wiley Online Library (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.771, + 0.786, + 0.813 + ], + "angle": 0, + "content": "7. Deschaintre, V., Aittala, M., Durand, F., Drettakis, G., Bousseau, A.: Flexible svbrdf capture with a multi-image deep network. In: Computer graphics forum. vol. 38, pp. 1-13. Wiley Online Library (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.813, + 0.786, + 0.84 + ], + "angle": 0, + "content": "8. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34, 8780-8794 (2021)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.177, + 0.786, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.787, + 0.189 + ], + "angle": 0, + "content": "19. Fu*, S., Tamir*, N., Sundaram*, S., Chai, L., Zhang, R., Dekel, T., Isola, P.: Dreamsim: Learning new dimensions of human visual similarity using synthetic data. NeurIPS (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.786, + 0.232 + ], + "angle": 0, + "content": "20. Ge, S., Park, T., Zhu, J.Y., Huang, J.B.: Expressive text-to-image generation with rich text. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7545-7556 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.232, + 0.787, + 0.272 + ], + "angle": 0, + "content": "21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139-144 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.273, + 0.786, + 0.313 + ], + "angle": 0, + "content": "22. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.786, + 0.342 + ], + "angle": 0, + "content": "23. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.343, + 0.787, + 0.383 + ], + "angle": 0, + "content": "24. Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M., Salimans, T.: Cascaded diffusion models for high fidelity image generation. 
The Journal of Machine Learning Research 23(1), 2249-2281 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.384, + 0.786, + 0.411 + ], + "angle": 0, + "content": "25. Ho, J., Salimans, T.: Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.412, + 0.786, + 0.452 + ], + "angle": 0, + "content": "26. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10124-10134 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.453, + 0.786, + 0.493 + ], + "angle": 0, + "content": "27. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.494, + 0.786, + 0.522 + ], + "angle": 0, + "content": "28. Khan, E.A., Reinhard, E., Fleming, R.W., Bülthoff, H.H.: Image-based material editing. ACM Transactions on Graphics (TOG) 25(3), 654-663 (2006)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.523, + 0.786, + 0.563 + ], + "angle": 0, + "content": "29. Kumari, N., Zhang, B., Zhang, R., Shechtman, E., Zhu, J.Y.: Multi-concept customization of text-to-image diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1931-1941 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.565, + 0.786, + 0.605 + ], + "angle": 0, + "content": "30. Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22511-22521 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.606, + 0.786, + 0.646 + ], + "angle": 0, + "content": "31. Liang, Y., Wakaki, R., Nobuhara, S., Nishino, K.: Multimodal material segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 19800-19808 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.647, + 0.786, + 0.674 + ], + "angle": 0, + "content": "32. Lopes, I., Pizzati, F., de Charette, R.: Material palette: Extraction of materials from a single image. arXiv preprint arXiv:2311.17060 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.675, + 0.786, + 0.715 + ], + "angle": 0, + "content": "33. Michel, O., Bhattad, A., VanderBilt, E., Krishna, R., Kembhavi, A., Gupta, T.: Object 3dit: Language-guided 3d-aware image editing. Advances in Neural Information Processing Systems 36 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.716, + 0.786, + 0.757 + ], + "angle": 0, + "content": "34. Mou, C., Wang, X., Xie, L., Zhang, J., Qi, Z., Shan, Y., Qie, X.: T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.758, + 0.786, + 0.799 + ], + "angle": 0, + "content": "35. Pandey, K., Guerrero, P., Gadelha, M., Hold-Geoffroy, Y., Singh, K., Mitra, N.: Diffusion handles: Enabling 3d edits for diffusion models by lifting activations to 3d. arXiv preprint arXiv:2312.02190 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.799, + 0.786, + 0.84 + ], + "angle": 0, + "content": "36. 
Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.693, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeST" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.203 + ], + "angle": 0, + "content": "37. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.204, + 0.787, + 0.244 + ], + "angle": 0, + "content": "38. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 12179-12188 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.245, + 0.786, + 0.271 + ], + "angle": 0, + "content": "39. Richardson, E., Metzer, G., Alaluf, Y., Giryes, R., Cohen-Or, D.: Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.272, + 0.787, + 0.311 + ], + "angle": 0, + "content": "40. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.313, + 0.787, + 0.352 + ], + "angle": 0, + "content": "41. Sharma, P., Jampani, V., Li, Y., Jia, X., Lagun, D., Durand, F., Freeman, W.T., Matthews, M.: Alchemist: Parametric control of material properties with diffusion models. arXiv preprint arXiv:2312.02970 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.353, + 0.787, + 0.393 + ], + "angle": 0, + "content": "42. Sharma, P., Philip, J., Gharbi, M., Freeman, B., Durand, F., Deschaintre, V.: Materialistic: Selecting similar materials in images. ACM Transactions on Graphics (TOG) 42(4), 1-14 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.395, + 0.787, + 0.421 + ], + "angle": 0, + "content": "43. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.422, + 0.787, + 0.462 + ], + "angle": 0, + "content": "44. Subias, J.D., Lagunas, M.: In-the-wild material appearance editing using perceptual attributes. In: Computer Graphics Forum. vol. 42, pp. 333-345. Wiley Online Library (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.463, + 0.787, + 0.502 + ], + "angle": 0, + "content": "45. Upchurch, P., Niu, R.: A dense material segmentation dataset for indoor and outdoor scene parsing. In: European Conference on Computer Vision. pp. 450-466. Springer (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.503, + 0.787, + 0.53 + ], + "angle": 0, + "content": "46. Voynov, A., Chu, Q., Cohen-Or, D., Aberman, K.: \\( p+ \\): Extended textual conditioning in text-to-image generation. 
arXiv preprint arXiv:2303.09522 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.531, + 0.787, + 0.556 + ], + "angle": 0, + "content": "47. Wang, X., Darrell, T., Rambhatla, S.S., Girdhar, R., Misra, I.: Instance-diffusion: Instance-level control for image generation. arXiv preprint arXiv:2402.03290 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.557, + 0.787, + 0.611 + ], + "angle": 0, + "content": "48. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., et al.: Reco: Region-controlled text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14246-14255 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.612, + 0.787, + 0.652 + ], + "angle": 0, + "content": "49. Ye, H., Zhang, J., Liu, S., Han, X., Yang, W.: Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.653, + 0.787, + 0.707 + ], + "angle": 0, + "content": "50. Yeh, Y.Y., Huang, J.B., Kim, C., Xiao, L., Nguyen-Phuoc, T., Khan, N., Zhang, C., Chandraker, M., Marshall, C.S., Dong, Z., et al.: Texturedreamer: Image-guided texture synthesis through geometry-aware diffusion. arXiv preprint arXiv:2401.09416 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.708, + 0.787, + 0.748 + ], + "angle": 0, + "content": "51. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.749, + 0.787, + 0.789 + ], + "angle": 0, + "content": "52. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586-595 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.79, + 0.787, + 0.83 + ], + "angle": 0, + "content": "53. Zhao, S., Chen, D., Chen, Y.C., Bao, J., Hao, S., Yuan, L., Wong, K.Y.K.: Unictrlnet: All-in-one control to text-to-image diffusion models. 
Advances in Neural Information Processing Systems 36 (2024)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.83 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_origin.pdf b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..077fcf10b34b9b2e541a0a9d18c6c3cd042444e4 --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/17e0ba8e-78d4-4a9f-a1be-08d875a8aa70_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:723e6365716fd37e7b1bd8fc98eb5d2df95df25e46cf8ce674aa055eb4e298e1 +size 9248756 diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/full.md b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7da41a0eb473074c8c9da48d864c159dcdf716bf --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/full.md @@ -0,0 +1,327 @@ +# ZeST: Zero-Shot Material Transfer from a Single Image + +Ta-Ying Cheng $^{1,2}$ , Prafull Sharma $^{3}$ , Andrew Markham $^{1}$ , Niki Trigoni $^{1}$ , and Varun Jampani $^{2}$ + +1University of Oxford + +$^{2}$ Stability AI + +$^{3}$ MIT CSAIL + +![](images/0ccf3b5fbb8ae568fe5a4d284565a4ac400feb4430884775af6f00cd5777436e.jpg) +Fig. 1: Overview. We present ZeST, a zero-shot single-image approach to (a) transfer material from an exemplar image to an object in the input image. (b) ZeST can easily be extended to perform multiple material edits in an single image, and (c) perform implicit lighting-aware edits on rendering of a textured mesh. + +![](images/13158a1908a34fa45622aa16676426695e899ff5f634eccbada0e3b622f65912.jpg) + +Abstract. We propose ZeST, a method for zero-shot material transfer to an object in the input image given a material exemplar image. ZeST leverages existing diffusion adapters to extract implicit material representation from the exemplar image. This representation is used to transfer the material using pre-trained inpainting diffusion model on the object in the input image using depth estimates as geometry cue and grayscale object shading as illumination cues. The method works on real images without any training resulting a zero-shot approach. Both qualitative and quantitative results on real and synthetic datasets demonstrate that ZeST outputs photorealistic images with transferred materials. We also show the application of ZeST to perform multiple edits and robust material assignment under different illuminations. + +Project Page: https://ttchengab.github.io/zest + +# 1 Introduction + +Editing object materials in images (e.g., changing a marble statue into a steel statue) is useful for several graphics and design applications such as game design, e-commerce, etc. It is a highly challenging and time-consuming task even for expert artists and graphic designers - typically requires explicit 3D geometry and illumination estimation followed by careful tuning of the target material properties (e.g., metallic, roughness, transparency). Previous works try to alleviate the tedious material specification by synthesizing textures given input text prompts [39,50]. 
However, they are focused on texturing 3D meshes, which overlooks some of the unique challenges for material editing in 2D images, such as illumination. Another work [41] proposes fine-grained material editing on images, but it cannot directly transfer materials from a given exemplar. + +In this work, we aim to make 2D-to-2D material editing practical by eliminating the need for any 3D objects as well as explicit specification of material properties. Given a single image of an object and another material exemplar image, our goal is to transfer the material appearance from the exemplar to the target object directly in 2D. See Fig. 1 for some sample input and material exemplar images. We do not assume any access to the ground-truth 3D shapes, illumination, or even the material properties, making this problem setting practical and widely applicable for material editing. + +This setup is particularly challenging from two perspectives. First, an explicit approach to material transfer requires an understanding of many object-level properties in both the exemplar and the input image, such as geometry and illumination. Subsequently, we have to disentangle the material information from these properties and apply it to the new image; the entire process has several unsolved components. Second, there currently exists no real-world datasets for supervising this task. Collecting high-quality datasets presenting the same object with multiple materials and exemplars may be quite tedious. + +One of the main contributions of this work in alleviating these challenges is a zero-shot approach that can implicitly transfer arbitrary material appearances from a given 2D exemplar image onto a target 2D object image, without explicitly estimating any 3D or material properties from either image. We call our approach 'ZeST', as it does not require multiple exemplars or any training like previous works, making it easy to generalize to any images in the wild. + +With ZeST, we propose a carefully designed pipeline that repurposes several recent advances in 2D image generation and editing for our problem setting. At a high level, we adapt the geometry-guided generation (e.g., ControlNet [51]) and also exemplar-guided generation (e.g., IP-Adapter [49]) to implicitly isolate and transfer material appearance from a source exemplar to the target image while applying a foreground decolored image and inpainting for illumination cues. Our key contribution is presenting a simple pipeline with careful design choices that can be used to tackle a highly challenging problem of 2D-to-2D material transfer. + +Since this is a new problem setting, we created both synthetic and real-world evaluation datasets with material exemplars and object images. Extensive qualitative and quantitative evaluations demonstrate that ZeST excels in photo- + +realism and material accuracy in the output images when compared against various baselines while being completely training-free. See Fig. 1(a) for sample results of ZeST. With our pipeline, artists can grab pre-designed materials as material exemplars and directly transfer them to real-world images. By using different object masks, we can also use ZeST to cast different materials to multiple objects present in a single image (Fig. 1 (b)). In addition, with slight alteration of the inputs, ZeST can perform light-aware material transfer by changing the reflections while keeping textural patterns consistent (Fig. 
1 (c)); this method can have potential application when used in conjunction with 3D texture generation methods [10]. + +In summary, $ZeST$ has several favorable properties for material editing: + +- Zero-shot, training free, single-image material transfer. By leveraging 2D generative priors, ZeST works in a zero-shot manner without needing dataset finetuning. Unlike some contemporary works [50] that implicitly capture material properties using several material images, ZeST only needs a single material exemplar image to transfer the material in pixel space. + +- No explicit 3D, illumination or materials. With 2D depth and segmentation estimation (which are readily available these days) and implicit material transfer, we eliminate the need for explicit specification of 3D meshes, illumination or material properties (say, in terms of BRDF). + +- Several downstream applications. Given the simplistic and practical nature of our approach, ZeST can be used for several downstream graphics applications such as applying pre-designed materials to real-world images, editing multiple object materials in a single image, and perform lighting-aware material transfer given untextured mesh renderings. + +# 2 Related Work + +Diffusion Models. Denoising Diffusion Probabilistic models have emerged as the state-of-the-art for class-conditional and text-prompt conditioned image generation [18, 23-27, 43]. These models generate photorealistic images with exemplary geometry, materials, illumination, and scene composition. The models have been extended to be conditioned on input images for computational photography tasks such as super-resolution, style transfer, and inpainting. + +Further work demonstrate controllable generation conditioned on text-based instructions [8,20,22,46], semantic segmentation [4], bounding box [11,30,47,48], depth [6,53], sketch [34,51], and image prompt [49]. Prompt-to-prompt and Prompt+ edit the input image by performing inversion followed by the introduction of new terms and reweighting the effect of terms in the input prompt [22,46]. InstructPix2Pix performs edits an input image conditioned on an instruction [7]. Ge et al. proposed rich text based image editing allowing for style assignment and specific description to specific terms in the prompt [20]. While these methods edit the image semantically and high-level descriptions, assigning specific materials using text-based approach is challenging since text acts as a limiting modality for describing textures. + +A collection of reference images can be used to learn concepts which can be further included in text prompts to generate images with the learned concepts [12, 29, 40]. Spatial modalities such as depth and sketches have been used for controlling the generated images [34, 49, 51]. Pre-trained text-to-image models can be leveraged for 3D-aware image editing using language and depth cues [13, 33, 35]. The use of ControlNet has been extended by Bhat et al. to use depth for controlling the scene composition while maintaining other scene attributes [6]. Object orientation, illumination, and other object attributes can be controlled in a continuous manner using ControlNet and learned continuous tokens embedding the 3D properties [13]. + +Material acquisition and editing. Material acquisition and editing is an active field of research taking into account illumination and object geometry. Previous work has demonstrated material acquisition under known illumination conditions and camera [2,3,17]. 
Such acquisition in the wild requires localizing objects with similar materials, which has been facilitated by supervised material segmentation and leveraging pre-trained vision representation backbones [5,31,42,45]. Khan et al. introduced in-image material editing using estimates of depth [28]. Recent works have employed generative adversarial networks [21] for perceptual material editing [16, 44] and physical shader-based editing using text-to-image models [41]. The use of generative models has been extended to explicitly learning materials [32] and texturing 3D meshes [9, 10, 39, 50]. + +In our work, we aim to use pre-trained image generation diffusion models to perform exemplar-based material transfer from a single image. We aim to use ControlNet and IP-adapter to perform material transfer in a zero-shot way without any training. + +# 3 Method + +In this section, we describe our method ZeST that performs exemplar-based material transfer. Recent methods perform the related problem of texture synthesis on meshes [39,50] by finetuning a diffusion model on 3-5 material exemplar images to capture the texture/material in the latent space. On the contrary, ZeST only requires a single material exemplar image and a single input image, accomplishing material transfer in a zero-shot, training-free manner. + +# 3.1 Problem Setting + +Given a material exemplar image $M$ and an input image $I$ , we aim to output an edited image $I_{gen}$ from $I$ by transferring the material from the material exemplar to the object in the input image while preserving other object and scene properties (e.g. object geometry, background, lighting etc.). Performing this task requires understanding the material, geometry, and illumination from both the exemplar and the input image. + +In practice, estimating all the aforementioned object-level properties and further isolating material information explicitly from $M$ is challenging since these + +![](images/c9ffb1be6ad5f4a0031561bda8ba79984da35379db8ace660683b4fa68fc9eaa.jpg) +Fig. 2: ZeST Architecture. Given a material exemplar $M$ and an input image $I$ , we first encode material exemplar with an image encoder (e.g., IP-Adaptor). Concurrently, we convert the input image into a depth map $D_I$ and a foreground-grayscale image $I_{init}$ to feed into the geometry and latent illumination guidance branch, respectively. By combining the two sources of guidance with the latent features from the material encoding, ZeST can transfer the material properties onto the object in input image while preserving all other attributes. + +properties are entangled in the pixel space. Therefore, we propose to tackle this problem in the latent space of diffusion models. Specifically, we aim to extract a latent representation $z_{M}$ containing the material and texture information that we can then inject into a generative diffusion model $S$ to generate $I_{gen}$ . + +# 3.2 ZeST Overview + +Since there exists no synthetic/real image dataset to supervise the learning of a 2D-to-2D material transfer, we perform the material transfer in a zero-shot training-free manner. We first break down this complex task into sub-problems of (1) encoding the material exemplar, (2) geometry-guided image editing, and (3) making the generation process illumination-aware. 
Given the recent advances in high-fidelity diffusion models and complementary adapters for image generation, we leverage existing pre-trained modules to tackle each of the sub-problems that together compose our pipeline to perform image-prompted material editing. + +Figure 2 presents an overview of our pipeline, which comprises three branches to guide the material, geometry, and lighting information, respectively. The Material Encoding branch takes the material exemplar image $M$ as input, which is processed by the image encoder to obtain a material latent representation $z_{M}$. + +Concurrently, we feed the input image $I$ into the Geometry Guidance and Latent Illumination Guidance branches. The Geometry Guidance branch computes the depth map $D_I$ for the image $I$, which is used as the input to ControlNet. The Latent Illumination Guidance branch computes a foreground mask $F$ using $I$ and creates a foreground-grayscale image $I_{init}$, which we use as input to the + +![](images/dc6935d2a561cbfe10c6dad61839dae2bac28938bf7d8f47760b768d9c3628a7.jpg) +Material Exemplar + +![](images/1b944240321436b305e779b9ec8c5e4d64a1e884bc7216c713f967edd8fd8585.jpg) +Input Image + +![](images/6bd85701878cebfeba320524d9e9e75a29c3f0d34527ae6f86e5039f98770de3.jpg) +Estimated Depth (Optional) +Fig. 3: The design choice of IP-Adaptor with ControlNet. Given the material exemplar and the input image, we dive into the different choices of utilizing the IP-Adaptor. In particular, we observe that an Img2Img + text module (a) does not properly transfer the materials to the main object. On the other hand, ControlNet (b) preserves the geometry information of the given input. We thus utilize this as the starting point for geometry guidance to further explore the best illumination cues. + +![](images/22b4268e020d42b8c2fbee4e6eb6af9622fe89df468937ea572597296f609790.jpg) +IP-Adaptor Combinations +(a) $\mathrm{Img2Img + Text}$ + +![](images/3607dc967fec1149fcf655e858161e3f7399f396cd0b21c31fdf4f0527739638.jpg) +(b) ControlNet Model + +Diffusion Inpainting pipeline. We concatenate the embeddings from ControlNet with the inpainting diffusion model at the corresponding layers and inject the material embedding $z_{M}$ through the cross-attention layers. The output of the inpainting diffusion model, $I_{gen}$, is the edited image containing the object in $I$ cast with the material from exemplar image $M$. + +Our design choices to facilitate computation of material embedding, geometry guidance, and illumination cues are discussed in the following sections. + +# 3.3 Encoding Material Exemplar + +Given the material exemplar image $M$, this branch encodes the image into a latent representation while preserving its material properties. Previous works [39, 50] address this by finetuning a text-to-image diffusion model to encode the image into a rare token, implicitly treating the rare token as a latent representation that can be used in conjunction with other text prompts for image generation. However, this approach of optimizing for the material token requires a time-consuming finetuning step for every new material exemplar and usually requires 3-5 images to prevent overfitting. + +We draw inspiration from the recently introduced IP-Adapter [49]. The IP-Adapter uses a CLIP image encoder to extract image features that can be injected into a diffusion model via the cross-attention layers. These features can be used as an additional condition, alongside text prompts or other mediums, to guide the generation.
For example, one can input an image of a person and then describe "on the mountain" with text to obtain an image of the person in the mountains. + +However, we find that the IP-Adaptor does not work well for our task when combined with an Img2Img pipeline, as shown in Figure 3 (a). Moreover, adding text guidance like "changing the apple texture to golden bowl" does not produce photorealistic output and does not preserve other scene information (i.e. background). This problem of geometry and material entanglement within the material embedding $z_{M}$ remains unsolved, thus motivating the need for geometry and illumination guidance. + +# 3.4 Geometry Guidance via Depth Estimation + +Since decoupling geometry and material properties in images is challenging and requires additional training data, we provide an alternative solution in which we impose a stronger geometry prior on the diffusion model to overwrite the structural information present in $z_{M}$. To this end, we adopt a depth-based ControlNet to provide geometry guidance from the input image $I$. We observe that the geometry information from the depth map $D_{I}$ overwrites the geometry information encoded in $z_{M}$ (see Figure 3 (b)). Note that with the geometry enforced by the depth-based ControlNet, we can successfully transfer the golden material of the bowl to the apple. + +While the use of ControlNet with IP-Adaptor is introduced in the original IP-Adaptor paper [49], we employ it for a different purpose than applying new structural control over an object in the image (e.g., changing a person's pose). After extensively comparing various components for encoding the material exemplar and input image (analysis in Section 4.2), we find that the depth-based guidance from a pre-trained ControlNet helps us preserve the original geometry of the object for the task of material transfer. + +While the addition of ControlNet helps preserve the geometry, we observe that the results suffer from inconsistency in preserving the illumination and background of the input image. This is evident in Figure 3, where the background and the lighting differ from the input. + +# 3.5 Latent-space Illumination Guidance + +Our final branch is primarily responsible for preserving the illumination and background in the input image. We propose two-fold guidance for illumination in the latent space during generation: an inpainting module and a foreground decoloring process. In addition to the attached IP-Adaptor and ControlNet, we adopt an inpainting diffusion model $\mathcal{S}$ instead of a standard generator. Specifically, our ControlNet-inpainting procedure takes in four conditions for image generation: + +$$ +I_{gen} = \mathcal{S}\left(z_{M}, D_{I}, I_{init}, F\right), \tag{1} +$$ + +where $z_{M}$ is the material encoding, $D_{I}$ is the depth map computed for the input image $I$, $I_{init}$ is the initial image to denoise from, and $F$ is the foreground mask of the target object in $I$ which we are editing. + +We conduct an ablation on the various versions of $I_{init}$, as shown in Figure 4. Specifically, we test out the following settings: (1) using the original input image, (2) initializing the foreground with random noise, and (3) using the foreground-grayscale image.
Intuitively, directly letting $I_{init} = I$ (setting (1)) would be a preferable option, as $I$ encompasses implicit lighting information (from the object's shading and the surrounding environment) while conveniently enforcing all parts of the image other than the object to remain the same. In practice, however, we found that using the original image inevitably introduces a strong prior of the base color from the input object (e.g., the orange color of a pumpkin), which would be entangled with the material base color from $M$ in the output + +![](images/dfb0ba38f010079d23d3006d67be07164813a0db110694a17879009e1741743a.jpg) +Fig. 4: Ablating input for illumination guidance. To validate our design choice of the foreground-grayscale image for initializing inpainting, we compare the generated results against using the original image and random noise as inputs. The original image presents a strong base color prior that perturbs the generation, while the random image neglects shading information, leading to wrong lighting in both examples. + +![](images/bd3ab1897a25bc22014e0737d3919a802c5920777df6996f1bfca69166cddb85.jpg) + +image. This artifact persists even when we significantly extend the number of denoising steps. On the other hand, when initializing $I_{init}$ with random noise, the method indeed removes the base color prior but also removes the shading information, causing incorrect illumination in the synthesized object (e.g., the left side of the synthesized pumpkin is darker, but light is coming from the left). In our proposed pipeline, we instead perform a grayscale operation in the pixel space for the object region (setting (3)). This provides a balanced solution: it removes the strong color prior from the input image while keeping the shading cues for the inpainting diffusion model. + +Thus, we propose to initialize $I_{init}$ as: + +$$ +I_{init} = F \odot I_{gray} + (1 - F) \odot I, \tag{2} +$$ + +which converts the color of the foreground object in the image to grayscale. $(1 - F)\odot I$ implicitly preserves the lighting direction, intensity, and color information, and $F\odot I_{gray}$ preserves the object's shading information without the base color prior. + +# 3.6 Implementation Details + +We implement our method using Stable Diffusion XL Inpainting [36] with the corresponding version of depth-based ControlNet [51] and IP-Adaptor [49]. We use Dense Prediction Transformers for depth estimation [38] and $\mathrm{Rembg}^1$ for foreground extraction. Our method is implemented in PyTorch and runs on a single Nvidia A-10 GPU with 24 GB of RAM. For all Dreambooth approaches, we use the official LoRA-Dreambooth provided by Diffusers. + +# 4 Experiments + +We evaluate the efficacy of our method against various baselines. We also present several examples of downstream applications using our method.
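To make the pieces above concrete, the following is a minimal sketch (not the authors' released code) of how the three guidance signals can be wired together with off-the-shelf components from the Hugging Face diffusers library. The model identifiers, IP-Adapter weight name, placeholder file paths, and parameter values are illustrative assumptions, and the sketch further assumes a recent diffusers version whose SDXL ControlNet inpainting pipeline supports `load_ip_adapter` and an `ip_adapter_image` argument.

```python
# Hypothetical sketch of a ZeST-style call: depth ControlNet for geometry,
# IP-Adapter features for the material exemplar, and SDXL inpainting with a
# foreground-grayscale init image for latent illumination guidance.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline


def make_init_image(image: Image.Image, fg_mask: np.ndarray) -> Image.Image:
    """Eq. (2): grayscale the foreground object, keep the background untouched."""
    rgb = np.asarray(image.convert("RGB"), dtype=np.float32)
    gray = np.asarray(image.convert("L"), dtype=np.float32)[..., None].repeat(3, axis=-1)
    f = fg_mask.astype(np.float32)[..., None]             # F in {0, 1}, shape HxWx1
    init = f * gray + (1.0 - f) * rgb                     # F * I_gray + (1 - F) * I
    return Image.fromarray(init.clip(0, 255).astype(np.uint8))


# Placeholder inputs: the depth map and foreground mask are assumed to be
# precomputed (e.g. with a DPT depth estimator and Rembg, as in Sec. 3.6).
image = Image.open("input.png")
material_exemplar = Image.open("exemplar.png")
depth_map = Image.open("depth.png")
fg_mask = (np.asarray(Image.open("mask.png").convert("L")) > 127).astype(np.uint8)

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")

edited = pipe(
    prompt="",                                   # material comes from the exemplar, not text
    image=make_init_image(image, fg_mask),       # I_init: foreground-grayscale input image
    mask_image=Image.fromarray((fg_mask * 255).astype(np.uint8)),  # F: inpaint only the object
    control_image=depth_map,                     # D_I: geometry guidance
    ip_adapter_image=material_exemplar,          # z_M: implicit material representation
    num_inference_steps=30,
).images[0]
edited.save("output.png")
```

The empty text prompt is deliberate: in this setup the material conditioning enters purely through the IP-Adapter image features, while the depth ControlNet and the inpainting mask keep the geometry, background, and illumination of the input image fixed.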
+ +![](images/6178885b58e0aebe464438f43e9c3250ffa0e9a3880bac37227b27afca8c6b0c.jpg) + +![](images/2d0fc42514cc3fbc8bd7bf0be413ae7c1d2c538cc0318303c56a676653ac0d22.jpg) + +![](images/76b79bb3a122c5a49ecc392b43b15a82d113f1b4cccbc1c3be77833fafad970b.jpg) + +![](images/1e8c4e41f24bd08f92dc4069b7f70da0ae9e1c95f725a6e98bea54d1d91cd1b7.jpg) + +![](images/2e143f320104a5c8a57d4e2dc3ed1e482e8eb5da770c0cda4f4268012aea2ffa.jpg) + +![](images/0445dbb71d03599314859e6f8a6c286195d1164eca21031cd037659f19aa8afe.jpg) + +![](images/6f3c47923f0b4dafc07d9fc88a650e75ea78daeedf83e51d43c259a325a22dc0.jpg) + +![](images/93803b5b6995bc32b9adab9fbb113e7a290073a76843ee23e5dba0eb6786fe92.jpg) + +![](images/186649ce998e725587dfee773e43632daebb768cd49311e183e748da0cca013f.jpg) + +![](images/531ebe7a8d83b8995883edd1b18b40520c6628c0171f9169a097658adbc2bf17.jpg) + +![](images/0dbe4f708f07dfe47e469f0d00252c9e37ba9cd3a46eab4ca69611c995eae68c.jpg) + +![](images/b6a8f5b397dcf37daceefe3534854ae8048b8a4051b13ed7c04b52130db20368.jpg) + +![](images/8c32fd0fca2c10a4c927731af791fb25aa213d84db7559f2b05df532f66af22b.jpg) +Fig. 5: Qualitative results on diverse materials. We present results of material transfer from a diverse set of material exemplar images. Even when perturbed by lighting and complex geometry, ZeST can still isolate the material information from the exemplar image and transfer to various objects while preserving the original geometry and illumination conditions. Note the change in specular regions as shinier materials are chosen in the case of the car made of brass and the dinosaur made of shiny steel. + +![](images/6df8986d7a2c797643fc9f431d4ea05abe77b8551f0173a5957fbd6cafa9aabf.jpg) + +![](images/7fd1b6458b6c16d311d0465d6b99f3421dcf8389233a81fa292fa46e79f937e3.jpg) + +![](images/5ef8b5d0b27fa8b2b4474859f925f23eedf603952baab612f5de50bff3fc532e.jpg) + +# 4.1 Datasets + +As the first to propose this problem, we create two datasets for comparison and evaluation. The real-world datasets provide us an understanding of our model's robustness, while the synthetic dataset is used for standard quantitative metrics. + +Real-World Dataset. We curate a dataset comprising of 30 diverse material exemplars and 30 input images, collected from copyright-free image sources (i.e. Unsplash) and images generated by DALLE-3. All of these images are object-centric, where there exists a main object in the foreground to which we are extracting the material from or applying the material onto. + +Synthetic Dataset. To perform quantitative evaluation, we use Blender to create a synthesized dataset of 9 materials randomly initialized by adjusting the base color, metallic, and roughness, and 20 meshes of different categories from Objaverse [15] rendered at three random viewpoints each, generating 540 ground-truth renderings. We render spheres assigned with each material individually and use the rendered image the material exemplar and pre-textured mesh rendering as input for all methods. + +While $ZeST$ is completely training-free, other methods of learning materials (e.g., Dreambooth) require further fine-tuning for every exemplar given. This + +![](images/866749292d78fd8febbf034a984acceac97a6c8424f47742c294fa09e6cadf35.jpg) +Fig. 6: Qualitative comparisons against baselines. Given the material exemplar and input image in the first column, we compare our method to five different baselines. Without any geometry guidance, all image editing baselines fail to impose the correct geometry of the input image. 
On the other hand, using Dreambooth with our geometry and illumination guidance often contains albedo shifts, potentially due to information loss when encoding material properties into a word token. + +makes it infeasible to scale up the two datasets. Both our datasets are of comparable sizes to previous works on finetuning diffusion models [40, 50]. + +# 4.2 Qualitative Results + +Material transfer results on real images. To demonstrate the application of ZeST on a wide range of materials and objects, we present examples of material transfer in Figure 5. The first three rows present results on real-world images, while the fourth row shows results using PBR materials [1]. Based on the examples, we observe that the material is properly disentangled from the geometry in the material exemplar and follows the shape of the object in the input image. This is particularly evident in the results of the orange, frog, and Groot toy figure, where the material is completely flat. We also notice accurate shadings in the bust and table examples when comparing them against their inputs. In the car and toy dinosaur examples, the reflections from the exemplars are isolated from the textural patterns and cast reasonably based on the illumination cues. + +Qualitative comparisons. Since our work is the first to perform material transfer in latent space, we modified existing methods to compare against. Specifically, since existing image-guided texture synthesis methods utilize Dreambooth for their first step to encode the textures from images into word tokens [14,39,50], + +we set Dreambooth as the backbone for learning material properties and combine with text-guided image editing techniques for comparison, including MasaCtrl and Instruct-Pix2Pix, and using ZeST but swapping out the IP-Adaptor with text. While our method is training-free, Dreambooth requires finetuning for every material exemplar given. We also explore alternative options to combine with IP-Adaptor, including text-guided inpainting and Instruct-Pix2Pix with the prompt "Change the texture of the object". + +We present qualitative comparisons against the baselines on four exemplar and input images in Figure 6. By using Inpainting with Text prompt instead of ControlNet, the model ignores the geometry of the original input when casting the materials. In both cases when using Instruct-Pix2Pix (with IP-Adaptor or Dreambooth), the geometry of all objects is better preserved, but the model fails to capture the material property from the material exemplar image. The combination of Dreambooth and MasaCtrl fails to preserve the geometry of the object in the input image and misattributes the material. The closest baseline to ours is Dreambooth with our proposed geometry and illumination guidance; however, we observe that the word encoding process results in some information loss as evident in the color shifts of the backpack and the astronaut figure. Furthermore, the method requires additional training for every material exemplar, whereas ZeST takes roughly 15 seconds to generate the image. + +Our method, ZeST, performs the task effectively by retaining the object geometry, scene illumination, and attributing the material correctly. Additionally, note that ZeST adapts to more challenging material exemplar images, such as transparent materials (glass cup in Figure 6 Row 3) and images with other minor objects (additional hand in Figure 6 Row 4). 
+ +# 4.3 Quantitative Comparisons + +We follow previous work [41, 50] and use the synthetic images to compare all methods in terms of PSNR, LPIPS [52], and CLIP similarity score [37] against ground truth renderings. We also incorporate another DreamSim [19], a more recent metric that is more similar to human references. We grab IP-Adaptor + Instruct-Pix2Pix and Dreambooth + our geometry and illumination guidance as baselines, as they are the strongest (and only) performers from our qualitative comparisons that can roughly edit the material based on the geometry. + +Table 1 (left) presents our results. We see a dramatic improvement when shifting from the instruct-pix2pix pipeline to our geometry and illumination guidance. While using Dreambooth performs similarly to our IP-Adaptor in the synthetic dataset, it requires a fine-tuned model for each material exemplar, making it unfeasible to scale up. In addition, we show in the next section that our method excels in real-world datasets. + +- **User Study.** We also create a user study with 16 participants to understand the capability of our model given real-world materials tested on real images. Each subject is shown 5 random samples from the 900 combinations generated from the dataset with our method and against the two strongest baselines: Dreambooth + ControlNet-Inpainting and IP-Adaptor + Instruct-Pix2Pix. We ask + +Table 1: Quantitative Comparisons and User Study. We grab the strongest baselines in our qualitative comparisons for additional studies. Left: We measure the PSNR, LPIPS [52], CLIP similarity score [37], and DreamSim [19] in a quantitative study on the synthetic dataset of 540 exemplar-input combinations. Right: We perform a user study to evaluate the material fidelity and photorealism of the edited images from each method. We randomly sample 5 out of 900 real-world exemplar-input combinations for each of the 16 participants. + +
| Method | PSNR↑ | LPIPS↓ | CLIP↑ | DreamSim↓ |
| --- | --- | --- | --- | --- |
| IP-Adaptor + Instruct-Pix2Pix | 17.08 | 0.099 | 0.740 | 0.390 |
| DB + Our Geo/illum. Guidance | 25.52 | 0.058 | 0.874 | 0.238 |
| Ours | 25.59 | 0.053 | 0.883 | 0.198 |

| Method | Fidelity↑ | Photorealism↑ |
| --- | --- | --- |
| IP-Adaptor + Instruct-Pix2Pix | 1.48 | |
| DB + Our Geo/illum. Guidance | 3.25 | |
| Ours | 4.05 | |
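For the synthetic benchmark in the left half of Table 1, the per-image metrics can be computed roughly as in the sketch below; PSNR and LPIPS are shown, and the CLIP similarity and DreamSim scores follow the same pattern with their respective encoders. The file paths are placeholders, and the AlexNet LPIPS backbone is an assumption since the backbone is not specified in the text.

```python
# Sketch of per-image PSNR and LPIPS against the ground-truth Blender rendering,
# as used for the synthetic evaluation (Table 1, left). Paths are placeholders.
import numpy as np
import torch
import lpips
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio


def to_lpips_tensor(img: Image.Image) -> torch.Tensor:
    # HxWx3 uint8 -> 1x3xHxW float in [-1, 1], the input range expected by lpips
    x = torch.from_numpy(np.array(img.convert("RGB"))).float() / 255.0
    return x.permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0


pred = Image.open("edited.png")          # image produced by the material transfer
gt = Image.open("ground_truth.png")      # Blender rendering with the target material

psnr = peak_signal_noise_ratio(
    np.array(gt.convert("RGB")), np.array(pred.convert("RGB")), data_range=255)

lpips_fn = lpips.LPIPS(net="alex")       # assumed backbone choice
lpips_val = lpips_fn(to_lpips_tensor(pred), to_lpips_tensor(gt)).item()

print(f"PSNR: {psnr:.2f} dB  LPIPS: {lpips_val:.3f}")
```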
+ +![](images/03d604531205a5909388ad7508cf0ff37bac1bb7093e4759877b8ae88b282597.jpg) +Fig. 7: Robustness to lighting and object pose. We present two types of robustness testing. (a): Robustness to changing the material exemplar lighting and pose. (b): Zooming into the material exemplar. Our model yields highly similar results in both, showing the capability to adapt to these external changes. + +![](images/cb49cb9c9605a9a92ac337bb38c19cc241d7c49656a3731416fda08b5411ae9a.jpg) + +each subject to rate each image from 1 to 5 based on (1) material fidelity: how close the material in the generated image is compared to the original exemplar and (2) photorealism: how realistic the generated image is. Our results are summarized in Table 1 (right). + +Our results show significant improvements from the two baselines in both material fidelity and photorealism of the edited image. The score improvements are also greater in real-world scenarios compared to synthetic ones. This could be the result of information loss during finetuning and overfitting to the exemplar background, which is less significant under controlled synthetic scenarios. + +# 4.4 Robustness of the Model + +In addition to the diverse set of results presented in Figure 5, we extensively test out the behavior of ZeST with special cases of material exemplar images. + +Relighting and rotating the object in the material exemplar image. A good material extractor should be agnostic to small lighting and rotation changes of the same object used as the material exemplar. To evaluate this, we render a random material and cast it onto an irregular-shaped pumpkin (another example is in the Appendix). We then render three samples of the pumpkin, a default lighting orientation, a change in lighting direction pitch by 120 degrees, and a random rotation, as shown in 7 (a). The transferred materials onto the dolphin + +![](images/f39176c7e08a26ed040d85404e355880ce6e9ad7c4f78e0c132486ae8f94f358.jpg) +Fig. 8: Multiple Material Transfers in a Single Image. By replacing the foreground extraction with an open-vocabulary segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be applied iteratively to cast different material properties to different objects in a single RGB image. + +![](images/3480b3bf84ce0fab480ff7cf033fb974d797600a4e79a1f3d9370c347579539d.jpg) + +![](images/ac3d6242c2d56fd8cff2a791c5dd6db9e50c30964736d667ad2e566cc244c101.jpg) +Fig.9: Lighting-aware Image Editing. Given a rendering of a textured mesh, we can alter $ZeST$ slightly to achieve lighting-aware material edit. It can be seen from both examples where the reflection can be disentangled from the object texture. + +remain roughly consistent across all samples, showing that our method is fairly resistant to these changes at a small scale. + +Effect of image scale of material exemplar image. To examine the effect of the scale of the material exemplar, we first use an image of a woolen cloth material with a distinctive repeating pattern and apply our method to an image of a chair. Then, we zoom into the exemplar image manually to the edge only very few repeated patterns are left. Our results in Figure 7 (b) show that while the scale of the material is drastically different, the model automatically re-adjusts the patterns into a reasonable size to be cast onto the input image. + +# 4.5 Applications + +Applying multiple materials to multiple objects. 
By replacing the foreground extraction with a segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be used to iteratively change multiple materials in a single image. Figure 8 presents two examples of editing multiple objects in a single image. As evident in the transparent glass chair where the wooden table behind is roughly visible, ZeST generalizes to complex scenes with multiple objects. + +Lighting-aware Material Transfer. Given a material exemplar image and an untextured mesh rendered under multiple illumination conditions, $ZeST$ can also perform lighting-aware material transfer. Specifically, we first generate the + +![](images/2bf2c1a730e8725f327018ba985c9482e8a8ecc413c023c737abf0387ae84527.jpg) +Fig. 10: Limitations. Our method primarily fails in two modes. (a) The model sometimes picks the most "probable" areas to transfer the material, instead of casting the material on the entire object. (b) If two textures are present in the exemplar image (e.g., foreground and background of the tennis ball, the glazed top and bottom logo of the cup), the model sometimes combine both materials when performing the edit. + +![](images/853490493141d119b79cb7ae57133a2f1ebedb10f4b4f16f3f15045c2665a7b8.jpg) + +materials and textures of the image under Lighting 1 using ZeST. Then, by fixing the same seed during generation and using the generating image given the first lighting as the input to the second, we can enforce consistency in the material and texture generated (details of implementation in Appendix) while changing the reflections. We show examples of transferring the glazed cup material to two mesh renders in Figure 9. ZeST successfully disentangles the reflections while keeping most textural patterns consistent between the two images. This technique could potentially be applied jointly with other 3D texture synthesis works [10] and be helpful to applications such as e-commerce design. + +# 4.6 Limitations + +Since $ZeST$ operates majorly in the latent space, the model sometimes exhibits uncontrollable behaviors based on its image understanding. Figure 10 presents two forms of more frequent failure cases: (a) Partial material transfer: the material is only transferred to parts instead of the entirety of the object. We hypothesize that the failure stems from the entanglement of material properties and the exemplar's identity, as the material is only applied to where it seems the most probable (e.g., only apply the jacket material to the statue's body). (b) Blending multiple materials: since the current IP-Adaptor does not have a module to extract regions of an image for material transfer, $ZeST$ sometimes mixes up multiple materials in the exemplar image during transfer. + +# 5 Conclusion + +We present ZeST, a zero-shot, training-free method for exemplar-based material-editing. ZeST is built completely using readily available pre-trained models and demonstrates generalizable and robust results on real images. We curate synthetic and real image datasets to evaluate the performance of our approach. We also demonstrate downstream applications like multiple edits in a single image and material-aware relighting. ZeST serves as a strong starting point for future research in image-to-image material transfer, implying opportunities of leveraging pre-trained image diffusion models for complex graphic designing tasks. + +# References + +1. https://wwwtexts.com/browse/pbr-materials/114558 +2. Aittala, M., Weyrich, T., Lehtinen, J.: Practical svbrdf capture in the frequency domain. 
ACM Trans. Graph. 32(4), 110-1 (2013) +3. Aittala, M., Weyrich, T., Lehtinen, J., et al.: Two-shot svbrdf capture for stationary materials. ACM Trans. Graph. 34(4), 110-1 (2015) +4. Bar-Tal, O., Yariv, L., Lipman, Y., Dekel, T.: Multidiffusion: Fusing diffusion paths for controlled image generation (2023) +5. Bell, S., Upchurch, P., Snavely, N., Bala, K.: Material recognition in the wild with the materials in context database. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3479-3487 (2015) +6. Bhat, S.F., Mitra, N.J., Wonka, P.: Loosecontrol: Lifting controlnet for generalized depth conditioning. arXiv preprint arXiv:2312.03079 (2023) +7. Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18392-18402 (2023) +8. Cao, M., Wang, X., Qi, Z., Shan, Y., Qie, X., Zheng, Y.: Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. arXiv preprint arXiv:2304.08465 (2023) +9. Cao, T., Kreis, K., Fidler, S., Sharp, N., Yin, K.: Texfusion: Synthesizing 3d textures with text-guided image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4169-4181 (2023) +10. Chen, D.Z., Siddiqui, Y., Lee, H.Y., Tulyakov, S., Nießner, M.: Text2tex: Text-driven texture synthesis via diffusion models. arXiv preprint arXiv:2303.11396 (2023) +11. Chen, M., Laina, I., Vedaldi, A.: Training-free layout control with cross-attention guidance. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 5343-5353 (2024) +12. Chen, W., Hu, H., Li, Y., Ruiz, N., Jia, X., Chang, M.W., Cohen, W.W.: Subject-driven text-to-image generation via apprenticeship learning. Advances in Neural Information Processing Systems 36 (2024) +13. Cheng, T.Y., Gadelha, M., Groueix, T., Fisher, M., Mech, R., Markham, A., Trigoni, N.: Learning continuous 3d words for text-to-image generation. arXiv preprint arXiv:2402.08654 (2024) +14. Corneanu, C., Gadde, R., Martinez, A.M.: Latentpaint: Image inpainting in latent space with diffusion models. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 4334-4343 (2024) +15. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A universe of annotated 3d objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13142-13153 (2023) +16. Delanoy, J., Lagunas, M., Condor, J., Gutierrez, D., Masia, B.: A generative framework for image-based editing of material appearance using perceptual attributes. In: Computer Graphics Forum. vol. 41, pp. 453-464. Wiley Online Library (2022) +17. Deschaintre, V., Aittala, M., Durand, F., Drettakis, G., Bousseau, A.: Flexible svbrdf capture with a multi-image deep network. In: Computer graphics forum. vol. 38, pp. 1-13. Wiley Online Library (2019) +18. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34, 8780-8794 (2021) + +19. Fu*, S., Tamir*, N., Sundaram*, S., Chai, L., Zhang, R., Dekel, T., Isola, P.: Dreamsim: Learning new dimensions of human visual similarity using synthetic data. NeurIPS (2023) +20. Ge, S., Park, T., Zhu, J.Y., Huang, J.B.: Expressive text-to-image generation with rich text.
In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7545-7556 (2023) +21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139-144 (2020) +22. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022) +23. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020) +24. Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M., Salimans, T.: Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research 23(1), 2249-2281 (2022) +25. Ho, J., Salimans, T.: Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598 (2022) +26. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10124-10134 (2023) +27. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022) +28. Khan, E.A., Reinhard, E., Fleming, R.W., Bülthoff, H.H.: Image-based material editing. ACM Transactions on Graphics (TOG) 25(3), 654-663 (2006) +29. Kumari, N., Zhang, B., Zhang, R., Shechtman, E., Zhu, J.Y.: Multi-concept customization of text-to-image diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1931-1941 (2023) +30. Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22511-22521 (2023) +31. Liang, Y., Wakaki, R., Nobuhara, S., Nishino, K.: Multimodal material segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 19800-19808 (2022) +32. Lopes, I., Pizzati, F., de Charette, R.: Material palette: Extraction of materials from a single image. arXiv preprint arXiv:2311.17060 (2023) +33. Michel, O., Bhattad, A., VanderBilt, E., Krishna, R., Kembhavi, A., Gupta, T.: Object 3dit: Language-guided 3d-aware image editing. Advances in Neural Information Processing Systems 36 (2024) +34. Mou, C., Wang, X., Xie, L., Zhang, J., Qi, Z., Shan, Y., Qie, X.: T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453 (2023) +35. Pandey, K., Guerrero, P., Gadelha, M., Hold-Geoffroy, Y., Singh, K., Mitra, N.: Diffusion handles: Enabling 3d edits for diffusion models by lifting activations to 3d. arXiv preprint arXiv:2312.02190 (2023) +36. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023) + +37. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021) +38. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. 
In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 12179-12188 (2021) +39. Richardson, E., Metzer, G., Alaluf, Y., Giryes, R., Cohen-Or, D.: Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721 (2023) +40. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242 (2022) +41. Sharma, P., Jampani, V., Li, Y., Jia, X., Lagun, D., Durand, F., Freeman, W.T., Matthews, M.: Alchemist: Parametric control of material properties with diffusion models. arXiv preprint arXiv:2312.02970 (2023) +42. Sharma, P., Philip, J., Gharbi, M., Freeman, B., Durand, F., Deschaintre, V.: Materialistic: Selecting similar materials in images. ACM Transactions on Graphics (TOG) 42(4), 1-14 (2023) +43. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019) +44. Subias, J.D., Lagunas, M.: In-the-wild material appearance editing using perceptual attributes. In: Computer Graphics Forum. vol. 42, pp. 333-345. Wiley Online Library (2023) +45. Upchurch, P., Niu, R.: A dense material segmentation dataset for indoor and outdoor scene parsing. In: European Conference on Computer Vision. pp. 450-466. Springer (2022) +46. Voynov, A., Chu, Q., Cohen-Or, D., Aberman, K.: $p+$ : Extended textual conditioning in text-to-image generation. arXiv preprint arXiv:2303.09522 (2023) +47. Wang, X., Darrell, T., Rambhatla, S.S., Girdhar, R., Misra, I.: Instance-diffusion: Instance-level control for image generation. arXiv preprint arXiv:2402.03290 (2024) +48. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., et al.: Reco: Region-controlled text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14246-14255 (2023) +49. Ye, H., Zhang, J., Liu, S., Han, X., Yang, W.: Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721 (2023) +50. Yeh, Y.Y., Huang, J.B., Kim, C., Xiao, L., Nguyen-Phuoc, T., Khan, N., Zhang, C., Chandraker, M., Marshall, C.S., Dong, Z., et al.: Texturedreamer: Image-guided texture synthesis through geometry-aware diffusion. arXiv preprint arXiv:2401.09416 (2024) +51. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3836-3847 (2023) +52. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586-595 (2018) +53. Zhao, S., Chen, D., Chen, Y.C., Bao, J., Hao, S., Yuan, L., Wong, K.Y.K.: Unictrlnet: All-in-one control to text-to-image diffusion models. 
Advances in Neural Information Processing Systems 36 (2024) \ No newline at end of file diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/images.zip b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..94c0b42a14a44ba030cc097d91c9c4da967cb934 --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca14760b64ed9a87ce49daeb1e04a7bcbd9921ec38ce4004d2c413f501f6fa84 +size 563801 diff --git a/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/layout.json b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..444c14f2c40646e2dacacef4969eaa291778f95b --- /dev/null +++ b/2024/ZeST_ Zero-Shot Material Transfer from a Single Image/layout.json @@ -0,0 +1,9381 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 184, + 112, + 430, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 112, + 430, + 148 + ], + "spans": [ + { + "bbox": [ + 184, + 112, + 430, + 148 + ], + "type": "text", + "content": "ZeST: Zero-Shot Material Transfer from a Single Image" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "spans": [ + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "content": "Ta-Ying Cheng" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "content": ", Prafull Sharma" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "content": ", Andrew Markham" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "content": ", Niki Trigoni" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "text", + "content": ", and Varun Jampani" + }, + { + "bbox": [ + 181, + 167, + 432, + 194 + ], + "type": "inline_equation", + "content": "^{2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 187, + 201, + 277, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 201, + 277, + 213 + ], + "spans": [ + { + "bbox": [ + 187, + 201, + 277, + 213 + ], + "type": "text", + "content": "1University of Oxford" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 296, + 201, + 351, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 296, + 201, + 351, + 213 + ], + "spans": [ + { + "bbox": [ + 296, + 201, + 351, + 213 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 296, + 201, + 351, + 213 + ], + "type": "text", + "content": "Stability AI" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 370, + 201, + 425, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 201, + 425, + 213 + ], + "spans": [ + { + "bbox": [ + 370, + 201, + 425, + 213 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 370, + 201, + 425, + 213 + ], + "type": "text", + 
"content": "MIT CSAIL" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 147, + 240, + 299, + 434 + ], + "blocks": [ + { + "bbox": [ + 147, + 240, + 299, + 434 + ], + "lines": [ + { + "bbox": [ + 147, + 240, + 299, + 434 + ], + "spans": [ + { + "bbox": [ + 147, + 240, + 299, + 434 + ], + "type": "image", + "image_path": "0ccf3b5fbb8ae568fe5a4d284565a4ac400feb4430884775af6f00cd5777436e.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 442, + 482, + 487 + ], + "lines": [ + { + "bbox": [ + 130, + 442, + 482, + 487 + ], + "spans": [ + { + "bbox": [ + 130, + 442, + 482, + 487 + ], + "type": "text", + "content": "Fig. 1: Overview. We present ZeST, a zero-shot single-image approach to (a) transfer material from an exemplar image to an object in the input image. (b) ZeST can easily be extended to perform multiple material edits in an single image, and (c) perform implicit lighting-aware edits on rendering of a textured mesh." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 310, + 240, + 465, + 434 + ], + "blocks": [ + { + "bbox": [ + 310, + 240, + 465, + 434 + ], + "lines": [ + { + "bbox": [ + 310, + 240, + 465, + 434 + ], + "spans": [ + { + "bbox": [ + 310, + 240, + 465, + 434 + ], + "type": "image", + "image_path": "13158a1908a34fa45622aa16676426695e899ff5f634eccbada0e3b622f65912.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 159, + 523, + 453, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 523, + 453, + 654 + ], + "spans": [ + { + "bbox": [ + 159, + 523, + 453, + 654 + ], + "type": "text", + "content": "Abstract. We propose ZeST, a method for zero-shot material transfer to an object in the input image given a material exemplar image. ZeST leverages existing diffusion adapters to extract implicit material representation from the exemplar image. This representation is used to transfer the material using pre-trained inpainting diffusion model on the object in the input image using depth estimates as geometry cue and grayscale object shading as illumination cues. The method works on real images without any training resulting a zero-shot approach. Both qualitative and quantitative results on real and synthetic datasets demonstrate that ZeST outputs photorealistic images with transferred materials. We also show the application of ZeST to perform multiple edits and robust material assignment under different illuminations." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 160, + 654, + 351, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 654, + 351, + 665 + ], + "spans": [ + { + "bbox": [ + 160, + 654, + 351, + 665 + ], + "type": "text", + "content": "Project Page: https://ttchengab.github.io/zest" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 139, + 481, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 139, + 481, + 270 + ], + "spans": [ + { + "bbox": [ + 130, + 139, + 481, + 270 + ], + "type": "text", + "content": "Editing object materials in images (e.g., changing a marble statue into a steel statue) is useful for several graphics and design applications such as game design, e-commerce, etc. It is a highly challenging and time-consuming task even for expert artists and graphic designers - typically requires explicit 3D geometry and illumination estimation followed by careful tuning of the target material properties (e.g., metallic, roughness, transparency). Previous works try to alleviate the tedious material specification by synthesizing textures given input text prompts [39,50]. However, they are focused on texturing 3D meshes, which overlooks some of the unique challenges for material editing in 2D images, such as illumination. Another work [41] proposes fine-grained material editing on images, but it cannot directly transfer materials from a given exemplar." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 270, + 481, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 270, + 481, + 366 + ], + "spans": [ + { + "bbox": [ + 130, + 270, + 481, + 366 + ], + "type": "text", + "content": "In this work, we aim to make 2D-to-2D material editing practical by eliminating the need for any 3D objects as well as explicit specification of material properties. Given a single image of an object and another material exemplar image, our goal is to transfer the material appearance from the exemplar to the target object directly in 2D. See Fig. 1 for some sample input and material exemplar images. We do not assume any access to the ground-truth 3D shapes, illumination, or even the material properties, making this problem setting practical and widely applicable for material editing." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 366, + 481, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 366, + 481, + 462 + ], + "spans": [ + { + "bbox": [ + 130, + 366, + 481, + 462 + ], + "type": "text", + "content": "This setup is particularly challenging from two perspectives. First, an explicit approach to material transfer requires an understanding of many object-level properties in both the exemplar and the input image, such as geometry and illumination. Subsequently, we have to disentangle the material information from these properties and apply it to the new image; the entire process has several unsolved components. Second, there currently exists no real-world datasets for supervising this task. 
Collecting high-quality datasets presenting the same object with multiple materials and exemplars may be quite tedious." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 462, + 481, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 462, + 481, + 533 + ], + "spans": [ + { + "bbox": [ + 130, + 462, + 481, + 533 + ], + "type": "text", + "content": "One of the main contributions of this work in alleviating these challenges is a zero-shot approach that can implicitly transfer arbitrary material appearances from a given 2D exemplar image onto a target 2D object image, without explicitly estimating any 3D or material properties from either image. We call our approach 'ZeST', as it does not require multiple exemplars or any training like previous works, making it easy to generalize to any images in the wild." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 533, + 481, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 481, + 629 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 481, + 629 + ], + "type": "text", + "content": "With ZeST, we propose a carefully designed pipeline that repurposes several recent advances in 2D image generation and editing for our problem setting. At a high level, we adapt the geometry-guided generation (e.g., ControlNet [51]) and also exemplar-guided generation (e.g., IP-Adapter [49]) to implicitly isolate and transfer material appearance from a source exemplar to the target image while applying a foreground decolored image and inpainting for illumination cues. Our key contribution is presenting a simple pipeline with careful design choices that can be used to tackle a highly challenging problem of 2D-to-2D material transfer." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 629, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 629, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 629, + 481, + 665 + ], + "type": "text", + "content": "Since this is a new problem setting, we created both synthetic and real-world evaluation datasets with material exemplars and object images. Extensive qualitative and quantitative evaluations demonstrate that ZeST excels in photo-" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": "realism and material accuracy in the output images when compared against various baselines while being completely training-free. See Fig. 1(a) for sample results of ZeST. With our pipeline, artists can grab pre-designed materials as material exemplars and directly transfer them to real-world images. 
By using different object masks, we can also use ZeST to cast different materials to multiple objects present in a single image (Fig. 1 (b)). In addition, with slight alteration of the inputs, ZeST can perform light-aware material transfer by changing the reflections while keeping textural patterns consistent (Fig. 1 (c)); this method can have potential application when used in conjunction with 3D texture generation methods [10]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 146, + 236, + 463, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 236, + 463, + 248 + ], + "spans": [ + { + "bbox": [ + 146, + 236, + 463, + 248 + ], + "type": "text", + "content": "In summary, " + }, + { + "bbox": [ + 146, + 236, + 463, + 248 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 146, + 236, + 463, + 248 + ], + "type": "text", + "content": " has several favorable properties for material editing:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 255, + 481, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 255, + 481, + 315 + ], + "spans": [ + { + "bbox": [ + 138, + 255, + 481, + 315 + ], + "type": "text", + "content": "- Zero-shot, training free, single-image material transfer. By leveraging 2D generative priors, ZeST works in a zero-shot manner without needing dataset finetuning. Unlike some contemporary works [50] that implicitly capture material properties using several material images, ZeST only needs a single material exemplar image to transfer the material in pixel space." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 139, + 315, + 481, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 315, + 481, + 361 + ], + "spans": [ + { + "bbox": [ + 139, + 315, + 481, + 361 + ], + "type": "text", + "content": "- No explicit 3D, illumination or materials. With 2D depth and segmentation estimation (which are readily available these days) and implicit material transfer, we eliminate the need for explicit specification of 3D meshes, illumination or material properties (say, in terms of BRDF)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 139, + 362, + 481, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 362, + 481, + 422 + ], + "spans": [ + { + "bbox": [ + 139, + 362, + 481, + 422 + ], + "type": "text", + "content": "- Several downstream applications. Given the simplistic and practical nature of our approach, ZeST can be used for several downstream graphics applications such as applying pre-designed materials to real-world images, editing multiple object materials in a single image, and perform lighting-aware material transfer given untextured mesh renderings." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 438, + 237, + 451 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 438, + 237, + 451 + ], + "spans": [ + { + "bbox": [ + 132, + 438, + 237, + 451 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 462, + 482, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 462, + 482, + 533 + ], + "spans": [ + { + "bbox": [ + 130, + 462, + 482, + 533 + ], + "type": "text", + "content": "Diffusion Models. Denoising Diffusion Probabilistic models have emerged as the state-of-the-art for class-conditional and text-prompt conditioned image generation [18, 23-27, 43]. 
These models generate photorealistic images with exemplary geometry, materials, illumination, and scene composition. The models have been extended to be conditioned on input images for computational photography tasks such as super-resolution, style transfer, and inpainting." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": "Further work demonstrate controllable generation conditioned on text-based instructions [8,20,22,46], semantic segmentation [4], bounding box [11,30,47,48], depth [6,53], sketch [34,51], and image prompt [49]. Prompt-to-prompt and Prompt+ edit the input image by performing inversion followed by the introduction of new terms and reweighting the effect of terms in the input prompt [22,46]. InstructPix2Pix performs edits an input image conditioned on an instruction [7]. Ge et al. proposed rich text based image editing allowing for style assignment and specific description to specific terms in the prompt [20]. While these methods edit the image semantically and high-level descriptions, assigning specific materials using text-based approach is challenging since text acts as a limiting modality for describing textures." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": "A collection of reference images can be used to learn concepts which can be further included in text prompts to generate images with the learned concepts [12, 29, 40]. Spatial modalities such as depth and sketches have been used for controlling the generated images [34, 49, 51]. Pre-trained text-to-image models can be leveraged for 3D-aware image editing using language and depth cues [13, 33, 35]. The use of ControlNet has been extended by Bhat et al. to use depth for controlling the scene composition while maintaining other scene attributes [6]. Object orientation, illumination, and other object attributes can be controlled in a continuous manner using ControlNet and learned continuous tokens embedding the 3D properties [13]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 236, + 482, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 236, + 482, + 368 + ], + "spans": [ + { + "bbox": [ + 130, + 236, + 482, + 368 + ], + "type": "text", + "content": "Material acquisition and editing. Material acquisition and editing is an active field of research taking into account illumination and object geometry. 
Previous work has demonstrated material acquisition under known illumination conditions and camera [2,3,17]. Such acquisition in the wild requires localizing objects with similar materials, which has been facilitated by supervised material segmentation and leveraging pre-trained vision representation backbones [5,31,42,45]. Khan et al. introduced in-image material editing using estimates of depth [28]. Recent works have employed generative adversarial networks [21] for perceptual material editing [16, 44] and physical shader-based editing using text-to-image models [41]. The use of generative models has been extended to explicitly learning materials [32] and texturing 3D meshes [9, 10, 39, 50]." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 368, + 482, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 368, + 482, + 416 + ], + "spans": [ + { + "bbox": [ + 130, + 368, + 482, + 416 + ], + "type": "text", + "content": "In our work, we aim to use pre-trained image generation diffusion models to perform exemplar-based material transfer from a single image. We aim to use ControlNet and IP-adapter to perform material transfer in a zero-shot way without any training." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 434, + 202, + 446 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 434, + 202, + 446 + ], + "spans": [ + { + "bbox": [ + 132, + 434, + 202, + 446 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 459, + 482, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 459, + 482, + 533 + ], + "spans": [ + { + "bbox": [ + 130, + 459, + 482, + 533 + ], + "type": "text", + "content": "In this section, we describe our method ZeST that performs exemplar-based material transfer. Recent methods perform the related problem of texture synthesis on meshes [39,50] by finetuning a diffusion model on 3-5 material exemplar images to capture the texture/material in the latent space. On the contrary, ZeST only requires a single material exemplar image and a single input image, accomplishing material transfer in a zero-shot, training-free manner." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 549, + 244, + 562 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 549, + 244, + 562 + ], + "spans": [ + { + "bbox": [ + 132, + 549, + 244, + 562 + ], + "type": "text", + "content": "3.1 Problem Setting" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "content": "Given a material exemplar image " + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "content": " and an input image " + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "content": ", we aim to output an edited image " + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "inline_equation", + "content": "I_{gen}" + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 570, + 482, + 641 + ], + "type": "text", + "content": " by transferring the material from the material exemplar to the object in the input image while preserving other object and scene properties (e.g. object geometry, background, lighting etc.). Performing this task requires understanding the material, geometry, and illumination from both the exemplar and the input image." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "text", + "content": "In practice, estimating all the aforementioned object-level properties and further isolating material information explicitly from " + }, + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "text", + "content": " is challenging since these" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 135, + 116, + 482, + 259 + ], + "blocks": [ + { + "bbox": [ + 135, + 116, + 482, + 259 + ], + "lines": [ + { + "bbox": [ + 135, + 116, + 482, + 259 + ], + "spans": [ + { + "bbox": [ + 135, + 116, + 482, + 259 + ], + "type": "image", + "image_path": "c9ffb1be6ad5f4a0031561bda8ba79984da35379db8ace660683b4fa68fc9eaa.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "lines": [ + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "spans": [ + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "text", + "content": "Fig. 2: ZeST Architecture. Given a material exemplar " + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "text", + "content": " and an input image " + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "text", + "content": ", we first encode material exemplar with an image encoder (e.g., IP-Adaptor). Concurrently, we convert the input image into a depth map " + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "inline_equation", + "content": "D_I" + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "text", + "content": " and a foreground-grayscale image " + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 130, + 269, + 482, + 346 + ], + "type": "text", + "content": " to feed into the geometry and latent illumination guidance branch, respectively. By combining the two sources of guidance with the latent features from the material encoding, ZeST can transfer the material properties onto the object in input image while preserving all other attributes." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "spans": [ + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "text", + "content": "properties are entangled in the pixel space. Therefore, we propose to tackle this problem in the latent space of diffusion models. Specifically, we aim to extract a latent representation " + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "text", + "content": " containing the material and texture information that we can then inject into a generative diffusion model " + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "text", + "content": " to generate " + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "inline_equation", + "content": "I_{gen}" + }, + { + "bbox": [ + 130, + 371, + 482, + 421 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 438, + 240, + 450 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 438, + 240, + 450 + ], + "spans": [ + { + "bbox": [ + 132, + 438, + 240, + 450 + ], + "type": "text", + "content": "3.2 ZeST Overview" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 461, + 482, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 461, + 482, + 557 + ], + "spans": [ + { + "bbox": [ + 130, + 461, + 482, + 557 + ], + "type": "text", + "content": "Since there exists no synthetic/real image dataset to supervise the learning of a 2D-to-2D material transfer, we perform the material transfer in a zero-shot training-free manner. We first break down this complex task into sub-problems of (1) encoding the material exemplar, (2) geometry-guided image editing, and (3) making the generation process illumination-aware. Given the recent advances in high-fidelity diffusion models and complementary adapters for image generation, we leverage existing pre-trained modules to tackle each of the sub-problems that together compose our pipeline to perform image-prompted material editing." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "text", + "content": "Figure 2 presents an overview of our pipeline, which comprises three branches to guide the material, geometry, and lighting information, respectively. The Material Encoding branch takes the material exemplar image " + }, + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "text", + "content": " as input, which is processed by the image encoder to obtain a material latent representation " + }, + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 558, + 482, + 605 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": "Concurrently, we feed the input image " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " into Geometry Guidance and Latent Illumination Guidance Branch. The Geometry Guidance branch computes the depth map " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "D_I" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " for the image " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": ", which is used as the input to ControlNet. 
The Latent Illumination Guidance branch computes a foreground mask " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "F" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " using " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " and creates a foreground-grayscale image " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": ", which we use as input to the" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 122, + 197, + 186 + ], + "blocks": [ + { + "bbox": [ + 133, + 122, + 197, + 186 + ], + "lines": [ + { + "bbox": [ + 133, + 122, + 197, + 186 + ], + "spans": [ + { + "bbox": [ + 133, + 122, + 197, + 186 + ], + "type": "image", + "image_path": "dc6935d2a561cbfe10c6dad61839dae2bac28938bf7d8f47760b768d9c3628a7.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 145, + 187, + 186, + 194 + ], + "lines": [ + { + "bbox": [ + 145, + 187, + 186, + 194 + ], + "spans": [ + { + "bbox": [ + 145, + 187, + 186, + 194 + ], + "type": "text", + "content": "Material Exemplar" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 199, + 115, + 267, + 186 + ], + "blocks": [ + { + "bbox": [ + 199, + 115, + 267, + 186 + ], + "lines": [ + { + "bbox": [ + 199, + 115, + 267, + 186 + ], + "spans": [ + { + "bbox": [ + 199, + 115, + 267, + 186 + ], + "type": "image", + "image_path": "1b944240321436b305e779b9ec8c5e4d64a1e884bc7216c713f967edd8fd8585.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 220, + 188, + 247, + 194 + ], + "lines": [ + { + "bbox": [ + 220, + 188, + 247, + 194 + ], + "spans": [ + { + "bbox": [ + 220, + 188, + 247, + 194 + ], + "type": "text", + "content": "Input Image" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 269, + 122, + 334, + 186 + ], + "blocks": [ + { + "bbox": [ + 269, + 122, + 334, + 186 + ], + "lines": [ + { + "bbox": [ + 269, + 122, + 334, + 186 + ], + "spans": [ + { + "bbox": [ + 269, + 122, + 334, + 186 + ], + "type": "image", + "image_path": "6bd85701878cebfeba320524d9e9e75a29c3f0d34527ae6f86e5039f98770de3.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 271, + 187, + 331, + 194 + ], + "lines": [ + { + "bbox": [ + 271, + 187, + 331, + 194 + ], + "spans": [ + { + "bbox": [ + 271, + 187, + 331, + 194 + ], + "type": "text", + "content": "Estimated Depth 
(Optional)" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 130, + 205, + 482, + 274 + ], + "lines": [ + { + "bbox": [ + 130, + 205, + 482, + 274 + ], + "spans": [ + { + "bbox": [ + 130, + 205, + 482, + 274 + ], + "type": "text", + "content": "Fig. 3: The design choice of IP-Adaptor with ControlNet. Given the material exemplar and the input image, we dive into the different choices of utilizing the IP-Adaptor. In particular we realize that an " + }, + { + "bbox": [ + 130, + 205, + 482, + 274 + ], + "type": "inline_equation", + "content": "\\mathrm{Img2Img + }" + }, + { + "bbox": [ + 130, + 205, + 482, + 274 + ], + "type": "text", + "content": " text module (a) wouldn't properly transfer the materials properly to the main object. On the other hand, ControlNet (b) will preserve the geometry information of the given input. We thus utilize this as the starting point for geometry guidance to further explore the best illumination cues." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 349, + 123, + 413, + 186 + ], + "blocks": [ + { + "bbox": [ + 384, + 115, + 444, + 122 + ], + "lines": [ + { + "bbox": [ + 384, + 115, + 444, + 122 + ], + "spans": [ + { + "bbox": [ + 384, + 115, + 444, + 122 + ], + "type": "text", + "content": "IP-Adaptor Combinations" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 349, + 123, + 413, + 186 + ], + "lines": [ + { + "bbox": [ + 349, + 123, + 413, + 186 + ], + "spans": [ + { + "bbox": [ + 349, + 123, + 413, + 186 + ], + "type": "image", + "image_path": "22b4268e020d42b8c2fbee4e6eb6af9622fe89df468937ea572597296f609790.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 359, + 187, + 402, + 194 + ], + "lines": [ + { + "bbox": [ + 359, + 187, + 402, + 194 + ], + "spans": [ + { + "bbox": [ + 359, + 187, + 402, + 194 + ], + "type": "text", + "content": "(a) " + }, + { + "bbox": [ + 359, + 187, + 402, + 194 + ], + "type": "inline_equation", + "content": "\\mathrm{Img2Img + Text}" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 416, + 123, + 481, + 186 + ], + "blocks": [ + { + "bbox": [ + 416, + 123, + 481, + 186 + ], + "lines": [ + { + "bbox": [ + 416, + 123, + 481, + 186 + ], + "spans": [ + { + "bbox": [ + 416, + 123, + 481, + 186 + ], + "type": "image", + "image_path": "3607dc967fec1149fcf655e858161e3f7399f396cd0b21c31fdf4f0527739638.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 425, + 187, + 473, + 194 + ], + "lines": [ + { + "bbox": [ + 425, + 187, + 473, + 194 + ], + "spans": [ + { + "bbox": [ + 425, + 187, + 473, + 194 + ], + "type": "text", + "content": "(b) ControlNet Model" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "spans": [ + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "content": "Diffusion Inpainting pipeline. 
We concatenate the embeddings from ControlNet with the inpainting diffusion model at the corresponding and inject the material embedding " + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "content": " through the cross-attention. The output of the inpainting diffusion model, " + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "inline_equation", + "content": "I_{gen}" + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "content": ", with the edited image containing the object in " + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "content": " cast with material from exemplar image " + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 295, + 480, + 354 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 355, + 480, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 355, + 480, + 379 + ], + "spans": [ + { + "bbox": [ + 130, + 355, + 480, + 379 + ], + "type": "text", + "content": "Our design choices to facilitate computation of material embedding, geometry guidance, and illumination cues are discussed in the following sections." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 131, + 396, + 307, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 396, + 307, + 407 + ], + "spans": [ + { + "bbox": [ + 131, + 396, + 307, + 407 + ], + "type": "text", + "content": "3.3 Encoding Material Exemplar" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 414, + 482, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 414, + 482, + 510 + ], + "spans": [ + { + "bbox": [ + 130, + 414, + 482, + 510 + ], + "type": "text", + "content": "Given the material exemplar image " + }, + { + "bbox": [ + 130, + 414, + 482, + 510 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 414, + 482, + 510 + ], + "type": "text", + "content": ", this branch encodes the image into a latent representation while preserving its material properties. Previous works [39, 50] address this by finetuning a text-to-image diffusion model to encode the image into a rare token, implicitly treating the rare token as a latent representation that can be used in conjunction with other texts for image generation. However, this approach of optimizing for the material token requires the time-consuming step for every new material exemplar and usually requires 3-5 images to prevent overfitting." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 130, + 510, + 482, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 582 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 582 + ], + "type": "text", + "content": "We draw inspiration from the recently introduced IP-Adapter [49]. The IP adapter uses a CLIP image encoder to extract image features that can be injected into a diffusion model via the cross-attention layers. These features can be used as an additional condition to guide text prompts or other mediums for the generation. For example, one can input an image of a person and then describe \"on the mountain\" with text to obtain an image of the person in the mountains." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": "However, we realize that IP-Adaptor does not work well when combined with an Img2Img pipeline, as shown in Figure 3 (a) for our task. Moreover, adding text guidances like \"changing the apple texture to golden bowl\" does not produce photorealistic output and does not preserve other scene information (i.e. background). This problem of geometry and material entanglement within material embedding " + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": " remains unsolved, thus motivating the need for geometry and illumination guidance." + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 373, + 128 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 373, + 128 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 373, + 128 + ], + "type": "text", + "content": "3.4 Geometry Guidance via Depth Estimation" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "spans": [ + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "content": "Since decoupling geometry and material properties in images is challenging and requires additional training data, we provide an alternative solution where we enforce a stronger geometry prior to the diffusion model to overwrite the structural information present in " + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "content": ". To this end, we adopt a depth-based ControlNet to provide geometry guidance from the input image " + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "content": ". We observe that the geometry information from the depth map " + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "inline_equation", + "content": "D_{I}" + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "content": " overwrites the geometry information encoded in the " + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 134, + 482, + 242 + ], + "type": "text", + "content": " (see Figure 3 (b)). 
Note that with the geometry enforced by using depth-based ControlNet, we can successfully transfer the golden material of the bowl to the apple." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 242, + 482, + 325 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 242, + 482, + 325 + ], + "spans": [ + { + "bbox": [ + 130, + 242, + 482, + 325 + ], + "type": "text", + "content": "While the use of ControlNet with IP-Adaptor is introduced in the original IP-Adaptor paper [49], we employ it for a different purpose contrary to applying new structural control over an object in the image (e.g., changing a person's pose). After extensively comparing various components for encoding the material exemplar and input image (analysis in Section 4.2), we find the depth-based guidance from pre-trained ControlNet helps us preserve the original geometry of the object for the task of material transfer." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 326, + 482, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 326, + 482, + 374 + ], + "spans": [ + { + "bbox": [ + 130, + 326, + 482, + 374 + ], + "type": "text", + "content": "While the addition of ControlNet helps preserve the geometry, we observe that the results suffer from inconsistency in preserving the illumination and background from the input image. This is evident in Figure 3, where the background and the lighting changes differ from the input." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 391, + 340, + 403 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 391, + 340, + 403 + ], + "spans": [ + { + "bbox": [ + 131, + 391, + 340, + 403 + ], + "type": "text", + "content": "3.5 Latent-space Illumination Guidance" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 409, + 482, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 409, + 482, + 491 + ], + "spans": [ + { + "bbox": [ + 130, + 409, + 482, + 491 + ], + "type": "text", + "content": "Our final branch is primarily responsible for preserving the illumination and background in the input image. We propose two-fold guidance for illumination in the latent space during generation - an inpainting module and a foreground decoloring process. In addition to the attached IP-Adaptor and ControlNet, we adopt an inpainting diffusion model " + }, + { + "bbox": [ + 130, + 409, + 482, + 491 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 130, + 409, + 482, + 491 + ], + "type": "text", + "content": " instead of a standard generator. 
Specifically, our ControlNet-inpainting procedure takes in four conditions for image generation:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 249, + 494, + 480, + 506 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 249, + 494, + 480, + 506 + ], + "spans": [ + { + "bbox": [ + 249, + 494, + 480, + 506 + ], + "type": "interline_equation", + "content": "I _ {g e n} = \\mathcal {S} \\left(z _ {M}, D _ {I}, I _ {\\text {i n i t}}, F\\right), \\tag {1}", + "image_path": "07465c642b6c2aeefa2b395990e765f37f6ecf396ca2940405320068bcdf955c.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "z_{M}" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": " is the material encoding, " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "D_{I}" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": " is the depth map computed for input image " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": " is the initial image to denoise from, and " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "F" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": " is the foreground mask of target object in " + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 510, + 482, + 545 + ], + "type": "text", + "content": " which we are editing." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": "We conduct an ablation on the various versions of " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ", as shown in Figure 4. Specifically, we test out the following settings: (1) using the original input image, (2) initializing the foreground with random noise, and (3) using the foreground grayscale image. Intuitively, directly letting " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "I_{init} = I" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": " (Setting (1)) would be a preferable option as " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": " encompasses implicit lighting information (from the object's shading and the surrounding environment) while conveniently enforces all other parts of the image other than the object to remain the same. 
In practice, however, we found that using the original image inevitably introduces a strong prior of the base color from the input object (e.g. orange color of pumpkin), which would be entangled with the material base color from " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": " in the output" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 113, + 301, + 205 + ], + "blocks": [ + { + "bbox": [ + 133, + 113, + 301, + 205 + ], + "lines": [ + { + "bbox": [ + 133, + 113, + 301, + 205 + ], + "spans": [ + { + "bbox": [ + 133, + 113, + 301, + 205 + ], + "type": "image", + "image_path": "dfb0ba38f010079d23d3006d67be07164813a0db110694a17879009e1741743a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 213, + 482, + 269 + ], + "lines": [ + { + "bbox": [ + 130, + 213, + 482, + 269 + ], + "spans": [ + { + "bbox": [ + 130, + 213, + 482, + 269 + ], + "type": "text", + "content": "Fig. 4: Ablating input for illumination guidance. To validate our design choice of the foreground-grayscale image for initializing inpainting, we compare the generated results against using the original image and random noise as inputs. The original image presents a strong base color prior that perturbs the generation, while the random image neglects shading information, leading to wrong lighting in both examples." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 314, + 113, + 482, + 205 + ], + "blocks": [ + { + "bbox": [ + 314, + 113, + 482, + 205 + ], + "lines": [ + { + "bbox": [ + 314, + 113, + 482, + 205 + ], + "spans": [ + { + "bbox": [ + 314, + 113, + 482, + 205 + ], + "type": "image", + "image_path": "bd3ab1897a25bc22014e0737d3919a802c5920777df6996f1bfca69166cddb85.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 290, + 482, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 290, + 482, + 398 + ], + "spans": [ + { + "bbox": [ + 130, + 290, + 482, + 398 + ], + "type": "text", + "content": "image. This artifact is sustained even when we significantly extend the number of denoising steps. On the other hand, when initializing " + }, + { + "bbox": [ + 130, + 290, + 482, + 398 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 130, + 290, + 482, + 398 + ], + "type": "text", + "content": " with random noise, the method indeed removes the base color prior but also removes the shading information causing incorrect illuminations in the synthesized object (e.g., the left side of the synthesized pumpkin is darker, but light is coming from the left). 
In our proposed pipeline, we perform grayscale operations in the pixel space for the object region (3). This provides a balanced solution of removing the strong color priors from the input image while keeping the shading cues for the inpainting diffusion model." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 147, + 399, + 317, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 399, + 317, + 411 + ], + "spans": [ + { + "bbox": [ + 147, + 399, + 317, + 411 + ], + "type": "text", + "content": "Thus, we propose to initialize " + }, + { + "bbox": [ + 147, + 399, + 317, + 411 + ], + "type": "inline_equation", + "content": "I_{init}" + }, + { + "bbox": [ + 147, + 399, + 317, + 411 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 235, + 418, + 480, + 433 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 235, + 418, + 480, + 433 + ], + "spans": [ + { + "bbox": [ + 235, + 418, + 480, + 433 + ], + "type": "interline_equation", + "content": "I _ {\\text {i n i t}} = F \\odot I _ {\\text {g r a y}} + (1 - F) \\odot I, \\tag {2}", + "image_path": "d084ec455d7f18c760e33a389c6d726dd64d1b820d4704b48bfc4c386234372f.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "spans": [ + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "text", + "content": "which converts the color of foreground object in the image to grayscale. " + }, + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "inline_equation", + "content": "(1 - F)\\odot I" + }, + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "text", + "content": " implicitly preserves the lighting direction, intensity, and color information, and " + }, + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "inline_equation", + "content": "F\\odot I_{gray}" + }, + { + "bbox": [ + 131, + 439, + 482, + 477 + ], + "type": "text", + "content": " preserves the object's shading information without base color prior." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 491, + 279, + 504 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 491, + 279, + 504 + ], + "spans": [ + { + "bbox": [ + 131, + 491, + 279, + 504 + ], + "type": "text", + "content": "3.6 Implementation Details" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 510, + 482, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 583 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 583 + ], + "type": "text", + "content": "We implement our method using Stable Diffusion XL Inpainting [36] with the corresponding version of depth-based ControlNet [51] and IP-Adaptor [49]. We use Dense Prediction Transformers for depth estimation [38] and " + }, + { + "bbox": [ + 130, + 510, + 482, + 583 + ], + "type": "inline_equation", + "content": "\\mathrm{Rembg}^1" + }, + { + "bbox": [ + 130, + 510, + 482, + 583 + ], + "type": "text", + "content": " for foreground extraction. Our method is implemented in PyTorch and runs on a single Nvidia A-10 GPU with 24 GB of RAM. For all Dreambooth approaches, we use the official LoRA-Dreambooth provided by Diffusers." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 600, + 230, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 600, + 230, + 613 + ], + "spans": [ + { + "bbox": [ + 132, + 600, + 230, + 613 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 623, + 481, + 648 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 623, + 481, + 648 + ], + "spans": [ + { + "bbox": [ + 130, + 623, + 481, + 648 + ], + "type": "text", + "content": "We evaluate the efficacy of our method against various baselines. We also present several examples of downstream applications using our method." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 652, + 315, + 666 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 652, + 315, + 666 + ], + "spans": [ + { + "bbox": [ + 133, + 652, + 315, + 666 + ], + "type": "text", + "content": "1 https://github.com/danielgatis/rembg" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 114, + 218, + 171 + ], + "blocks": [ + { + "bbox": [ + 134, + 114, + 218, + 171 + ], + "lines": [ + { + "bbox": [ + 134, + 114, + 218, + 171 + ], + "spans": [ + { + "bbox": [ + 134, + 114, + 218, + 171 + ], + "type": "image", + "image_path": "6178885b58e0aebe464438f43e9c3250ffa0e9a3880bac37227b27afca8c6b0c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 221, + 114, + 305, + 171 + ], + "blocks": [ + { + "bbox": [ + 221, + 114, + 305, + 171 + ], + "lines": [ + { + "bbox": [ + 221, + 114, + 305, + 171 + ], + "spans": [ + { + "bbox": [ + 221, + 114, + 305, + 171 + ], + "type": "image", + "image_path": "2d0fc42514cc3fbc8bd7bf0be413ae7c1d2c538cc0318303c56a676653ac0d22.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 310, + 114, + 393, + 171 + ], + "blocks": [ + { + "bbox": [ + 310, + 114, + 393, + 171 + ], + "lines": [ + { + "bbox": [ + 310, + 114, + 393, + 171 + ], + "spans": [ + { + "bbox": [ + 310, + 114, + 393, + 171 + ], + "type": "image", + "image_path": "76b79bb3a122c5a49ecc392b43b15a82d113f1b4cccbc1c3be77833fafad970b.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 399, + 114, + 481, + 171 + ], + "blocks": [ + { + "bbox": [ + 399, + 114, + 481, + 171 + ], + "lines": [ + { + "bbox": [ + 399, + 114, + 481, + 171 + ], + "spans": [ + { + "bbox": [ + 399, + 114, + 481, + 171 + ], + "type": "image", + "image_path": "1e8c4e41f24bd08f92dc4069b7f70da0ae9e1c95f725a6e98bea54d1d91cd1b7.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": 
"image", + "bbox": [ + 134, + 172, + 217, + 224 + ], + "blocks": [ + { + "bbox": [ + 134, + 172, + 217, + 224 + ], + "lines": [ + { + "bbox": [ + 134, + 172, + 217, + 224 + ], + "spans": [ + { + "bbox": [ + 134, + 172, + 217, + 224 + ], + "type": "image", + "image_path": "2e143f320104a5c8a57d4e2dc3ed1e482e8eb5da770c0cda4f4268012aea2ffa.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 221, + 172, + 304, + 224 + ], + "blocks": [ + { + "bbox": [ + 221, + 172, + 304, + 224 + ], + "lines": [ + { + "bbox": [ + 221, + 172, + 304, + 224 + ], + "spans": [ + { + "bbox": [ + 221, + 172, + 304, + 224 + ], + "type": "image", + "image_path": "0445dbb71d03599314859e6f8a6c286195d1164eca21031cd037659f19aa8afe.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 309, + 172, + 392, + 224 + ], + "blocks": [ + { + "bbox": [ + 309, + 172, + 392, + 224 + ], + "lines": [ + { + "bbox": [ + 309, + 172, + 392, + 224 + ], + "spans": [ + { + "bbox": [ + 309, + 172, + 392, + 224 + ], + "type": "image", + "image_path": "6f3c47923f0b4dafc07d9fc88a650e75ea78daeedf83e51d43c259a325a22dc0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 399, + 172, + 481, + 224 + ], + "blocks": [ + { + "bbox": [ + 399, + 172, + 481, + 224 + ], + "lines": [ + { + "bbox": [ + 399, + 172, + 481, + 224 + ], + "spans": [ + { + "bbox": [ + 399, + 172, + 481, + 224 + ], + "type": "image", + "image_path": "93803b5b6995bc32b9adab9fbb113e7a290073a76843ee23e5dba0eb6786fe92.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 134, + 225, + 217, + 277 + ], + "blocks": [ + { + "bbox": [ + 134, + 225, + 217, + 277 + ], + "lines": [ + { + "bbox": [ + 134, + 225, + 217, + 277 + ], + "spans": [ + { + "bbox": [ + 134, + 225, + 217, + 277 + ], + "type": "image", + "image_path": "186649ce998e725587dfee773e43632daebb768cd49311e183e748da0cca013f.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 221, + 225, + 304, + 277 + ], + "blocks": [ + { + "bbox": [ + 221, + 225, + 304, + 277 + ], + "lines": [ + { + "bbox": [ + 221, + 225, + 304, + 277 + ], + "spans": [ + { + "bbox": [ + 221, + 225, + 304, + 277 + ], + "type": "image", + "image_path": "531ebe7a8d83b8995883edd1b18b40520c6628c0171f9169a097658adbc2bf17.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 309, + 225, + 392, + 277 + ], + "blocks": [ + { + "bbox": [ + 309, + 225, + 392, + 277 + ], + "lines": [ + { + "bbox": [ + 309, + 225, + 392, + 277 + ], + "spans": [ + { + "bbox": [ + 309, + 225, + 392, + 277 + ], + "type": "image", + "image_path": "0dbe4f708f07dfe47e469f0d00252c9e37ba9cd3a46eab4ca69611c995eae68c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 399, + 225, + 481, + 277 + ], + "blocks": [ + { + "bbox": [ + 399, + 225, + 481, + 277 + ], + "lines": [ + { + "bbox": [ + 399, + 225, + 481, + 277 + ], + "spans": [ + { + "bbox": [ + 399, + 225, + 481, + 277 + ], + "type": "image", + "image_path": "b6a8f5b397dcf37daceefe3534854ae8048b8a4051b13ed7c04b52130db20368.jpg" + } + ] + } + ], + "index": 13, + 
"angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 134, + 278, + 217, + 330 + ], + "blocks": [ + { + "bbox": [ + 134, + 278, + 217, + 330 + ], + "lines": [ + { + "bbox": [ + 134, + 278, + 217, + 330 + ], + "spans": [ + { + "bbox": [ + 134, + 278, + 217, + 330 + ], + "type": "image", + "image_path": "8c32fd0fca2c10a4c927731af791fb25aa213d84db7559f2b05df532f66af22b.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 340, + 482, + 407 + ], + "lines": [ + { + "bbox": [ + 130, + 340, + 482, + 407 + ], + "spans": [ + { + "bbox": [ + 130, + 340, + 482, + 407 + ], + "type": "text", + "content": "Fig. 5: Qualitative results on diverse materials. We present results of material transfer from a diverse set of material exemplar images. Even when perturbed by lighting and complex geometry, ZeST can still isolate the material information from the exemplar image and transfer to various objects while preserving the original geometry and illumination conditions. Note the change in specular regions as shinier materials are chosen in the case of the car made of brass and the dinosaur made of shiny steel." + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 221, + 278, + 304, + 330 + ], + "blocks": [ + { + "bbox": [ + 221, + 278, + 304, + 330 + ], + "lines": [ + { + "bbox": [ + 221, + 278, + 304, + 330 + ], + "spans": [ + { + "bbox": [ + 221, + 278, + 304, + 330 + ], + "type": "image", + "image_path": "6df8986d7a2c797643fc9f431d4ea05abe77b8551f0173a5957fbd6cafa9aabf.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 309, + 278, + 393, + 330 + ], + "blocks": [ + { + "bbox": [ + 309, + 278, + 393, + 330 + ], + "lines": [ + { + "bbox": [ + 309, + 278, + 393, + 330 + ], + "spans": [ + { + "bbox": [ + 309, + 278, + 393, + 330 + ], + "type": "image", + "image_path": "7fd1b6458b6c16d311d0465d6b99f3421dcf8389233a81fa292fa46e79f937e3.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 399, + 278, + 481, + 330 + ], + "blocks": [ + { + "bbox": [ + 399, + 278, + 481, + 330 + ], + "lines": [ + { + "bbox": [ + 399, + 278, + 481, + 330 + ], + "spans": [ + { + "bbox": [ + 399, + 278, + 481, + 330 + ], + "type": "image", + "image_path": "5ef8b5d0b27fa8b2b4474859f925f23eedf603952baab612f5de50bff3fc532e.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 433, + 204, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 433, + 204, + 443 + ], + "spans": [ + { + "bbox": [ + 132, + 433, + 204, + 443 + ], + "type": "text", + "content": "4.1 Datasets" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 130, + 458, + 481, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 458, + 481, + 495 + ], + "spans": [ + { + "bbox": [ + 130, + 458, + 481, + 495 + ], + "type": "text", + "content": "As the first to propose this problem, we create two datasets for comparison and evaluation. The real-world datasets provide us an understanding of our model's robustness, while the synthetic dataset is used for standard quantitative metrics." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 130, + 495, + 481, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 495, + 481, + 555 + ], + "spans": [ + { + "bbox": [ + 130, + 495, + 481, + 555 + ], + "type": "text", + "content": "Real-World Dataset. We curate a dataset comprising of 30 diverse material exemplars and 30 input images, collected from copyright-free image sources (i.e. Unsplash) and images generated by DALLE-3. All of these images are object-centric, where there exists a main object in the foreground to which we are extracting the material from or applying the material onto." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 130, + 556, + 481, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 556, + 481, + 640 + ], + "spans": [ + { + "bbox": [ + 130, + 556, + 481, + 640 + ], + "type": "text", + "content": "Synthetic Dataset. To perform quantitative evaluation, we use Blender to create a synthesized dataset of 9 materials randomly initialized by adjusting the base color, metallic, and roughness, and 20 meshes of different categories from Objaverse [15] rendered at three random viewpoints each, generating 540 ground-truth renderings. We render spheres assigned with each material individually and use the rendered image the material exemplar and pre-textured mesh rendering as input for all methods." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 131, + 641, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 641, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 131, + 641, + 481, + 666 + ], + "type": "text", + "content": "While " + }, + { + "bbox": [ + 131, + 641, + 481, + 666 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 131, + 641, + 481, + 666 + ], + "type": "text", + "content": " is completely training-free, other methods of learning materials (e.g., Dreambooth) require further fine-tuning for every exemplar given. This" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 132, + 114, + 481, + 327 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 481, + 327 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 481, + 327 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 481, + 327 + ], + "type": "image", + "image_path": "866749292d78fd8febbf034a984acceac97a6c8424f47742c294fa09e6cadf35.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 335, + 482, + 403 + ], + "lines": [ + { + "bbox": [ + 130, + 335, + 482, + 403 + ], + "spans": [ + { + "bbox": [ + 130, + 335, + 482, + 403 + ], + "type": "text", + "content": "Fig. 6: Qualitative comparisons against baselines. Given the material exemplar and input image in the first column, we compare our method to five different baselines. 
Without any geometry guidance, all image editing baselines fail to impose the correct geometry of the input image. On the other hand, using Dreambooth with our geometry and illumination guidance often contains albedo shifts, potentially due to information loss when encoding material properties into a word token." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 425, + 481, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 425, + 481, + 450 + ], + "spans": [ + { + "bbox": [ + 130, + 425, + 481, + 450 + ], + "type": "text", + "content": "makes it infeasible to scale up the two datasets. Both our datasets are of comparable sizes to previous works on finetuning diffusion models [40, 50]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 467, + 257, + 478 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 467, + 257, + 478 + ], + "spans": [ + { + "bbox": [ + 132, + 467, + 257, + 478 + ], + "type": "text", + "content": "4.2 Qualitative Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 486, + 482, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 482, + 616 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 482, + 616 + ], + "type": "text", + "content": "Material transfer results on real images. To demonstrate the application of ZeST on a wide range of materials and objects, we present examples of material transfer in Figure 5. The first three rows present results on real-world images, while the fourth row shows results using PBR materials [1]. Based on the examples, we observe that the material is properly disentangled from the geometry in the material exemplar and follows the shape of the object in the input image. This is particularly evident in the results of the orange, frog, and Groot toy figure, where the material is completely flat. We also notice accurate shadings in the bust and table examples when comparing them against their inputs. In the car and toy dinosaur examples, the reflections from the exemplars are isolated from the textural patterns and cast reasonably based on the illumination cues." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": "Qualitative comparisons. Since our work is the first to perform material transfer in latent space, we modified existing methods to compare against. Specifically, since existing image-guided texture synthesis methods utilize Dreambooth for their first step to encode the textures from images into word tokens [14,39,50]," + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 200 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 200 + ], + "type": "text", + "content": "we set Dreambooth as the backbone for learning material properties and combine with text-guided image editing techniques for comparison, including MasaCtrl and Instruct-Pix2Pix, and using ZeST but swapping out the IP-Adaptor with text. While our method is training-free, Dreambooth requires finetuning for every material exemplar given. We also explore alternative options to combine with IP-Adaptor, including text-guided inpainting and Instruct-Pix2Pix with the prompt \"Change the texture of the object\"." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 200, + 480, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 200, + 480, + 355 + ], + "spans": [ + { + "bbox": [ + 130, + 200, + 480, + 355 + ], + "type": "text", + "content": "We present qualitative comparisons against the baselines on four exemplar and input images in Figure 6. By using Inpainting with Text prompt instead of ControlNet, the model ignores the geometry of the original input when casting the materials. In both cases when using Instruct-Pix2Pix (with IP-Adaptor or Dreambooth), the geometry of all objects is better preserved, but the model fails to capture the material property from the material exemplar image. The combination of Dreambooth and MasaCtrl fails to preserve the geometry of the object in the input image and misattributes the material. The closest baseline to ours is Dreambooth with our proposed geometry and illumination guidance; however, we observe that the word encoding process results in some information loss as evident in the color shifts of the backpack and the astronaut figure. Furthermore, the method requires additional training for every material exemplar, whereas ZeST takes roughly 15 seconds to generate the image." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 356, + 479, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 356, + 479, + 416 + ], + "spans": [ + { + "bbox": [ + 130, + 356, + 479, + 416 + ], + "type": "text", + "content": "Our method, ZeST, performs the task effectively by retaining the object geometry, scene illumination, and attributing the material correctly. Additionally, note that ZeST adapts to more challenging material exemplar images, such as transparent materials (glass cup in Figure 6 Row 3) and images with other minor objects (additional hand in Figure 6 Row 4)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 432, + 293, + 444 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 432, + 293, + 444 + ], + "spans": [ + { + "bbox": [ + 131, + 432, + 293, + 444 + ], + "type": "text", + "content": "4.3 Quantitative Comparisons" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 450, + 479, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 450, + 479, + 533 + ], + "spans": [ + { + "bbox": [ + 130, + 450, + 479, + 533 + ], + "type": "text", + "content": "We follow previous work [41, 50] and use the synthetic images to compare all methods in terms of PSNR, LPIPS [52], and CLIP similarity score [37] against ground truth renderings. 
We also incorporate another DreamSim [19], a more recent metric that is more similar to human references. We grab IP-Adaptor + Instruct-Pix2Pix and Dreambooth + our geometry and illumination guidance as baselines, as they are the strongest (and only) performers from our qualitative comparisons that can roughly edit the material based on the geometry." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 533, + 479, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 479, + 605 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 479, + 605 + ], + "type": "text", + "content": "Table 1 (left) presents our results. We see a dramatic improvement when shifting from the instruct-pix2pix pipeline to our geometry and illumination guidance. While using Dreambooth performs similarly to our IP-Adaptor in the synthetic dataset, it requires a fine-tuned model for each material exemplar, making it unfeasible to scale up. In addition, we show in the next section that our method excels in real-world datasets." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 605, + 479, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 479, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 479, + 665 + ], + "type": "text", + "content": "- **User Study.** We also create a user study with 16 participants to understand the capability of our model given real-world materials tested on real images. Each subject is shown 5 random samples from the 900 combinations generated from the dataset with our method and against the two strongest baselines: Dreambooth + ControlNet-Inpainting and IP-Adaptor + Instruct-Pix2Pix. We ask" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 134, + 201, + 480, + 240 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 192 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 192 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 192 + ], + "type": "text", + "content": "Table 1: Quantitative Comparisons and User Study. We grab the strongest baselines in our qualitative comparisons for additional studies. Left: We measure the PSNR, LPIPS [52], CLIP similarity score [37], and DreamSim [19] in a quantitative study on the synthetic dataset of 540 exemplar-input combinations. Right: We perform a user study to evaluate the material fidelity and photorealism of the edited images from each method. We randomly sample 5 out of 900 real-world exemplar-input combinations for each of the 16 participants." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 134, + 201, + 480, + 240 + ], + "lines": [ + { + "bbox": [ + 134, + 201, + 480, + 240 + ], + "spans": [ + { + "bbox": [ + 134, + 201, + 480, + 240 + ], + "type": "table", + "html": "
<table><tr><td></td><td>PSNR↑</td><td>LPIPS↓</td><td>CLIP↑</td><td>DreamSim↓</td><td></td><td>Fidelity↑</td><td>Photorealism↑</td></tr>
<tr><td>IP-Adaptor + Instruct-Pix2Pix</td><td>17.08</td><td>0.099</td><td>0.740</td><td>0.390</td><td>IP-Adaptor + Instruct-Pix2Pix</td><td colspan='2'>1.48</td></tr>
<tr><td>DB + Our Geo/illum. Guidance</td><td>25.52</td><td>0.058</td><td>0.874</td><td>0.238</td><td>DB + Our Geo/illum. Guidance</td><td colspan='2'>3.25</td></tr>
<tr><td>Ours</td><td>25.59</td><td>0.053</td><td>0.883</td><td>0.198</td><td>Ours</td><td colspan='2'>4.05</td></tr></table>
", + "image_path": "dd6b14bc40f3355080ce6f1408f2f2afd9a06cbb0270e0b191549ea64431d764.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 133, + 251, + 301, + 335 + ], + "blocks": [ + { + "bbox": [ + 133, + 251, + 301, + 335 + ], + "lines": [ + { + "bbox": [ + 133, + 251, + 301, + 335 + ], + "spans": [ + { + "bbox": [ + 133, + 251, + 301, + 335 + ], + "type": "image", + "image_path": "03d604531205a5909388ad7508cf0ff37bac1bb7093e4759877b8ae88b282597.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 345, + 482, + 390 + ], + "lines": [ + { + "bbox": [ + 130, + 345, + 482, + 390 + ], + "spans": [ + { + "bbox": [ + 130, + 345, + 482, + 390 + ], + "type": "text", + "content": "Fig. 7: Robustness to lighting and object pose. We present two types of robustness testing. (a): Robustness to changing the material exemplar lighting and pose. (b): Zooming into the material exemplar. Our model yields highly similar results in both, showing the capability to adapt to these external changes." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 321, + 252, + 481, + 335 + ], + "blocks": [ + { + "bbox": [ + 321, + 252, + 481, + 335 + ], + "lines": [ + { + "bbox": [ + 321, + 252, + 481, + 335 + ], + "spans": [ + { + "bbox": [ + 321, + 252, + 481, + 335 + ], + "type": "image", + "image_path": "cb49cb9c9605a9a92ac337bb38c19cc241d7c49656a3731416fda08b5411ae9a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 413, + 480, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 413, + 480, + 460 + ], + "spans": [ + { + "bbox": [ + 130, + 413, + 480, + 460 + ], + "type": "text", + "content": "each subject to rate each image from 1 to 5 based on (1) material fidelity: how close the material in the generated image is compared to the original exemplar and (2) photorealism: how realistic the generated image is. Our results are summarized in Table 1 (right)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "spans": [ + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "type": "text", + "content": "Our results show significant improvements from the two baselines in both material fidelity and photorealism of the edited image. The score improvements are also greater in real-world scenarios compared to synthetic ones. This could be the result of information loss during finetuning and overfitting to the exemplar background, which is less significant under controlled synthetic scenarios." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 538, + 287, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 538, + 287, + 550 + ], + "spans": [ + { + "bbox": [ + 132, + 538, + 287, + 550 + ], + "type": "text", + "content": "4.4 Robustness of the Model" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 558, + 480, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 480, + 581 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 480, + 581 + ], + "type": "text", + "content": "In addition to the diverse set of results presented in Figure 5, we extensively test out the behavior of ZeST with special cases of material exemplar images." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "type": "text", + "content": "Relighting and rotating the object in the material exemplar image. A good material extractor should be agnostic to small lighting and rotation changes of the same object used as the material exemplar. To evaluate this, we render a random material and cast it onto an irregular-shaped pumpkin (another example is in the Appendix). We then render three samples of the pumpkin, a default lighting orientation, a change in lighting direction pitch by 120 degrees, and a random rotation, as shown in 7 (a). The transferred materials onto the dolphin" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 132, + 114, + 304, + 213 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 304, + 213 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 304, + 213 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 304, + 213 + ], + "type": "image", + "image_path": "f39176c7e08a26ed040d85404e355880ce6e9ad7c4f78e0c132486ae8f94f358.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 221, + 482, + 266 + ], + "lines": [ + { + "bbox": [ + 130, + 221, + 482, + 266 + ], + "spans": [ + { + "bbox": [ + 130, + 221, + 482, + 266 + ], + "type": "text", + "content": "Fig. 8: Multiple Material Transfers in a Single Image. By replacing the foreground extraction with an open-vocabulary segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be applied iteratively to cast different material properties to different objects in a single RGB image." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 304, + 114, + 482, + 213 + ], + "blocks": [ + { + "bbox": [ + 304, + 114, + 482, + 213 + ], + "lines": [ + { + "bbox": [ + 304, + 114, + 482, + 213 + ], + "spans": [ + { + "bbox": [ + 304, + 114, + 482, + 213 + ], + "type": "image", + "image_path": "3480b3bf84ce0fab480ff7cf033fb974d797600a4e79a1f3d9370c347579539d.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 132, + 277, + 480, + 345 + ], + "blocks": [ + { + "bbox": [ + 132, + 277, + 480, + 345 + ], + "lines": [ + { + "bbox": [ + 132, + 277, + 480, + 345 + ], + "spans": [ + { + "bbox": [ + 132, + 277, + 480, + 345 + ], + "type": "image", + "image_path": "ac3d6242c2d56fd8cff2a791c5dd6db9e50c30964736d667ad2e566cc244c101.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 356, + 482, + 389 + ], + "lines": [ + { + "bbox": [ + 130, + 356, + 482, + 389 + ], + "spans": [ + { + "bbox": [ + 130, + 356, + 482, + 389 + ], + "type": "text", + "content": "Fig.9: Lighting-aware Image Editing. Given a rendering of a textured mesh, we can alter " + }, + { + "bbox": [ + 130, + 356, + 482, + 389 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 130, + 356, + 482, + 389 + ], + "type": "text", + "content": " slightly to achieve lighting-aware material edit. It can be seen from both examples where the reflection can be disentangled from the object texture." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 413, + 480, + 436 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 413, + 480, + 436 + ], + "spans": [ + { + "bbox": [ + 130, + 413, + 480, + 436 + ], + "type": "text", + "content": "remain roughly consistent across all samples, showing that our method is fairly resistant to these changes at a small scale." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 437, + 482, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 437, + 482, + 521 + ], + "spans": [ + { + "bbox": [ + 130, + 437, + 482, + 521 + ], + "type": "text", + "content": "Effect of image scale of material exemplar image. To examine the effect of the scale of the material exemplar, we first use an image of a woolen cloth material with a distinctive repeating pattern and apply our method to an image of a chair. Then, we zoom into the exemplar image manually to the edge only very few repeated patterns are left. Our results in Figure 7 (b) show that while the scale of the material is drastically different, the model automatically re-adjusts the patterns into a reasonable size to be cast onto the input image." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 538, + 224, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 538, + 224, + 550 + ], + "spans": [ + { + "bbox": [ + 132, + 538, + 224, + 550 + ], + "type": "text", + "content": "4.5 Applications" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 558, + 482, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 482, + 629 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 482, + 629 + ], + "type": "text", + "content": "Applying multiple materials to multiple objects. 
By replacing the foreground extraction with a segmentation module (e.g., SAM) to obtain multiple masks, ZeST can be used to iteratively change multiple materials in a single image. Figure 8 presents two examples of editing multiple objects in a single image. As evident in the transparent glass chair where the wooden table behind is roughly visible, ZeST generalizes to complex scenes with multiple objects." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 630, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 630, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 630, + 482, + 666 + ], + "type": "text", + "content": "Lighting-aware Material Transfer. Given a material exemplar image and an untextured mesh rendered under multiple illumination conditions, " + }, + { + "bbox": [ + 130, + 630, + 482, + 666 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 130, + 630, + 482, + 666 + ], + "type": "text", + "content": " can also perform lighting-aware material transfer. Specifically, we first generate the" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 114, + 301, + 181 + ], + "blocks": [ + { + "bbox": [ + 133, + 114, + 301, + 181 + ], + "lines": [ + { + "bbox": [ + 133, + 114, + 301, + 181 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 301, + 181 + ], + "type": "image", + "image_path": "2bf2c1a730e8725f327018ba985c9482e8a8ecc413c023c737abf0387ae84527.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 189, + 482, + 245 + ], + "lines": [ + { + "bbox": [ + 130, + 189, + 482, + 245 + ], + "spans": [ + { + "bbox": [ + 130, + 189, + 482, + 245 + ], + "type": "text", + "content": "Fig. 10: Limitations. Our method primarily fails in two modes. (a) The model sometimes picks the most \"probable\" areas to transfer the material, instead of casting the material on the entire object. (b) If two textures are present in the exemplar image (e.g., foreground and background of the tennis ball, the glazed top and bottom logo of the cup), the model sometimes combine both materials when performing the edit." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 313, + 114, + 482, + 181 + ], + "blocks": [ + { + "bbox": [ + 313, + 114, + 482, + 181 + ], + "lines": [ + { + "bbox": [ + 313, + 114, + 482, + 181 + ], + "spans": [ + { + "bbox": [ + 313, + 114, + 482, + 181 + ], + "type": "image", + "image_path": "853490493141d119b79cb7ae57133a2f1ebedb10f4b4f16f3f15045c2665a7b8.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 266, + 482, + 375 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 266, + 482, + 375 + ], + "spans": [ + { + "bbox": [ + 130, + 266, + 482, + 375 + ], + "type": "text", + "content": "materials and textures of the image under Lighting 1 using ZeST. Then, by fixing the same seed during generation and using the generating image given the first lighting as the input to the second, we can enforce consistency in the material and texture generated (details of implementation in Appendix) while changing the reflections. We show examples of transferring the glazed cup material to two mesh renders in Figure 9. ZeST successfully disentangles the reflections while keeping most textural patterns consistent between the two images. This technique could potentially be applied jointly with other 3D texture synthesis works [10] and be helpful to applications such as e-commerce design." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 391, + 218, + 402 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 391, + 218, + 402 + ], + "spans": [ + { + "bbox": [ + 132, + 391, + 218, + 402 + ], + "type": "text", + "content": "4.6 Limitations" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "spans": [ + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "text", + "content": "Since " + }, + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "text", + "content": " operates majorly in the latent space, the model sometimes exhibits uncontrollable behaviors based on its image understanding. Figure 10 presents two forms of more frequent failure cases: (a) Partial material transfer: the material is only transferred to parts instead of the entirety of the object. We hypothesize that the failure stems from the entanglement of material properties and the exemplar's identity, as the material is only applied to where it seems the most probable (e.g., only apply the jacket material to the statue's body). (b) Blending multiple materials: since the current IP-Adaptor does not have a module to extract regions of an image for material transfer, " + }, + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "inline_equation", + "content": "ZeST" + }, + { + "bbox": [ + 130, + 409, + 482, + 529 + ], + "type": "text", + "content": " sometimes mixes up multiple materials in the exemplar image during transfer." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 546, + 220, + 559 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 546, + 220, + 559 + ], + "spans": [ + { + "bbox": [ + 132, + 546, + 220, + 559 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "content": "We present ZeST, a zero-shot, training-free method for exemplar-based material-editing. ZeST is built completely using readily available pre-trained models and demonstrates generalizable and robust results on real images. We curate synthetic and real image datasets to evaluate the performance of our approach. We also demonstrate downstream applications like multiple edits in a single image and material-aware relighting. ZeST serves as a strong starting point for future research in image-to-image material transfer, implying opportunities of leveraging pre-trained image diffusion models for complex graphic designing tasks." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 197, + 126 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 140, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 138, + 140, + 397, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 140, + 397, + 150 + ], + "spans": [ + { + "bbox": [ + 138, + 140, + 397, + 150 + ], + "type": "text", + "content": "1. https://wwwtexts.com/browse/pbr-materials/114558" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 151, + 481, + 172 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 151, + 481, + 172 + ], + "spans": [ + { + "bbox": [ + 138, + 151, + 481, + 172 + ], + "type": "text", + "content": "2. Aittala, M., Weyrich, T., Lehtinen, J.: Practical svbrdf capture in the frequency domain. ACM Trans. Graph. 32(4), 110-1 (2013)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 173, + 481, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 173, + 481, + 194 + ], + "spans": [ + { + "bbox": [ + 138, + 173, + 481, + 194 + ], + "type": "text", + "content": "3. Aittala, M., Weyrich, T., Lehtinen, J., et al.: Two-shot svbrdf capture for stationary materials. ACM Trans. Graph. 
34(4), 110-1 (2015)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 194, + 481, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 194, + 481, + 216 + ], + "spans": [ + { + "bbox": [ + 138, + 194, + 481, + 216 + ], + "type": "text", + "content": "4. Bar-Tal, O., Yariv, L., Lipman, Y., Dekel, T.: Multidiffusion: Fusing diffusion paths for controlled image generation (2023)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 217, + 481, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 217, + 481, + 248 + ], + "spans": [ + { + "bbox": [ + 138, + 217, + 481, + 248 + ], + "type": "text", + "content": "5. Bell, S., Upchurch, P., Snavely, N., Bala, K.: Material recognition in the wild with the materials in context database. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3479-3487 (2015)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 249, + 481, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 249, + 481, + 271 + ], + "spans": [ + { + "bbox": [ + 138, + 249, + 481, + 271 + ], + "type": "text", + "content": "6. Bhat, S.F., Mitra, N.J., Wonka, P.: Loosecontrol: Lifting controlnet for generalized depth conditioning. arXiv preprint arXiv:2312.03079 (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 271, + 481, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 271, + 481, + 304 + ], + "spans": [ + { + "bbox": [ + 138, + 271, + 481, + 304 + ], + "type": "text", + "content": "7. Brooks, T., Holynski, A., Efros, A.A.: Instructpix2pix: Learning to follow image editing instructions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18392-18402 (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 304, + 481, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 304, + 481, + 335 + ], + "spans": [ + { + "bbox": [ + 138, + 304, + 481, + 335 + ], + "type": "text", + "content": "8. Cao, M., Wang, X., Qi, Z., Shan, Y., Qie, X., Zheng, Y.: Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. arXiv preprint arXiv:2304.08465 (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 337, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 337, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 138, + 337, + 481, + 369 + ], + "type": "text", + "content": "9. Cao, T., Kreis, K., Fidler, S., Sharp, N., Yin, K.: Texfusion: Synthesizing 3d textures with text-guided image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4169-4181 (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 370, + 481, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 370, + 481, + 402 + ], + "spans": [ + { + "bbox": [ + 138, + 370, + 481, + 402 + ], + "type": "text", + "content": "10. Chen, D.Z., Siddiqui, Y., Lee, H.Y., Tulyakov, S., Nießner, M.: Text2tex: Text-driven texture synthesis via diffusion models. arXiv preprint arXiv:2303.11396 (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 403, + 481, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 403, + 481, + 435 + ], + "spans": [ + { + "bbox": [ + 138, + 403, + 481, + 435 + ], + "type": "text", + "content": "11. 
Chen, M., Laina, I., Vedaldi, A.: Training-free layout control with cross-attention guidance. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 5343-5353 (2024)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 435, + 481, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 435, + 481, + 468 + ], + "spans": [ + { + "bbox": [ + 138, + 435, + 481, + 468 + ], + "type": "text", + "content": "12. Chen, W., Hu, H., Li, Y., Ruiz, N., Jia, X., Chang, M.W., Cohen, W.W.: Subject-driven text-to-image generation via apprenticeship learning. Advances in Neural Information Processing Systems 36 (2024)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 468, + 481, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 468, + 481, + 501 + ], + "spans": [ + { + "bbox": [ + 138, + 468, + 481, + 501 + ], + "type": "text", + "content": "13. Cheng, T.Y., Gadelha, M., Groueix, T., Fisher, M., Mech, R., Markham, A., Trigoni, N.: Learning continuous 3d words for text-to-image generation. arXiv preprint arXiv:2402.08654 (2024)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "spans": [ + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "type": "text", + "content": "14. Corneanu, C., Gadde, R., Martinez, A.M.: Latentpaint: Image inpainting in latent space with diffusion models. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 4334-4343 (2024)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 534, + 481, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 534, + 481, + 578 + ], + "spans": [ + { + "bbox": [ + 138, + 534, + 481, + 578 + ], + "type": "text", + "content": "15. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A universe of annotated 3d objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13142-13153 (2023)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 578, + 481, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 578, + 481, + 610 + ], + "spans": [ + { + "bbox": [ + 138, + 578, + 481, + 610 + ], + "type": "text", + "content": "16. Delanoy, J., Lagunas, M., Condor, J., Gutierrez, D., Masia, B.: A generative framework for image-based editing of material appearance using perceptual attributes. In: Computer Graphics Forum. vol. 41, pp. 453-464. Wiley Online Library (2022)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 610, + 481, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 610, + 481, + 643 + ], + "spans": [ + { + "bbox": [ + 138, + 610, + 481, + 643 + ], + "type": "text", + "content": "17. Deschaintre, V., Aittala, M., Durand, F., Drettakis, G., Bousseau, A.: Flexible svbrdf capture with a multi-image deep network. In: Computer graphics forum. vol. 38, pp. 1-13. Wiley Online Library (2019)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "type": "text", + "content": "18. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in neural information processing systems 34, 8780-8794 (2021)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "type": "text", + "content": "19. Fu*, S., Tamir*, N., Sundaram*, S., Chai, L., Zhang, R., Dekel, T., Isola, P.: Dreamsim: Learning new dimensions of human visual similarity using synthetic data. NeurIPS (2023)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 481, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 481, + 183 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 481, + 183 + ], + "type": "text", + "content": "20. Ge, S., Park, T., Zhu, J.Y., Huang, J.B.: Expressive text-to-image generation with rich text. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7545-7556 (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 183, + 481, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 481, + 215 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 481, + 215 + ], + "type": "text", + "content": "21. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139-144 (2020)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 216, + 481, + 247 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 216, + 481, + 247 + ], + "spans": [ + { + "bbox": [ + 132, + 216, + 481, + 247 + ], + "type": "text", + "content": "22. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "type": "text", + "content": "23. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 271, + 481, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 271, + 481, + 303 + ], + "spans": [ + { + "bbox": [ + 132, + 271, + 481, + 303 + ], + "type": "text", + "content": "24. 
Ho, J., Saharia, C., Chan, W., Fleet, D.J., Norouzi, M., Salimans, T.: Cascaded diffusion models for high fidelity image generation. The Journal of Machine Learning Research 23(1), 2249-2281 (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "type": "text", + "content": "25. Ho, J., Salimans, T.: Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598 (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 326, + 481, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 326, + 481, + 357 + ], + "spans": [ + { + "bbox": [ + 132, + 326, + 481, + 357 + ], + "type": "text", + "content": "26. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10124-10134 (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 358, + 481, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 358, + 481, + 390 + ], + "spans": [ + { + "bbox": [ + 132, + 358, + 481, + 390 + ], + "type": "text", + "content": "27. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 391, + 481, + 413 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 391, + 481, + 413 + ], + "spans": [ + { + "bbox": [ + 132, + 391, + 481, + 413 + ], + "type": "text", + "content": "28. Khan, E.A., Reinhard, E., Fleming, R.W., Bülthoff, H.H.: Image-based material editing. ACM Transactions on Graphics (TOG) 25(3), 654-663 (2006)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 414, + 481, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 414, + 481, + 445 + ], + "spans": [ + { + "bbox": [ + 132, + 414, + 481, + 445 + ], + "type": "text", + "content": "29. Kumari, N., Zhang, B., Zhang, R., Shechtman, E., Zhu, J.Y.: Multi-concept customization of text-to-image diffusion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1931-1941 (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 447, + 481, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 447, + 481, + 479 + ], + "spans": [ + { + "bbox": [ + 132, + 447, + 481, + 479 + ], + "type": "text", + "content": "30. Li, Y., Liu, H., Wu, Q., Mu, F., Yang, J., Gao, J., Li, C., Lee, Y.J.: Gligen: Open-set grounded text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 22511-22521 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 479, + 481, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 479, + 481, + 511 + ], + "spans": [ + { + "bbox": [ + 132, + 479, + 481, + 511 + ], + "type": "text", + "content": "31. Liang, Y., Wakaki, R., Nobuhara, S., Nishino, K.: Multimodal material segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 
19800-19808 (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 512, + 481, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 512, + 481, + 533 + ], + "spans": [ + { + "bbox": [ + 132, + 512, + 481, + 533 + ], + "type": "text", + "content": "32. Lopes, I., Pizzati, F., de Charette, R.: Material palette: Extraction of materials from a single image. arXiv preprint arXiv:2311.17060 (2023)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 534, + 481, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 534, + 481, + 566 + ], + "spans": [ + { + "bbox": [ + 132, + 534, + 481, + 566 + ], + "type": "text", + "content": "33. Michel, O., Bhattad, A., VanderBilt, E., Krishna, R., Kembhavi, A., Gupta, T.: Object 3dit: Language-guided 3d-aware image editing. Advances in Neural Information Processing Systems 36 (2024)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 567, + 481, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 567, + 481, + 599 + ], + "spans": [ + { + "bbox": [ + 132, + 567, + 481, + 599 + ], + "type": "text", + "content": "34. Mou, C., Wang, X., Xie, L., Zhang, J., Qi, Z., Shan, Y., Qie, X.: T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453 (2023)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 600, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 600, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 132, + 600, + 481, + 632 + ], + "type": "text", + "content": "35. Pandey, K., Guerrero, P., Gadelha, M., Hold-Geoffroy, Y., Singh, K., Mitra, N.: Diffusion handles: Enabling 3d edits for diffusion models by lifting activations to 3d. arXiv preprint arXiv:2312.02190 (2023)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "type": "text", + "content": "36. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 657 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 160 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 160 + ], + "type": "text", + "content": "37. 
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 161, + 481, + 193 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 161, + 481, + 193 + ], + "spans": [ + { + "bbox": [ + 130, + 161, + 481, + 193 + ], + "type": "text", + "content": "38. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 12179-12188 (2021)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 194, + 481, + 214 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 194, + 481, + 214 + ], + "spans": [ + { + "bbox": [ + 132, + 194, + 481, + 214 + ], + "type": "text", + "content": "39. Richardson, E., Metzer, G., Alaluf, Y., Giryes, R., Cohen-Or, D.: Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721 (2023)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 215, + 481, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 215, + 481, + 246 + ], + "spans": [ + { + "bbox": [ + 132, + 215, + 481, + 246 + ], + "type": "text", + "content": "40. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., Aberman, K.: Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 247, + 481, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 247, + 481, + 278 + ], + "spans": [ + { + "bbox": [ + 132, + 247, + 481, + 278 + ], + "type": "text", + "content": "41. Sharma, P., Jampani, V., Li, Y., Jia, X., Lagun, D., Durand, F., Freeman, W.T., Matthews, M.: Alchemist: Parametric control of material properties with diffusion models. arXiv preprint arXiv:2312.02970 (2023)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 279, + 481, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 279, + 481, + 311 + ], + "spans": [ + { + "bbox": [ + 132, + 279, + 481, + 311 + ], + "type": "text", + "content": "42. Sharma, P., Philip, J., Gharbi, M., Freeman, B., Durand, F., Deschaintre, V.: Materialistic: Selecting similar materials in images. ACM Transactions on Graphics (TOG) 42(4), 1-14 (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 312, + 481, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 312, + 481, + 333 + ], + "spans": [ + { + "bbox": [ + 132, + 312, + 481, + 333 + ], + "type": "text", + "content": "43. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 334, + 481, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 334, + 481, + 365 + ], + "spans": [ + { + "bbox": [ + 132, + 334, + 481, + 365 + ], + "type": "text", + "content": "44. Subias, J.D., Lagunas, M.: In-the-wild material appearance editing using perceptual attributes. In: Computer Graphics Forum. vol. 42, pp. 333-345. 
Wiley Online Library (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 366, + 481, + 397 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 366, + 481, + 397 + ], + "spans": [ + { + "bbox": [ + 132, + 366, + 481, + 397 + ], + "type": "text", + "content": "45. Upchurch, P., Niu, R.: A dense material segmentation dataset for indoor and outdoor scene parsing. In: European Conference on Computer Vision. pp. 450-466. Springer (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 398, + 481, + 419 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 398, + 481, + 419 + ], + "spans": [ + { + "bbox": [ + 132, + 398, + 481, + 419 + ], + "type": "text", + "content": "46. Voynov, A., Chu, Q., Cohen-Or, D., Aberman, K.: " + }, + { + "bbox": [ + 132, + 398, + 481, + 419 + ], + "type": "inline_equation", + "content": "p+" + }, + { + "bbox": [ + 132, + 398, + 481, + 419 + ], + "type": "text", + "content": ": Extended textual conditioning in text-to-image generation. arXiv preprint arXiv:2303.09522 (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 420, + 481, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 420, + 481, + 440 + ], + "spans": [ + { + "bbox": [ + 132, + 420, + 481, + 440 + ], + "type": "text", + "content": "47. Wang, X., Darrell, T., Rambhatla, S.S., Girdhar, R., Misra, I.: Instance-diffusion: Instance-level control for image generation. arXiv preprint arXiv:2402.03290 (2024)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 441, + 481, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 441, + 481, + 483 + ], + "spans": [ + { + "bbox": [ + 132, + 441, + 481, + 483 + ], + "type": "text", + "content": "48. Yang, Z., Wang, J., Gan, Z., Li, L., Lin, K., Wu, C., Duan, N., Liu, Z., Liu, C., Zeng, M., et al.: Reco: Region-controlled text-to-image generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14246-14255 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 484, + 481, + 516 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 484, + 481, + 516 + ], + "spans": [ + { + "bbox": [ + 132, + 484, + 481, + 516 + ], + "type": "text", + "content": "49. Ye, H., Zhang, J., Liu, S., Han, X., Yang, W.: Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721 (2023)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 517, + 481, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 517, + 481, + 559 + ], + "spans": [ + { + "bbox": [ + 132, + 517, + 481, + 559 + ], + "type": "text", + "content": "50. Yeh, Y.Y., Huang, J.B., Kim, C., Xiao, L., Nguyen-Phuoc, T., Khan, N., Zhang, C., Chandraker, M., Marshall, C.S., Dong, Z., et al.: Texturedreamer: Image-guided texture synthesis through geometry-aware diffusion. arXiv preprint arXiv:2401.09416 (2024)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 560, + 481, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 560, + 481, + 592 + ], + "spans": [ + { + "bbox": [ + 132, + 560, + 481, + 592 + ], + "type": "text", + "content": "51. Zhang, L., Rao, A., Agrawala, M.: Adding conditional control to text-to-image diffusion models. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 
3836-3847 (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 593, + 481, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 593, + 481, + 624 + ], + "spans": [ + { + "bbox": [ + 132, + 593, + 481, + 624 + ], + "type": "text", + "content": "52. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 586-595 (2018)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 625, + 481, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 625, + 481, + 657 + ], + "spans": [ + { + "bbox": [ + 132, + 625, + 481, + 657 + ], + "type": "text", + "content": "53. Zhao, S., Chen, D., Chen, Y.C., Bao, J., Hao, S., Yuan, L., Wong, K.Y.K.: Unictrlnet: All-in-one control to text-to-image diffusion models. Advances in Neural Information Processing Systems 36 (2024)" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 424, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeST" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_content_list.json b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..158191eccf0cc4f6b1f306bfc68c533f423e29e4 --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_content_list.json @@ -0,0 +1,1861 @@ +[ + { + "type": "text", + "text": "Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems", + "text_level": 1, + "bbox": [ + 217, + 140, + 785, + 185 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yasar Utku Alçalar and Mehmet Akçakaya", + "bbox": [ + 334, + 212, + 666, + 227 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Minnesota, Minneapolis {alcal029, akcakaya}@umn.edu", + "bbox": [ + 375, + 239, + 625, + 267 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Diffusion models have emerged as powerful generative techniques for solving inverse problems. Despite their success in a variety of inverse problems in imaging, these models require many steps to converge, leading to slow inference time. Recently, there has been a trend in diffusion models for employing sophisticated noise schedules that involve more frequent iterations of timesteps at lower noise levels, thereby improving image generation and convergence speed. 
However, application of these ideas for solving inverse problems with diffusion models remain challenging, as these noise schedules do not perform well when using empirical tuning for the forward model log-likelihood term weights. To tackle these challenges, we propose zero-shot approximate posterior sampling (ZAPS) that leverages connections to zero-shot physics-driven deep learning. ZAPS fixes the number of sampling steps, and uses zero-shot training with a physics-guided loss function to learn log-likelihood weights at each irregular timestep. We apply ZAPS to the recently proposed diffusion posterior sampling method as baseline, though ZAPS can also be used with other posterior sampling diffusion models. We further approximate the Hessian of the logarithm of the prior using a diagonalization approach with learnable diagonal entries for computational efficiency. These parameters are optimized over a fixed number of epochs with a given computational budget. Our results for various noisy inverse problems, including Gaussian and motion deblurring, inpainting, and super-resolution show that ZAPS reduces inference time, provides robustness to irregular noise schedules and improves reconstruction quality. Code is available at https://github.com/ualcalar17/ZAPS.", + "bbox": [ + 261, + 304, + 738, + 650 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Diffusion Models $\\cdot$ Zero-Shot Learning $\\cdot$ Inverse Problems $\\cdot$ Plug-and-Play (PnP) Methods $\\cdot$ Unrolled Networks $\\cdot$ Bayesian Methods", + "bbox": [ + 261, + 662, + 738, + 691 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 717, + 374, + 733 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The forefront of deep generative models is now dominated by diffusion models [16, 28, 30, 32, 34] in the intricate task of image generation [11]. Their capabilities extend across various domains, including computer vision [2], natural language processing [17] and temporal data modeling [1]. Recently, diffusion models also showed great success in solving noiseless [5, 7, 33, 34] and noisy inverse problems [6, 21, 29, 31], owing to their capability to model complicated", + "bbox": [ + 217, + 748, + 785, + 839 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/191e42ff0a9223d261c4890a46d71f7545d81a39d49b0f60235d0989fde8cef7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 143, + 493, + 301 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/8c57d1fb38e0e1e293dca4c86c4217f235f55b044f8ab5ea97bb898d85e1410f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 143, + 784, + 301 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/62b755b46a51233b3243b509c6b05f96d0a66578505214f2a96b90101bfefcdb.jpg", + "image_caption": [ + "Fig. 1: Representative results of our algorithm for four distinct noisy inverse problems $(\\sigma = 0.05)$ , showing the ground truth (GT), measurement and reconstruction." + ], + "image_footnote": [], + "bbox": [ + 217, + 303, + 493, + 452 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/1056d6052d01fdcd1d6b2babeeeb7b74701abf7c789a256e5a1ea8ab184b3cdc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 303, + 782, + 452 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "high-dimensional distributions. 
Linear inverse problems utilize a known forward model given by", + "bbox": [ + 212, + 529, + 784, + 559 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {y} = \\mathbf {A} \\mathbf {x} _ {0} + \\mathbf {n},\n$$\n", + "text_format": "latex", + "bbox": [ + 447, + 561, + 552, + 575 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "and aim to deduce the underlying signal/image $\\mathbf{x}_0\\in \\mathbb{R}^n$ from measurements $\\mathbf{y}\\in \\mathbb{R}^{m}$ , where $\\mathbf{n}\\in \\mathbb{R}^m$ is measurement noise. In practical situations, the forward operator $\\mathbf{A}:\\mathbb{R}^n\\to \\mathbb{R}^m$ is either incomplete or ill-conditioned, necessitating the use of prior information about the signal. Posterior sampling approaches use diffusion models as generative priors and incorporates information from both the data distribution and the forward physics model, allowing for sampling from the posterior distribution $p(\\mathbf{x}|\\mathbf{y})$ using the given measurement $\\mathbf{y}$ [21]. In this context, using Bayes' rule, $p(\\mathbf{x}|\\mathbf{y}) = \\frac{p(\\mathbf{x})p(\\mathbf{y}|\\mathbf{x})}{p(\\mathbf{y})}$ , the problem-specific score is", + "bbox": [ + 212, + 583, + 787, + 710 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 331, + 720, + 785, + 738 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{x})$ is approximated via the learned score model $s_\\theta (\\mathbf{x}_t,t)$ . Many of these strategies utilize a plug-and-play (PnP) approach, using a pre-trained unconditional diffusion model as a prior [4, 9, 13, 18, 24, 37], and integrate the forward model during inference to address various inverse problem tasks.", + "bbox": [ + 212, + 750, + 784, + 809 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The complexity for these approaches arises in obtaining the latter forward model log-likelihood term in Eq. (1), which guides the diffusion to a target", + "bbox": [ + 212, + 810, + 785, + 839 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 483, + 128 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "class [11, 28]. While exact calculation is intractable, several approaches have been proposed to approximate this term. Among these, RED-diff [25] employs a variational sampler that uses a combination of measurement consistency loss and score matching regularization. Another technique, DSG [43], uses a spherical Gaussian constraint for denoising steps, allowing for larger step sizes. A class of methods utilize projections onto the convex measurement subspace after the unconditional update through score model [5, 8, 34]. Although these projections improve consistency between measurements and the sample, they are noted to lead to artifacts, such as boundary effects [7]. Thus, more recent approaches aimed to approximate the log-likelihood term in Eq. (1) different ways. 
Noting", + "bbox": [ + 212, + 146, + 787, + 297 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\np _ {t} (\\mathbf {y} \\mid \\mathbf {x} _ {t}) = \\int_ {\\mathbf {x} _ {0}} p \\left(\\mathbf {x} _ {0} \\mid \\mathbf {x} _ {t}\\right) p \\left(\\mathbf {y} \\mid \\mathbf {x} _ {0}\\right) d \\mathbf {x} _ {0}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 375, + 306, + 785, + 338 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "DPS [6] uses the posterior mean $\\hat{\\mathbf{x}}_0 = \\hat{\\mathbf{x}}_0(\\mathbf{x}_t) \\triangleq \\mathbb{E}[\\mathbf{x}_0|\\mathbf{x}_t] = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[\\mathbf{x}_0]$ , to approximate $p(\\mathbf{y}|\\mathbf{x}_t) = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[p(\\mathbf{y}|\\mathbf{x}_0)]$ as", + "bbox": [ + 214, + 347, + 785, + 380 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\np (\\mathbf {y} | \\mathbf {x} _ {t}) = \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ p (\\mathbf {y} | \\mathbf {x} _ {0}) ] \\simeq p \\Big (\\mathbf {y} | \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ \\mathbf {x} _ {0} ] \\Big) = p (\\mathbf {y} | \\hat {\\mathbf {x}} _ {0}).\n$$\n", + "text_format": "latex", + "bbox": [ + 264, + 388, + 740, + 414 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Another technique, IIGDM [31] approximates Eq. (2) as a Gaussian centered around $\\mathbf{A}\\hat{\\mathbf{x}}_0$", + "bbox": [ + 214, + 420, + 784, + 450 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\int_ {\\mathbf {x} _ {0}} p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t}) p (\\mathbf {y} | \\mathbf {x} _ {0}) \\mathbf {d} \\mathbf {x} _ {0} \\simeq \\mathcal {N} (\\mathbf {A} \\hat {\\mathbf {x}} _ {0}, r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}), \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 321, + 458, + 785, + 489 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "and uses it for guidance. In these works, log-likelihood weights (or gradient step sizes), $\\{\\zeta_t\\}$ are introduced to further control the reconstruction as", + "bbox": [ + 214, + 497, + 785, + 527 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}). \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 330, + 537, + 785, + 554 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While DPS demonstrates high performance in various inverse problem tasks, it suffers from the drawback of requiring a large number of sampling steps, resulting in prolonged reconstruction time. IIGDM accelerates this process by adopting regular (linear) jumps approach across the schedule. However, utilizing more complicated schedules, where the jumps are irregular introduces a challenge, as it requires distinct log-likelihood weights, $\\zeta_t$ , for each timestep. Heuristic adjustment of these weights is difficult and frequently leads to undesirable outcomes. In this work, by taking an inspiration from zero-shot/test-time self-supervised models [35,42] we propose to learn the log-likelihood weights for a fixed number of sampling steps and fine-tune them over a few epochs. 
It is crucial to note that fine-tuning DPS (or IIGDM) entails saving computational graphs for each unroll, leading to memory issues and slow backpropagation. Thus, we also propose to approximate the Hessian of the data probability using a wavelet-based diagonalization strategy [12], and learn these diagonal values for each timestep as well. Fig. 1 shows representative results for our method. Our key contributions include:", + "bbox": [ + 212, + 561, + 787, + 787 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We introduce zero-shot approximate posterior sampling (ZAPS), leveraging zero-shot learning for dynamic automated hyperparameter tuning in the inference phase to improve solution of noisy inverse problems via diffusion", + "bbox": [ + 225, + 794, + 787, + 840 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 130 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "models. This method fortifies the robustness of the sampling process, attaining a state-of-the-art performance [6, 21, 31] in sampling outcomes. To the best of our knowledge, our method is the first attempt to learn the log-likelihood weights for solving inverse problems via diffusion models by using a measurement-consistent loss when the sampling noise schedule consists of irregular jumps across timesteps.", + "bbox": [ + 240, + 146, + 787, + 236 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We provide a well-designed approximation for the Hessian of the logarithm of the prior, enabling a computationally efficient and trainable posterior computation.", + "- We showcase the efficacy of incorporating a learnable log-likelihood weights for each diffusion step during the reverse diffusion process through both quantitative and qualitative assessments on FFHQ and ImageNet datasets. Our approach not only outperforms state-of-the-art, but it also substantially reduces the required number of sampling steps from 1000 to $\\sim 20$ -to-30, facilitating convergence with fewer total neural function evaluations (NFEs)." + ], + "bbox": [ + 225, + 237, + 784, + 371 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2 Related Works", + "text_level": 1, + "bbox": [ + 215, + 393, + 395, + 410 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Diffusion Models. During training, diffusion models [16, 34] add Gaussian noise to an image with a fixed increasing variance schedule, e.g. linear or exponential, $\\beta_{1},\\beta_{2},\\dots,\\beta_{T}$ until pure noise is obtained, and learns a reverse diffusion process, where a neural network is trained to gradually remove noise and reconstruct the original image. Let $\\mathbf{x}_0\\sim p_{\\mathrm{data}}(x)$ be samples from the data distribution, and $\\mathbf{x}_{\\{1:T\\}}\\in \\mathbb{R}^d$ be noisy latent variables. By taking $\\alpha_{t} = 1 - \\beta_{t}$ and $\\bar{\\alpha}_{t} = \\prod_{s = 1}^{t}\\alpha_{s}$ , the Markovian forward process can be written as", + "bbox": [ + 214, + 425, + 782, + 534 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nq \\left(\\mathbf {x} _ {t} \\mid \\mathbf {x} _ {0}\\right) = \\mathcal {N} \\left(\\mathbf {x} _ {t} \\mid \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0}, (1 - \\bar {\\alpha} _ {t}) \\mathbf {I}\\right). 
\\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 375, + 545, + 784, + 560 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "By using the reparameterization trick and Eq. (5), $\\mathbf{x}_t$ can be sampled as", + "bbox": [ + 215, + 571, + 736, + 585 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right) = \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0} + \\sqrt {1 - \\bar {\\alpha} _ {t}} \\epsilon \\quad \\text {w h e r e} \\quad \\epsilon \\sim \\mathcal {N} (\\epsilon ; 0, \\mathbf {I}). \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 305, + 597, + 784, + 613 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Consequently, denoising diffusion probabilistic models (DDPMs) [16] learns the reverse process by minimizing a lower bound on the log prior via:", + "bbox": [ + 214, + 623, + 782, + 654 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nL _ {t} (\\theta) = \\mathbb {E} _ {t, \\mathbf {x} _ {0}, \\epsilon} \\| \\epsilon - \\epsilon_ {\\theta} \\left(\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right), t\\right) \\| _ {2} ^ {2}. \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 367, + 664, + 784, + 681 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Furthermore, it can be shown that epsilon matching in Eq. (7) is analogous to the denoising score matching (DSM) [32,39] objective up to a constant:", + "bbox": [ + 214, + 691, + 782, + 722 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {\\theta} \\mathbb {E} _ {\\mathbf {x} _ {t}, \\mathbf {x} _ {0}, \\epsilon} \\| \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\nabla_ {\\mathbf {x} _ {t}} \\log q (\\mathbf {x} _ {t} | \\mathbf {x} _ {0}) \\| _ {2} ^ {2}, \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 349, + 733, + 784, + 753 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "in which $\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t) = -\\frac{\\epsilon_{\\theta}(\\mathbf{x}_t,t)}{\\sqrt{1 - \\bar{\\alpha}_t}}$ . Using Tweedie's formula and Eq. (6), posterior mean for $p(\\mathbf{x}_0|\\mathbf{x}_t)$ can be found as:", + "bbox": [ + 214, + 767, + 782, + 801 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\mathbf {x}} _ {0} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {x} _ {t} + (1 - \\bar {\\alpha} _ {t}) \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right). \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 377, + 813, + 782, + 842 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. 
Akçakaya", + "bbox": [ + 271, + 114, + 483, + 128 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Sampling $\\mathbf{x}_{t + 1}$ from $p(\\mathbf{x}_{t + 1}|\\mathbf{x}_t)$ can be done using ancestral sampling by iteratively computing:", + "bbox": [ + 214, + 146, + 782, + 176 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t - 1} = \\frac {1}{\\sqrt {\\alpha_ {t}}} \\left(\\mathbf {x} _ {t - 1} - \\frac {1 - \\alpha_ {t}}{\\sqrt {1 - \\bar {\\alpha} _ {t}}} \\boldsymbol {\\epsilon} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right) + \\sigma_ {t} \\mathbf {z}, \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 334, + 186, + 785, + 218 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{z} \\sim \\mathcal{N}(0, \\mathbf{I})$ and $\\sigma_t^2 = \\tilde{\\beta}_t = \\frac{1 - \\bar{\\alpha}_{t-1}}{1 - \\bar{\\alpha}_t} \\beta_t$ . It is also worth noting that the DDPM is equivalent to the variance preserving stochastic differential equations (VP-SDEs) [34].", + "bbox": [ + 214, + 227, + 782, + 275 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Solving Inverse Problems via Diffusion Models. When solving inverse problems via diffusion models, the main challenge is to find an approximation to the log-likelihood term, $\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{y}|\\mathbf{x})$ , as discussed earlier. One recent method, denoising diffusion restoration models (DDRM) [21], utilizes a spectral domain approach, allowing the incorporation of noise from the measurement domain into the spectral domain through singular value decomposition (SVD). However, the application of SVD is computationally expensive [6]. Manifold Constrained Gradient (MCG) [7] method applies projections after the MCG correction as:", + "bbox": [ + 214, + 294, + 784, + 415 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t - 1} ^ {\\prime} = f (\\mathbf {x} _ {t}, \\mathbf {s} _ {\\theta}) - \\zeta \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {K} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\| _ {2} ^ {2} + g (\\mathbf {x} _ {t}) \\mathbf {z}, \\quad \\mathbf {z} \\sim \\mathcal {N} (0, \\mathbf {I}), \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 258, + 422, + 784, + 443 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t - 1} = \\mathbf {H} \\mathbf {x} _ {t - 1} + \\mathbf {b}, \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 256, + 445, + 784, + 460 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\zeta$ and $\\mathbf{H}$ are dependent on noise covariance. MCG update of Eq. (11) projects estimates onto the measurement subspace, thus they may fall off from the data manifold [6]. Hence, DPS proposes to update without projections as:", + "bbox": [ + 214, + 469, + 782, + 515 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} - \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2}, \\tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 383, + 525, + 784, + 542 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Note Eq. (13) is equivalent to Eq. 
(11) when $\\mathbf{K} = \\mathbf{I}$ , and it reduces to the following when the forward operator is linear:", + "bbox": [ + 214, + 551, + 782, + 580 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} + \\zeta_ {t} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 375, + 589, + 784, + 619 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "IIGDM [31], on the other hand, utilizes a Gaussian centered around $\\hat{\\mathbf{x}}_0$ that is defined in Eq. (9) to obtain the following score approximation:", + "bbox": [ + 214, + 627, + 782, + 657 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} \\left(r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}\\right) ^ {- 1} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). \\tag {15}\n$$\n", + "text_format": "latex", + "bbox": [ + 303, + 667, + 784, + 696 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In cases where there is no measurement noise $(\\sigma_y = 0)$ , Eq. (15) simplifies to:", + "bbox": [ + 214, + 705, + 771, + 722 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq r _ {t} ^ {- 2} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\dagger} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {16}\n$$\n", + "text_format": "latex", + "bbox": [ + 359, + 729, + 784, + 760 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{A}^{\\dagger}$ denotes the Moore-Penrose pseudoinverse of $\\mathbf{A}$ . We note that using Woodbury matrix identity (derived in SuppMat), one can simplify Eq. (15) to:", + "bbox": [ + 214, + 768, + 782, + 800 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\left(\\mathbf {A} ^ {\\top} \\mathbf {A} + \\eta \\mathbf {I}\\right) ^ {- 1} \\mathbf {A} ^ {\\top} \\left(\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}\\right), \\quad \\text {w h e r e} \\eta = \\frac {\\sigma_ {y} ^ {2}}{r _ {t} ^ {2}}. \\tag {17}\n$$\n", + "text_format": "latex", + "bbox": [ + 238, + 809, + 782, + 842 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "From Eq. (17), the similarity between DPS and IIGDM updates can be seen, with $(\\mathbf{A}^{\\top}\\mathbf{A} + \\eta \\mathbf{I})^{-1}$ term being the difference. Note the DPS update in Eq. 
(13) works with non-linear operators, while IIGDM's update does not rely on the differentiability of the forward operator, as long as a pseudo-inverse-like operation can be derived.", + "bbox": [ + 212, + 146, + 787, + 220 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Improved Irregular Noise Schedules for Image Generation. Diffusion models typically utilize well-defined fixed noise schedules, with examples including linear or exponential ones. Lately, more sophisticated methods have been developed that sweep across these schedules and take samples in irregular timesteps [11,19] for unconditional image generation. The idea behind this strategy hinges on more frequent sampling for lower noise levels, making it possible to use considerably less number of sampling steps.", + "bbox": [ + 212, + 244, + 782, + 349 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Most of the aforementioned studies that solve inverse problems via diffusion models used the same number of steps that the unconditional diffusion model was trained for [6,7,34]. Nonetheless, there has been a notable trend favoring shorter schedules characterized by linear jumps for inverse problems, where the log-likelihood weights were hand-tuned by trial-and-error [25,31] when using reduced number of steps. While these approaches have proven effective, they still require a large number of sampling steps or heuristic tuning of the log-likelihood weights, $\\{\\zeta_t\\}$ in Eq. (4) to achieve good performance. The former issue leads to lengthy and potentially impractical computational times, while the latter issue results in generalizability difficulties for adoption at different measurement noise levels and variations in the measurement operators. Furthermore, the irregular jump strategy that has been powerful for image generation has not garnered significant attention for inverse problems, mainly due to the impracticality of empirically tuning the log-likelihood weights. Thus, a method that automatically selects and adjusts log-likelihood weights based on the provided measurements for arbitrary noise schedules, instead of requiring manual tuning, holds significant potential for improving robustness and image quality.", + "bbox": [ + 212, + 351, + 787, + 608 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3 Methodology", + "text_level": 1, + "bbox": [ + 215, + 631, + 380, + 648 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.1 Zero-shot Fine Tuning of Log-Likelihood Weights", + "text_level": 1, + "bbox": [ + 214, + 662, + 668, + 679 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this work, we propose a robust automated approach for setting the log-likelihood weights at each timestep for arbitrary noise sampling schedules to improve posterior sampling with the given measurements during inference. This allows for a stable reconstruction for different sweeps across noise schedules. Furthermore, the weights themselves are image-specific, which improves the performance compared to the former approaches. For estimating the likelihood in Eq. 
(1), we use the update in DPS [6]:", + "bbox": [ + 212, + 689, + 787, + 796 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2} = - \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}), \\tag {18}\n$$\n", + "text_format": "latex", + "bbox": [ + 295, + 806, + 785, + 837 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 483, + 128 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/242ba34d8be7399d5f13e12aca23330871721cfa86e1c2fb615b139b45b810be.jpg", + "image_caption": [ + "Fig. 2: Our zero-shot approximate posterior sampling (ZAPS) approach unrolls the sampling process for a fixed number of $S$ steps for arbitrary/irregular noise schedules, alternating between score model sampling (SMS) and likelihood guidance (LG). Our zero-shot fine-tuning approach has two key components: 1) The Hessian of the log prior is approximated using a discrete wavelet transform diagonalization technique, 2) Both the diagonal matrices, $\\{\\mathbf{D}_t\\}$ and the log-likelihood weights, $\\{\\zeta_t\\}$ are updated during fine-tuning. The fine-tuning is done for a fixed number of epochs with a given NFE budget, yielding a faster and more robust adaptive inverse problem solver." + ], + "image_footnote": [], + "bbox": [ + 233, + 142, + 772, + 349 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "although as noted before, the IIGDM [31] update in Eq. (17) is also similar. Thus we emphasize that while we chose DPS as baseline for its versatility in inverse problems, our ZAPS strategy is applicable to other diffusion models for inverse problems. Recalling the definition of $\\hat{\\mathbf{x}}_0$ in Eq. (9), we note", + "bbox": [ + 214, + 500, + 787, + 560 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right). \\tag {19}\n$$\n", + "text_format": "latex", + "bbox": [ + 359, + 571, + 785, + 604 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Thus, ignoring the calculation and storage of the matrix $\\frac{\\partial\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)}{\\partial\\mathbf{x}_t}$ for now, one needs to fine tune the log-likelihood weights $\\{\\zeta_t\\}$ in", + "bbox": [ + 214, + 616, + 787, + 650 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right) \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). 
\\qquad (2 0)\n$$\n", + "text_format": "latex", + "bbox": [ + 277, + 659, + 785, + 694 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This is done based on the concept of algorithm unrolling [14, 15, 22] in physics-driven deep learning by fixing the number of sampling steps $T$ . Then the whole posterior sampling process is described as alternating between DDPM sampling using the pre-trained unconditional score model, followed by the log-likelihood term guidance in Eq. (20) for $T$ steps. This \"unrolled\" network is fine-tuned end-to-end, where the only updates are made to $\\{\\zeta_t\\}$ and no fine-tuning is performed on the unconditional score function, $\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)$ . This also alleviates the need for backpropagation across the score function network, leading to further savings in computational time. The fine-tuning is performed using a physics-inspired loss", + "bbox": [ + 214, + 703, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 128 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 784, + 126 + ], + "page_idx": 6 + }, + { + "type": "code", + "sub_type": "algorithm", + "code_caption": [ + "Algorithm 1 ZAPS: Zero-Shot Approximate Posterior Sampling" + ], + "code_body": "Require: $T,\\mathbf{y},\\{\\tilde{\\sigma}_i\\}_{i = 1}^T$ orthogonal DWT (W) \n1: $\\mathbf{x}_T\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})$ \n2: $\\tau \\subset [1,\\dots,T]$ extending over a length of $S < T$ \n3: for epoch in range(epochs) do \n4: for $i = S,\\ldots ,1$ do \n5: $\\hat{\\mathbf{s}}\\gets \\mathbf{s}_{\\theta}(\\mathbf{x}_{\\tau_i},\\tau_i)$ ▷ Score computation \n6: $\\hat{\\mathbf{x}}_0\\leftarrow \\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}} (\\mathbf{x}_{\\tau_i} + (1 - \\bar{\\alpha}_{\\tau_i})\\hat{\\mathbf{s}})$ Tweedie denoising \n7: $\\mathbf{z}\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})$ if $\\tau_{i} > 1$ , else $\\mathbf{z} = \\mathbf{0}$ \n8: $\\mathbf{x}_{\\tau_i - 1}'\\gets \\frac{\\sqrt{\\alpha_{\\tau_i}}(1 - \\bar{\\alpha}_{\\tau_i - 1})}{1 - \\bar{\\alpha}_{\\tau_i}}\\mathbf{x}_{\\tau_i} + \\frac{\\sqrt{\\bar{\\alpha}_{\\tau_i - 1}}\\beta_{\\tau_i}}{1 - \\bar{\\alpha}_{\\tau_i}}\\hat{\\mathbf{x}}_0 + \\tilde{\\sigma}_{\\tau_i}\\mathbf{z}$ \n9: $\\mathbf{x}_{\\tau_{i - 1}}\\gets \\mathbf{x}_{\\tau_{i - 1}}' + \\zeta_{\\tau_i}\\left(\\left(\\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}}\\Bigl {(}\\mathbf{I} + (1 - \\bar{\\alpha}_{\\tau_i})\\mathbf{WD}_{\\tau_i}\\mathbf{W}^\\top \\Bigr)\\right)\\cdot \\mathbf{A}^\\top (\\mathbf{y} - \\mathbf{A}\\hat{\\mathbf{x}}_0)\\right)$ \n10: end for \n11: Update network parameters $\\{\\zeta_t\\}$ and $\\{\\mathbf{D}_t\\}$ \n12: end for \n13: return ${\\bf x}_0$", + "bbox": [ + 215, + 164, + 784, + 412 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "function that evaluates the consistency of the final estimate and the measurements: $\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2$ . High-level explanation for our algorithm is given in Fig. 2.", + "bbox": [ + 212, + 441, + 787, + 488 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.2 Approximation for the Hessian of the Log Prior", + "text_level": 1, + "bbox": [ + 214, + 511, + 656, + 527 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Implementing the zero-shot update for Eq. 
(20) poses various challenges, since backpropagation through the unrolled network to update all $\\{\\zeta_t\\}$ requires another backpropagation through the Jacobian of the score function at each time step. This can only be done by retaining the computational graphs that are created when calculating the Jacobian term in Eq. (20), which quickly explodes memory requirements, especially when the number of sampling steps increases. Also, backpropagating through multiple graphs at the end to only update the log-likelihood weights is time-inefficient and causes prolonged sampling times. Hence, we propose to approximate the Jacobian using inspirations from wavelet-based signal processing techniques and propose to learn this approximation to improve the overall outcome. Noting that $\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)$ in Eq. (19) is an approximation of the log-gradient of the true prior $p(\\mathbf{x})$ , we have", + "bbox": [ + 212, + 537, + 787, + 720 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + \\left(1 - \\bar {\\alpha} _ {t}\\right) \\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}}\\right). \\tag {21}\n$$\n", + "text_format": "latex", + "bbox": [ + 349, + 729, + 785, + 766 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In order to make a backpropagation to update these weights, one needs to calculate the Hessian matrix, $\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}$ given in Eq. (21). This matrix is the negative of the observed Fisher information matrix, whose expected value is the Fisher information matrix. It is also known that in the limit, it approximates", + "bbox": [ + 212, + 773, + 787, + 842 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 483, + 128 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "the inverse covariance matrix of the maximum likelihood estimator. Furthermore, under mild assumptions about continuity of the prior, the observed Fisher information matrix is symmetric. Thus, an appropriate decorrelating unitary matrix can be used to diagonalize it. While finding the desired unitary matrix is equally time-consuming as calculating this Hessian, several pre-determined unitary transforms have been proposed for decorrelation in the signal processing community for different applications [12, 27, 36]. Of particular note is the use of unitary wavelet transforms for Wiener filtering [12], where these transforms were utilized for their tendency to decorrelate data, i.e. approximate the Karhunen-Loeve transform [27]. 
In this work, we also use these decorrelating properties to approximately diagonalize the Hessian of the log prior, $\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}$ using fixed orthogonal discrete wavelet transforms (DWT):", + "bbox": [ + 212, + 146, + 787, + 334 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}} \\simeq \\mathbf {W D} _ {t} \\mathbf {W} ^ {\\top}, \\tag {22}\n$$\n", + "text_format": "latex", + "bbox": [ + 408, + 345, + 785, + 380 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "where $\\mathbf{W}$ is an orthogonal DWT. By making this approximation, backpropagation through the score model can also be avoided, and only the diagonal values in distinct $\\{\\mathbf{D}_t\\}$ matrices needs to be learned. Our final algorithm to sample from pure noise with fine-tuning is given in Algorithm 1.", + "bbox": [ + 212, + 388, + 787, + 450 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4 Evaluation", + "text_level": 1, + "bbox": [ + 214, + 474, + 356, + 489 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Experimental Setup and Implementation Details", + "text_level": 1, + "bbox": [ + 214, + 507, + 663, + 523 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We comprehensively evaluated our method, examining its performance through both qualitative and quantitative analyses using FFHQ [20] and ImageNet [10] datasets with size $256 \\times 256 \\times 3$ . Pre-trained unconditional diffusion models trained on FFHQ and ImageNet were taken from [5] and [11] respectively, and used without retraining. For our experiments, we sampled 1000 images from FFHQ and ImageNet validation sets. All images underwent pre-processing to be normalized in the range [0, 1]. During all the evaluations, a Gaussian measurement noise with $\\sigma = 0.05$ was used. For the orthogonal DWT, Daubechies 4 wavelet was utilized. For our quantitative evaluations, we employed 30 sampling steps with a schedule of \"15,10,5\", and 10 epochs for fine-tuning, resulting in a total of 300 NFEs. As noted in [11], superior schedules may exist but it requires substantial computational time to try out all possible schedules. Thus, we opted a schedule that is simple, and samples more frequently at the lower noise levels [11]. More details about the network architectures and hyperparameter choices are given in SuppMat.", + "bbox": [ + 212, + 532, + 787, + 760 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.2 Experiments on Linear Inverse Problems", + "text_level": 1, + "bbox": [ + 214, + 782, + 602, + 799 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Problem Setup. We focused on the following linear inverse problems: (1) Gaussian deblurring, (2) inpainting, (3) motion deblurring, (4) super-resolution. For", + "bbox": [ + 212, + 809, + 785, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 130 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/68ac876e9b20b87d5143cb34697d514dd24508f841e050a6921908eb902ce19e.jpg", + "image_caption": [ + "Fig. 3: Representative images using various methods for solving Gaussian deblurring, motion deblurring and super-resolution $(\\times 4)$ tasks. 
Proposed method qualitatively improves upon each method, including the baseline state-of-the-art DPS." + ], + "image_footnote": [], + "bbox": [ + 217, + 143, + 527, + 344 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/b8884b8c36364b51787cff387317de22dfd9fb090561818691373d982391b917.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 531, + 143, + 784, + 345 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Gaussian deblurring, we considered a kernel of size $61 \\times 61$ with a standard deviation $\\sigma = 3.0$ . For inpainting, we considered two different scenarios wherein we randomly masked out $70\\%$ and a $128 \\times 128$ box region of the image, applied uniformly across all three channels. For motion blur, we generated the blur kernel via the code1, with $61 \\times 61$ kernel size and 0.5 intensity, as in [6]. Finally, for super-resolution, we considered bicubic downsampling. All measurements are obtained through applying the forward model to the ground truth image.", + "bbox": [ + 212, + 422, + 787, + 530 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Comparison Methods. We compared our method with score-SDE [5, 8, 34], manifold constrained gradients (MCG) [7], denoising diffusion restoration models (DDRM) [21], diffusion posterior sampling (DPS) [6] and pseudo-inverse guided diffusion models (IIGDM) [31]. We note that our implementation of score-SDE follows the same strategy as presented in [6]. We referred to the methods that iteratively applied projections onto convex sets (POCS) as score-SDE. Additional comparisons to DDNM [40] and DiffPIR [44] are also provided in SuppMat. All methods were implemented using their respective public repositories.", + "bbox": [ + 212, + 544, + 787, + 666 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Quantitative and Qualitative Results. We evaluated our method quantitatively using learned perceptual image patch similarity (LPIPS) distance, structural similarity index (SSIM), and peak signal-to-noise-ratio (PSNR). Representative results in Fig. 3 show that DDRM yields blurry results in Gaussian deblurring task. DPS improves sharpness across these distinct inverse problem tasks, while ZAPS yields comparable sharpness while exhibiting a higher similarity to the ground truth, all within a third of the total NFEs.", + "bbox": [ + 212, + 680, + 787, + 785 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Representative inpainting results in Fig. 4 show that ZAPS substantially improves upon DDRM, a method that uses a slightly lower 20 timesteps, and", + "bbox": [ + 212, + 786, + 787, + 816 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 482, + 128 + ], + "page_idx": 9 + }, + { + "type": "page_footnote", + "text": "1 https://github.com/LeviBorodenko/motionblur", + "bbox": [ + 217, + 823, + 566, + 840 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/2835cfcdf4396fed5657ea1133a42e1902697ccda5b423764b6d789429bc3f45.jpg", + "image_caption": [ + "Fig. 4: Illustrative images using state-of-the-art methods for random (70%) and box $(128 \\times 128)$ inpainting. Proposed method improves upon DDRM, while achieving similar performance to IIGDM and DPS, with subtle improvements shown in zoomed insets." 
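Because the wavelet-diagonal surrogate in Eq. (22) is what lets the per-step weights be trained without backpropagating through the score network, a minimal PyTorch sketch of that guidance step (Algorithm 1, line 9) may help. This is an illustrative sketch, not the authors' released code: the operator handles `A`/`A_T`, the single-level Haar transform (standing in for the Daubechies 4 DWT used in the paper), and all names and tensor shapes are assumptions.

```python
import torch

def haar2d(x):
    # Single-level orthonormal 2D Haar DWT of an NCHW tensor (H, W even);
    # the four sub-bands are stacked along the channel dimension.
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    return torch.cat([(a + b + c + d) / 2, (a - b + c - d) / 2,
                      (a + b - c - d) / 2, (a - b - c + d) / 2], dim=1)

def ihaar2d(z):
    # Inverse of haar2d; the transform is orthogonal, so W^{-1} = W^T.
    ll, lh, hl, hh = z.chunk(4, dim=1)
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl - hh) / 2
    n, ch, h, w = ll.shape
    x = z.new_zeros(n, ch, 2 * h, 2 * w)
    x[..., 0::2, 0::2], x[..., 0::2, 1::2] = a, b
    x[..., 1::2, 0::2], x[..., 1::2, 1::2] = c, d
    return x

def zaps_likelihood_guidance(x0_hat, y, A, A_T, zeta_t, d_t, alpha_bar_t):
    # zeta_t / sqrt(abar_t) * (I + (1 - abar_t) W D_t W^T) A^T (y - A x0_hat),
    # i.e. the guidance term of Algorithm 1, line 9. zeta_t (a scalar) and d_t
    # (one entry per wavelet coefficient) are the learnable ZAPS parameters.
    r = A_T(y - A(x0_hat))
    corr = ihaar2d(d_t * haar2d(r))  # (W D_t W^T) r with diagonal D_t
    return zeta_t * (r + (1 - alpha_bar_t) * corr) / (alpha_bar_t ** 0.5)
```

The returned correction is added to the unconditional DDPM estimate, and because the surrogate avoids differentiating through s_θ, only ζ_t and the diagonal entries of D_t carry gradients during the zero-shot fine-tuning epochs.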
+ ], + "image_footnote": [], + "bbox": [ + 217, + 143, + 782, + 334 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "achieves better similarity to the ground truth and sharpness compared to DPS, which uses almost $33 \\times$ more steps. Similarly, when compared with IIIGDM, it is evident that our method gives comparable results even though $3 - 4 \\times$ fewer number of steps are used. The zoomed insets highlight subtle improvements afforded by our method compared to state-of-the-art DPS and IIIGDM, as seen around the eyes.", + "bbox": [ + 212, + 421, + 782, + 512 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Tab. 1 and Tab. 2 show the three quantitative metrics for all methods, while Tab. 3 illustrates their computational complexity. ZAPS outperforms Score-SDE, MCG, and our baseline state-of-the-art comparison, DPS, in computational complexity and quantitative performance, yielding faster and improved reconstructions. Although DDRM and IIGDM surpass ZAPS in terms of computational complexity, ZAPS outperforms both methods quantitatively in terms of all three metrics. Furthermore, IIGDM could not be implemented reliably for several lin", + "bbox": [ + 212, + 513, + 784, + 619 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/f31af8d19dc5db6520edaa7f31d87a85029dc25cce7f9f76c8441e86dc8f9c33.jpg", + "table_caption": [ + "Table 1: Quantitative results for Gaussian deblurring and random inpainting (70%) on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + ], + "table_footnote": [], + "table_body": "
Method | Gaussian Deblurring | Random Inpainting
LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑
DPS [6] | 0.128 | 0.718 | 25.20 | 0.104 | 0.811 | 28.03
MCG [7] | 0.558 | 0.509 | 15.12 | 0.145 | 0.754 | 25.33
IIGDM [31] | - | - | - | 0.086 | 0.842 | 26.62
DDRM [21] | 0.183 | 0.702 | 24.42 | 0.198 | 0.741 | 25.17
Score-SDE [5,8,34] | 0.571 | 0.496 | 15.17 | 0.224 | 0.718 | 24.44
ZAPS (Ours) | 0.121 | 0.757 | 26.06 | 0.078 | 0.813 | 27.79
", + "bbox": [ + 215, + 696, + 785, + 838 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/aec9a81b039140e37ffe8bfdda7ee69b1a75768ae66e80ffdfb4404b823626ab.jpg", + "table_caption": [ + "Table 2: Quantitative results for motion deblurring and super-resolution $(\\times 4)$ on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + ], + "table_footnote": [], + "table_body": "
Method | Motion Deblurring | Super-Resolution (×4)
LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑
DPS [6] | 0.143 | 0.704 | 24.03 | 0.168 | 0.719 | 23.86
MCG [7] | 0.565 | 0.497 | 15.10 | 0.229 | 0.623 | 20.74
IIGDM [31] | - | - | - | 0.131 | 0.760 | 24.48
DDRM [21] | - | - | - | 0.175 | 0.711 | 24.55
Score-SDE [5,8,34] | 0.546 | 0.488 | 15.02 | 0.257 | 0.609 | 19.13
ZAPS (Ours) | 0.141 | 0.709 | 24.16 | 0.104 | 0.768 | 26.63
", + "bbox": [ + 217, + 191, + 785, + 333 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "ear inverse problems related to deblurring. We also note that the parameters in ZAPS are adaptive, meaning one can reach the same computational complexity by adjusting total epochs or steps, in trade-off for a slight decrease in performance, as studied in Sec. 4.3.", + "bbox": [ + 215, + 359, + 785, + 419 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "4.3 Ablation Studies", + "text_level": 1, + "bbox": [ + 217, + 440, + 400, + 454 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We conducted three distinct ablation studies to investigate critical aspects of our algorithm's performance. The first ablation study compared combinations of different timesteps and epochs with a fixed NFE budget, providing a nuanced exploration into the influence of specific combinations on the model's behavior. Specifically, we explored the reconstruction capabilities of the model qualitatively and quantitatively by varying the length of model timesteps, $S \\in \\{20, 30, 60\\}$ . For a fixed NFE budget of 300, these corresponded to 15, 10 and 5 epochs for zero-shot fine-tuning respectively. Fig. 5a shows the final estimates, while Fig. 5b and Fig. 5c depict the corresponding loss and PSNR curves for each combination (Further quantitative results are in SuppMat). Notably, all the estimates are similar, though sharpness improves slightly as $S$ increases. However, the trade-off for choosing a high $S$ is the low number of epochs. Especially for cases, where the measurement system or noise level changes, this makes fine-tuning susceptible to initialization of the hyperparameters as it is more difficult to converge to a good solution in $\\sim 5$ epochs. Thus, for improved generalizability and robustness, we opted to use $S = 30$ and 10 epochs for our database testing.", + "bbox": [ + 215, + 464, + 785, + 705 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Our second ablation study analyzed the performance of ZAPS with respect to other state-of-the-art methods when all methods used the same NFE. We", + "bbox": [ + 215, + 705, + 785, + 734 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/68517ebd14164e117a69f3161cb830c9ee3aaf03bafeb0a3822b8e527de5f9c4.jpg", + "table_caption": [ + "Table 3: Computational costs of methods in terms of NFEs and wall-clock time (WCT)" + ], + "table_footnote": [], + "table_body": "
DPS [6] | MCG [7] | IIGDM [31] | DDRM [21] | Score-SDE [34] | ZAPS
Total NFEs | 1000 | 1000 | 100 | 20 | 1000 | 300
WCT (s) | 47.25 | 48.83 | 4.53 | 2.12 | 23.47 | 14.71
", + "bbox": [ + 222, + 777, + 781, + 838 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 482, + 128 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/f64c83c40a88994e2dfaca8aa2cf03c018453506feb33fdb601dd36a44708b88.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 240, + 146, + 346, + 238 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/34247b72ece3fd158f11739e848980b0f8f1537557345aff54081f9874cd768f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 351, + 146, + 454, + 238 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/b5ec4d89e67965124a53b67f88b3f1dcf59387944f65e4d7ef87eb1ce5daac42.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 457, + 146, + 557, + 238 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/50569c3f807ccef899a707036a78ec87ac4d452541ecc509f33c5e90d7a63822.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 560, + 146, + 658, + 238 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/599f2ad08f66414ab14bbd05dc763ce73f2e572715b18cd27c6c97d2b6c2e7a4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 660, + 146, + 761, + 238 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/4a77e8cb353e83e072792abd2c448dd80a1b0c817e8cbb7c219f9385ea928ca7.jpg", + "image_caption": [ + "(a) Re constructions using ZAPS for super-resolution $(\\times 4)$ task with different total timesteps-epochs combinations for the same $\\mathrm{NFE} = 300$", + "(b) Loss graphs for each combination.", + "Fig. 5: Study on different epochs and sampling steps combinations with fixed NFE. Results show similar quality for combinations with lower timestep approaches staring from higher loss/lower PSNR but converging to similar values." + ], + "image_footnote": [], + "bbox": [ + 263, + 263, + 488, + 397 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/e48bd80e110129c20ea6cd0278a99d0542409bd5a9b310897a5db284d56c8db7.jpg", + "image_caption": [ + "(c) PSNR graphs for each combination." + ], + "image_footnote": [], + "bbox": [ + 514, + 265, + 740, + 397 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "investigated total NFEs of 100, 300, and 500 to demonstrate the robustness of our approach, given its adaptable parameters, as previously discussed. For 100 NFEs, we applied 20 steps (schedule = \"10,7,3\") with 5 epochs, whereas for 300 and 500 NFEs, we applied 30 steps (schedule = \"15,10,5\") and 50 steps (schedule = \"30,15,5\"), respectively, for 10 epochs. Additionally, we also implemented ZAPS with uniformly spaced noise schedules to highlight the benefits of the proposed irregular noise schedules. As seen in Tabs. 4 and 5, ZAPS with irregular noise schedules outperforms the state-of-the-art methods for NFE budgets of 100, 300 and 500 in super-resolution and random inpainting tasks. We note that we could not perform this test for deblurring experiments as IIGDM could not be implemented reliably across the database, as previously mentioned. 
We also note that the difference between irregular and uniform noise schedules for ZAPS is", + "bbox": [ + 212, + 498, + 787, + 680 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/f41d9c78e14f9445cb4cf2deeea41cdcada8dff391b7ab51fbb353a6836aed90.jpg", + "table_caption": [ + "Table 4: Quantitative results for super-resolution $(\\times 4, \\sigma = 0.05)$ on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + ], + "table_footnote": [], + "table_body": "
Method | NFE=100 | NFE=300 | NFE=500
LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑
DPS [6] | 0.344 | 0.478 | 16.96 | 0.257 | 0.577 | 20.01 | 0.218 | 0.623 | 21.52
IIGDM [31] | 0.131 | 0.760 | 24.48 | 0.117 | 0.758 | 24.80 | 0.123 | 0.762 | 24.25
ZAPS (Uniform) | 0.108 | 0.749 | 25.92 | 0.119 | 0.729 | 26.29 | 0.115 | 0.756 | 25.63
ZAPS (Irregular) | 0.106 | 0.741 | 26.08 | 0.104 | 0.768 | 26.63 | 0.095 | 0.770 | 26.26
", + "bbox": [ + 217, + 747, + 784, + 838 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 732, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/1e787a9dbb9a7f34b6a9af07304e54281b2add3c001034bcfad55de345ee6899.jpg", + "table_caption": [ + "Table 5: Quantitative results for random inpainting (70%, σ = 0.05) on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + ], + "table_footnote": [], + "table_body": "
MethodNFE=100NFE=300NFE=500
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.2680.59320.010.1890.70423.740.1520.75425.59
IIGDM [31]0.0860.84226.620.0800.84925.060.0820.84524.94
ZAPS (Uniform)0.1220.78026.200.1270.77325.870.0800.79126.94
ZAPS (Irregular)0.0850.79427.030.0780.81327.790.0710.81828.11
", + "bbox": [ + 217, + 184, + 782, + 275 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "less pronounced for 100 NFEs, but the advantage of irregular schedules becomes apparent for 300 and 500 NFEs.", + "bbox": [ + 215, + 299, + 782, + 328 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The final ablation study, exploring the benefits of using distinct weights $\\zeta_t$ for each timestep versus a shared weight $\\zeta$ for every step, is provided in SuppMat.", + "bbox": [ + 215, + 329, + 782, + 359 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "4.4 Limitations", + "text_level": 1, + "bbox": [ + 215, + 378, + 354, + 391 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The loss function we use, $\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2$ , resembles a deep image prior-like loss [38]. However, note that there is a subtle difference in our context, where it corresponds to the log-likelihood of $p(\\mathbf{y}|\\mathbf{x}_0)$ , which is different then the (approximate) log-likelihood guidance term $p(\\mathbf{y}|\\mathbf{x}_t)$ used at each time-step. This allows for more robustness to overfitting that is typically observed in DIP-type methods. Further overfitting avoidance measures can be taken by data-splitting [3, 23, 26, 41, 42], though this was not necessary for the small number of epochs used for fine-tuning. Additionally, while our approximation in Eq. (22) produces competitive results, it is important to keep in mind that wavelets may not fully decorrelate the observed Fisher information matrix. Finally, we note that while we chose DPS as a baseline for its versatility in inverse problem tasks, the adaptive weighting strategy in ZAPS, as well as our Hessian approximation, are applicable to other posterior sampling diffusion models for inverse problems.", + "bbox": [ + 215, + 398, + 785, + 595 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 215, + 614, + 356, + 630 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In this work, we proposed a novel approach named zero-shot approximate posterior sampling (ZAPS), which harnesses zero-shot learning for dynamic automated hyperparameter tuning during the inference phase to enhance the reconstruction quality of solving linear noisy inverse problems using diffusion models. In particular, learning the log-likelihood weights facilitates the usage of more complex and irregular noise schedules, whose feasibility for inverse problems was shown, to the best of our knowledge, for the first time in this paper. These irregular noise schedules enabled high quality reconstructions with $20 - 50 \\times$ fewer timesteps. When number of epochs for fine-tuning is also considered, our approach results in a speed boost of approximately $3 \\times$ compared to state-of-the-art methods like DPS. Quantitative and qualitative evaluations on natural images illustrate our method's ability to attain state-of-the-art performance across diverse inverse problem tasks.", + "bbox": [ + 215, + 643, + 785, + 839 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. 
Akçakaya", + "bbox": [ + 271, + 114, + 482, + 128 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 217, + 143, + 401, + 162 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This work was partially supported by NIH R01HL153146 and NIH R01EB032830.", + "bbox": [ + 215, + 176, + 785, + 191 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 214, + 321, + 229 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Alcaraz, J.M.L., Strodthoff, N.: Diffusion-based time series imputation and forecasting with structured state space models. arXiv preprint arXiv:2208.09399 (2022)", + "2. Baranchuk, D., Rubachev, I., Voynov, A., Khrulkov, V., Babenko, A.: Label-efficient semantic segmentation with diffusion models. International Conference on Learning Representations (2021)", + "3. Batson, J., Royer, L.: Noise2self: Blind denoising by self-supervision. In: International Conference on Machine Learning. pp. 524-533. PMLR (2019)", + "4. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play admm for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3(1), 84-98 (2016)", + "5. Choi, J., Kim, S., Jeong, Y., Gwon, Y., Yoon, S.: Ilvr: Conditioning method for denoising diffusion probabilistic models. in 2021 ieee. In: CVF international conference on computer vision (ICCV). pp. 14347-14356 (2021)", + "6. Chung, H., Kim, J., Mccann, M.T., Klasky, M.L., Ye, J.C.: Diffusion posterior sampling for general noisy inverse problems. International Conference on Learning Representations (2023)", + "7. Chung, H., Sim, B., Ryu, D., Ye, J.C.: Improving diffusion models for inverse problems using manifold constraints. Advances in Neural Information Processing Systems (2022)", + "8. Chung, H., Sim, B., Ye, J.C.: Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022)", + "9. Cohen, R., Blau, Y., Freedman, D., Rivlin, E.: It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Advances in Neural Information Processing Systems 34, 18152-18164 (2021)", + "0. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)", + "1. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34, 8780-8794 (2021)", + "2. Ghael, S., Sayeed, A.M., Baraniuk, R.G.: Improved wavelet denoising via empirical wiener filtering. In: SPIE Technical Conference on Wavelet Applications in Signal Processing (1997)", + "3. Graikos, A., Malkin, N., Jojic, N., Samaras, D.: Diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems 35, 14715-14728 (2022)", + "4. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th international conference on international conference on machine learning. pp. 399-406 (2010)", + "5. Hammernik, K., Küstner, T., Yaman, B., Huang, Z., Rueckert, D., Knoll, F., Akçakaya, M.: Physics-driven deep learning for computational magnetic resonance imaging. 
IEEE Sig Proc Mag 40, 98-114 (2023)" + ], + "bbox": [ + 225, + 244, + 784, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 730, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "16. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)", + "17. Hoogeboom, E., Nielsen, D., Jaini, P., Forre, P., Welling, M.: Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems 34, 12454-12465 (2021)", + "18. Kadkhodaie, Z., Simoncelli, E.: Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems 34, 13242-13254 (2021)", + "19. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)", + "20. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 4396-4405 (2019)", + "21. Kawar, B., Elad, M., Ermon, S., Song, J.: Denoising diffusion restoration models. In: Advances in Neural Information Processing Systems (2022)", + "22. Knoll, F., Hammernik, K., Zhang, C., Moeller, S., Pock, T., Sodickson, D.K., Akçakaya, M.: Deep learning methods for parallel magnetic resonance imaging reconstruction. IEEE Sig Proc Mag 37, 128-140 (2020)", + "23. Krull, A., Buchholz, T.O., Jug, F.: Noise2void-learning denoising from single noisy images. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2129-2137 (2019)", + "24. Laumont, R., Bortoli, V.D., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using plug & play priors: when Langevin meets tweedie. SIAM Journal on Imaging Sciences 15(2), 701-737 (2022)", + "25. Mardani, M., Song, J., Kautz, J., Vahdat, A.: A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391 (2023)", + "26. Moran, N., Schmidt, D., Zhong, Y., Coady, P.: Noisier2noise: Learning to denoise from unpaired noisy data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12064-12072 (2020)", + "27. Qu, Y., Zheng, N., Li, C.: Using wavelet transform to estimate the eigenfunctions of karhunen-loeve expansion. In: Wavelet Analysis and Its Applications, and Active Media Technology, pp. 39-44. World Scientific (2004)", + "28. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: International conference on machine learning. pp. 2256-2265. PMLR (2015)", + "29. Song, B., Kwon, S.M., Zhang, Z., Hu, X., Qu, Q., Shen, L.: Solving inverse problems with latent diffusion models via hard data consistency. arXiv preprint arXiv:2307.08123 (2023)", + "30. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. International Conference on Learning Representations (2020)", + "31. Song, J., Vahdat, A., Mardani, M., Kautz, J.: Pseudoinverse-guided diffusion models for inverse problems. In: International Conference on Learning Representations (2022)", + "32. 
Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)", + "33. Song, Y., Shen, L., Xing, L., Ermon, S.: Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005 (2021)", + "34. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. International Conference on Learning Representations (2020)" + ], + "bbox": [ + 215, + 146, + 784, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Y. U. Alçalar and M. Akçakaya", + "bbox": [ + 271, + 114, + 482, + 128 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "35. Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., Hardt, M.: Test-time training with self-supervision for generalization under distribution shifts. In: International conference on machine learning. pp. 9229-9248. PMLR (2020)", + "36. Taam, W., Yandell, B.S.: Approximate Diagonalization of Spatial Covariance. University of Wisconsin, Department of Statistics (1987)", + "37. Tumanyan, N., Geyer, M., Bagon, S., Dekel, T.: Plug-and-play diffusion features for text-driven image-to-image translation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1921-1930 (2023)", + "38. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9446-9454 (2018)", + "39. Vincent, P.: A connection between score matching and denoising autoencoders. Neural computation 23(7), 1661-1674 (2011)", + "40. Wang, Y., Yu, J., Zhang, J.: Zero-shot image restoration using denoising diffusion null-space model. The Eleventh International Conference on Learning Representations (2023)", + "41. Yaman, B., Hosseini, S.A.H., Moeller, S., Ellermann, J., Ugurbil, K., Akçakaya, M.: Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 84(6), 3172-3191 (Dec 2020)", + "42. Yaman, B., Hosseini, S.A.H., Akçakaya, M.: Zero-shot self-supervised learning for MRI reconstruction. Proc ICLR (2021)", + "43. Yang, L., Ding, S., Cai, Y., Yu, J., Wang, J., Shi, Y.: Guidance with spherical gaussian constraint for conditional diffusion. In: International Conference on Machine Learning (2024)", + "44. Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., Gool, L.V.: Denoising diffusion models for plug-and-play image restoration. 
In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE) (2023)" + ], + "bbox": [ + 212, + 146, + 787, + 507 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-Shot Approximate Posterior Sampling", + "bbox": [ + 441, + 114, + 730, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_model.json b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3c27010f2860f54019fcca16a16e627e3ede1156 --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_model.json @@ -0,0 +1,2390 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.218, + 0.141, + 0.786, + 0.186 + ], + "angle": 0, + "content": "Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems" + }, + { + "type": "text", + "bbox": [ + 0.335, + 0.213, + 0.668, + 0.228 + ], + "angle": 0, + "content": "Yasar Utku Alçalar and Mehmet Akçakaya" + }, + { + "type": "text", + "bbox": [ + 0.377, + 0.24, + 0.627, + 0.268 + ], + "angle": 0, + "content": "University of Minnesota, Minneapolis {alcal029, akcakaya}@umn.edu" + }, + { + "type": "text", + "bbox": [ + 0.263, + 0.305, + 0.74, + 0.651 + ], + "angle": 0, + "content": "Abstract. Diffusion models have emerged as powerful generative techniques for solving inverse problems. Despite their success in a variety of inverse problems in imaging, these models require many steps to converge, leading to slow inference time. Recently, there has been a trend in diffusion models for employing sophisticated noise schedules that involve more frequent iterations of timesteps at lower noise levels, thereby improving image generation and convergence speed. However, application of these ideas for solving inverse problems with diffusion models remain challenging, as these noise schedules do not perform well when using empirical tuning for the forward model log-likelihood term weights. To tackle these challenges, we propose zero-shot approximate posterior sampling (ZAPS) that leverages connections to zero-shot physics-driven deep learning. ZAPS fixes the number of sampling steps, and uses zero-shot training with a physics-guided loss function to learn log-likelihood weights at each irregular timestep. We apply ZAPS to the recently proposed diffusion posterior sampling method as baseline, though ZAPS can also be used with other posterior sampling diffusion models. We further approximate the Hessian of the logarithm of the prior using a diagonalization approach with learnable diagonal entries for computational efficiency. These parameters are optimized over a fixed number of epochs with a given computational budget. Our results for various noisy inverse problems, including Gaussian and motion deblurring, inpainting, and super-resolution show that ZAPS reduces inference time, provides robustness to irregular noise schedules and improves reconstruction quality. Code is available at https://github.com/ualcalar17/ZAPS." 
+ }, + { + "type": "text", + "bbox": [ + 0.263, + 0.664, + 0.74, + 0.692 + ], + "angle": 0, + "content": "Keywords: Diffusion Models \\(\\cdot\\) Zero-Shot Learning \\(\\cdot\\) Inverse Problems \\(\\cdot\\) Plug-and-Play (PnP) Methods \\(\\cdot\\) Unrolled Networks \\(\\cdot\\) Bayesian Methods" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.718, + 0.375, + 0.734 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.749, + 0.787, + 0.84 + ], + "angle": 0, + "content": "The forefront of deep generative models is now dominated by diffusion models [16, 28, 30, 32, 34] in the intricate task of image generation [11]. Their capabilities extend across various domains, including computer vision [2], natural language processing [17] and temporal data modeling [1]. Recently, diffusion models also showed great success in solving noiseless [5, 7, 33, 34] and noisy inverse problems [6, 21, 29, 31], owing to their capability to model complicated" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.484, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.145, + 0.495, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.145, + 0.785, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.304, + 0.495, + 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.304, + 0.784, + 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.472, + 0.788, + 0.501 + ], + "angle": 0, + "content": "Fig. 1: Representative results of our algorithm for four distinct noisy inverse problems \\((\\sigma = 0.05)\\), showing the ground truth (GT), measurement and reconstruction." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.53, + 0.785, + 0.56 + ], + "angle": 0, + "content": "high-dimensional distributions. Linear inverse problems utilize a known forward model given by" + }, + { + "type": "equation", + "bbox": [ + 0.448, + 0.562, + 0.553, + 0.576 + ], + "angle": 0, + "content": "\\[\n\\mathbf {y} = \\mathbf {A} \\mathbf {x} _ {0} + \\mathbf {n},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.584, + 0.788, + 0.711 + ], + "angle": 0, + "content": "and aim to deduce the underlying signal/image \\(\\mathbf{x}_0\\in \\mathbb{R}^n\\) from measurements \\(\\mathbf{y}\\in \\mathbb{R}^{m}\\), where \\(\\mathbf{n}\\in \\mathbb{R}^m\\) is measurement noise. In practical situations, the forward operator \\(\\mathbf{A}:\\mathbb{R}^n\\to \\mathbb{R}^m\\) is either incomplete or ill-conditioned, necessitating the use of prior information about the signal. Posterior sampling approaches use diffusion models as generative priors and incorporates information from both the data distribution and the forward physics model, allowing for sampling from the posterior distribution \\(p(\\mathbf{x}|\\mathbf{y})\\) using the given measurement \\(\\mathbf{y}\\) [21]. 
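As a hedged illustration of the linear forward model \(\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{n}\), the snippet below builds two simplified stand-in operators for tasks that appear later in the paper (random inpainting and x4 super-resolution) and simulates noisy measurements with sigma = 0.05. The concrete operator choices here (binary mask, average pooling) are assumptions for this sketch, not the paper's exact implementations.

```python
# Sketch of y = A x0 + n for two linear inverse problems; the operators are
# simplified stand-ins (binary mask, average pooling), not the exact ones used
# in the experiments (e.g. bicubic downsampling for super-resolution).
import torch

def inpaint_A(x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Random inpainting: mask out pixels uniformly across channels.
    return x * mask

def sr_A(x: torch.Tensor, factor: int = 4) -> torch.Tensor:
    # x4 downsampling as a linear operator (average pooling as a proxy).
    return torch.nn.functional.avg_pool2d(x, factor)

x0 = torch.rand(1, 3, 256, 256)                    # image normalized to [0, 1]
mask = (torch.rand(1, 1, 256, 256) > 0.7).float()  # keep ~30% of the pixels
sigma = 0.05                                       # measurement noise level

y_inpaint = inpaint_A(x0, mask) + sigma * torch.randn(1, 3, 256, 256)
y_sr = sr_A(x0) + sigma * torch.randn(1, 3, 64, 64)
print(y_inpaint.shape, y_sr.shape)
```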
In this context, using Bayes' rule, \\(p(\\mathbf{x}|\\mathbf{y}) = \\frac{p(\\mathbf{x})p(\\mathbf{y}|\\mathbf{x})}{p(\\mathbf{y})}\\), the problem-specific score is" + }, + { + "type": "equation", + "bbox": [ + 0.333, + 0.722, + 0.786, + 0.739 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.785, + 0.81 + ], + "angle": 0, + "content": "where \\(\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{x})\\) is approximated via the learned score model \\(s_\\theta (\\mathbf{x}_t,t)\\). Many of these strategies utilize a plug-and-play (PnP) approach, using a pre-trained unconditional diffusion model as a prior [4, 9, 13, 18, 24, 37], and integrate the forward model during inference to address various inverse problem tasks." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.811, + 0.786, + 0.84 + ], + "angle": 0, + "content": "The complexity for these approaches arises in obtaining the latter forward model log-likelihood term in Eq. (1), which guides the diffusion to a target" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.299 + ], + "angle": 0, + "content": "class [11, 28]. While exact calculation is intractable, several approaches have been proposed to approximate this term. Among these, RED-diff [25] employs a variational sampler that uses a combination of measurement consistency loss and score matching regularization. Another technique, DSG [43], uses a spherical Gaussian constraint for denoising steps, allowing for larger step sizes. A class of methods utilize projections onto the convex measurement subspace after the unconditional update through score model [5, 8, 34]. Although these projections improve consistency between measurements and the sample, they are noted to lead to artifacts, such as boundary effects [7]. Thus, more recent approaches aimed to approximate the log-likelihood term in Eq. (1) different ways. 
Noting" + }, + { + "type": "equation", + "bbox": [ + 0.377, + 0.307, + 0.786, + 0.339 + ], + "angle": 0, + "content": "\\[\np _ {t} (\\mathbf {y} \\mid \\mathbf {x} _ {t}) = \\int_ {\\mathbf {x} _ {0}} p \\left(\\mathbf {x} _ {0} \\mid \\mathbf {x} _ {t}\\right) p \\left(\\mathbf {y} \\mid \\mathbf {x} _ {0}\\right) d \\mathbf {x} _ {0}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.348, + 0.787, + 0.381 + ], + "angle": 0, + "content": "DPS [6] uses the posterior mean \\(\\hat{\\mathbf{x}}_0 = \\hat{\\mathbf{x}}_0(\\mathbf{x}_t) \\triangleq \\mathbb{E}[\\mathbf{x}_0|\\mathbf{x}_t] = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[\\mathbf{x}_0]\\), to approximate \\(p(\\mathbf{y}|\\mathbf{x}_t) = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[p(\\mathbf{y}|\\mathbf{x}_0)]\\) as" + }, + { + "type": "equation", + "bbox": [ + 0.265, + 0.389, + 0.741, + 0.415 + ], + "angle": 0, + "content": "\\[\np (\\mathbf {y} | \\mathbf {x} _ {t}) = \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ p (\\mathbf {y} | \\mathbf {x} _ {0}) ] \\simeq p \\Big (\\mathbf {y} | \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ \\mathbf {x} _ {0} ] \\Big) = p (\\mathbf {y} | \\hat {\\mathbf {x}} _ {0}).\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.421, + 0.785, + 0.451 + ], + "angle": 0, + "content": "Another technique, IIGDM [31] approximates Eq. (2) as a Gaussian centered around \\(\\mathbf{A}\\hat{\\mathbf{x}}_0\\)" + }, + { + "type": "equation", + "bbox": [ + 0.323, + 0.459, + 0.786, + 0.491 + ], + "angle": 0, + "content": "\\[\n\\int_ {\\mathbf {x} _ {0}} p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t}) p (\\mathbf {y} | \\mathbf {x} _ {0}) \\mathbf {d} \\mathbf {x} _ {0} \\simeq \\mathcal {N} (\\mathbf {A} \\hat {\\mathbf {x}} _ {0}, r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}), \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.498, + 0.786, + 0.529 + ], + "angle": 0, + "content": "and uses it for guidance. In these works, log-likelihood weights (or gradient step sizes), \\(\\{\\zeta_t\\}\\) are introduced to further control the reconstruction as" + }, + { + "type": "equation", + "bbox": [ + 0.331, + 0.538, + 0.786, + 0.555 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}). \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.562, + 0.788, + 0.789 + ], + "angle": 0, + "content": "While DPS demonstrates high performance in various inverse problem tasks, it suffers from the drawback of requiring a large number of sampling steps, resulting in prolonged reconstruction time. IIGDM accelerates this process by adopting regular (linear) jumps approach across the schedule. However, utilizing more complicated schedules, where the jumps are irregular introduces a challenge, as it requires distinct log-likelihood weights, \\(\\zeta_t\\), for each timestep. Heuristic adjustment of these weights is difficult and frequently leads to undesirable outcomes. In this work, by taking an inspiration from zero-shot/test-time self-supervised models [35,42] we propose to learn the log-likelihood weights for a fixed number of sampling steps and fine-tune them over a few epochs. 
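The decomposition in Eq. (1) can be sanity-checked on a toy 1-D Gaussian model where every score is available in closed form; the check below is purely illustrative and plays no role in the method (the \(\zeta_t\) of Eq. (4) would simply rescale the likelihood term).

```python
# Toy 1-D check of Eq. (1): posterior score = prior score + likelihood score,
# for prior x ~ N(0, 1) and likelihood y = x + n with n ~ N(0, sigma^2).
import numpy as np

sigma, y = 0.5, 1.3
x = np.linspace(-3.0, 3.0, 7)

prior_score = -x                              # d/dx log N(x; 0, 1)
likelihood_score = (y - x) / sigma**2         # d/dx log N(y; x, sigma^2)

# The posterior is Gaussian with mean y/(1+sigma^2) and variance sigma^2/(1+sigma^2).
post_mean = y / (1 + sigma**2)
post_var = sigma**2 / (1 + sigma**2)
posterior_score = -(x - post_mean) / post_var

assert np.allclose(posterior_score, prior_score + likelihood_score)
print("Eq. (1) verified on the Gaussian toy example.")
```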
It is crucial to note that fine-tuning DPS (or IIGDM) entails saving computational graphs for each unroll, leading to memory issues and slow backpropagation. Thus, we also propose to approximate the Hessian of the data probability using a wavelet-based diagonalization strategy [12], and learn these diagonal values for each timestep as well. Fig. 1 shows representative results for our method. Our key contributions include:" + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.795, + 0.788, + 0.842 + ], + "angle": 0, + "content": "- We introduce zero-shot approximate posterior sampling (ZAPS), leveraging zero-shot learning for dynamic automated hyperparameter tuning in the inference phase to improve solution of noisy inverse problems via diffusion" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.484, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.147, + 0.788, + 0.237 + ], + "angle": 0, + "content": "models. This method fortifies the robustness of the sampling process, attaining a state-of-the-art performance [6, 21, 31] in sampling outcomes. To the best of our knowledge, our method is the first attempt to learn the log-likelihood weights for solving inverse problems via diffusion models by using a measurement-consistent loss when the sampling noise schedule consists of irregular jumps across timesteps." + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.238, + 0.784, + 0.281 + ], + "angle": 0, + "content": "- We provide a well-designed approximation for the Hessian of the logarithm of the prior, enabling a computationally efficient and trainable posterior computation." + }, + { + "type": "text", + "bbox": [ + 0.228, + 0.283, + 0.785, + 0.372 + ], + "angle": 0, + "content": "- We showcase the efficacy of incorporating a learnable log-likelihood weights for each diffusion step during the reverse diffusion process through both quantitative and qualitative assessments on FFHQ and ImageNet datasets. Our approach not only outperforms state-of-the-art, but it also substantially reduces the required number of sampling steps from 1000 to \\(\\sim 20\\)-to-30, facilitating convergence with fewer total neural function evaluations (NFEs)." + }, + { + "type": "list", + "bbox": [ + 0.227, + 0.238, + 0.785, + 0.372 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.395, + 0.396, + 0.411 + ], + "angle": 0, + "content": "2 Related Works" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.426, + 0.784, + 0.535 + ], + "angle": 0, + "content": "Diffusion Models. During training, diffusion models [16, 34] add Gaussian noise to an image with a fixed increasing variance schedule, e.g. linear or exponential, \\(\\beta_{1},\\beta_{2},\\dots,\\beta_{T}\\) until pure noise is obtained, and learns a reverse diffusion process, where a neural network is trained to gradually remove noise and reconstruct the original image. Let \\(\\mathbf{x}_0\\sim p_{\\mathrm{data}}(x)\\) be samples from the data distribution, and \\(\\mathbf{x}_{\\{1:T\\}}\\in \\mathbb{R}^d\\) be noisy latent variables. 
By taking \\(\\alpha_{t} = 1 - \\beta_{t}\\) and \\(\\bar{\\alpha}_{t} = \\prod_{s = 1}^{t}\\alpha_{s}\\), the Markovian forward process can be written as" + }, + { + "type": "equation", + "bbox": [ + 0.377, + 0.546, + 0.785, + 0.561 + ], + "angle": 0, + "content": "\\[\nq \\left(\\mathbf {x} _ {t} \\mid \\mathbf {x} _ {0}\\right) = \\mathcal {N} \\left(\\mathbf {x} _ {t} \\mid \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0}, (1 - \\bar {\\alpha} _ {t}) \\mathbf {I}\\right). \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.572, + 0.738, + 0.587 + ], + "angle": 0, + "content": "By using the reparameterization trick and Eq. (5), \\(\\mathbf{x}_t\\) can be sampled as" + }, + { + "type": "equation", + "bbox": [ + 0.307, + 0.598, + 0.785, + 0.614 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right) = \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0} + \\sqrt {1 - \\bar {\\alpha} _ {t}} \\epsilon \\quad \\text {w h e r e} \\quad \\epsilon \\sim \\mathcal {N} (\\epsilon ; 0, \\mathbf {I}). \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.624, + 0.784, + 0.655 + ], + "angle": 0, + "content": "Consequently, denoising diffusion probabilistic models (DDPMs) [16] learns the reverse process by minimizing a lower bound on the log prior via:" + }, + { + "type": "equation", + "bbox": [ + 0.369, + 0.665, + 0.785, + 0.682 + ], + "angle": 0, + "content": "\\[\nL _ {t} (\\theta) = \\mathbb {E} _ {t, \\mathbf {x} _ {0}, \\epsilon} \\| \\epsilon - \\epsilon_ {\\theta} \\left(\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right), t\\right) \\| _ {2} ^ {2}. \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.692, + 0.784, + 0.723 + ], + "angle": 0, + "content": "Furthermore, it can be shown that epsilon matching in Eq. (7) is analogous to the denoising score matching (DSM) [32,39] objective up to a constant:" + }, + { + "type": "equation", + "bbox": [ + 0.35, + 0.734, + 0.785, + 0.755 + ], + "angle": 0, + "content": "\\[\n\\min _ {\\theta} \\mathbb {E} _ {\\mathbf {x} _ {t}, \\mathbf {x} _ {0}, \\epsilon} \\| \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\nabla_ {\\mathbf {x} _ {t}} \\log q (\\mathbf {x} _ {t} | \\mathbf {x} _ {0}) \\| _ {2} ^ {2}, \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.768, + 0.784, + 0.803 + ], + "angle": 0, + "content": "in which \\(\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t) = -\\frac{\\epsilon_{\\theta}(\\mathbf{x}_t,t)}{\\sqrt{1 - \\bar{\\alpha}_t}}\\). Using Tweedie's formula and Eq. (6), posterior mean for \\(p(\\mathbf{x}_0|\\mathbf{x}_t)\\) can be found as:" + }, + { + "type": "equation", + "bbox": [ + 0.379, + 0.814, + 0.784, + 0.843 + ], + "angle": 0, + "content": "\\[\n\\hat {\\mathbf {x}} _ {0} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {x} _ {t} + (1 - \\bar {\\alpha} _ {t}) \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right). 
\\tag {9}\n\\]" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.147, + 0.784, + 0.178 + ], + "angle": 0, + "content": "Sampling \\(\\mathbf{x}_{t - 1}\\) from \\(p(\\mathbf{x}_{t - 1}|\\mathbf{x}_t)\\) can be done using ancestral sampling by iteratively computing:" + }, + { + "type": "equation", + "bbox": [ + 0.335, + 0.187, + 0.786, + 0.219 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t - 1} = \\frac {1}{\\sqrt {\\alpha_ {t}}} \\left(\\mathbf {x} _ {t} - \\frac {1 - \\alpha_ {t}}{\\sqrt {1 - \\bar {\\alpha} _ {t}}} \\boldsymbol {\\epsilon} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right) + \\sigma_ {t} \\mathbf {z}, \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.228, + 0.784, + 0.276 + ], + "angle": 0, + "content": "where \\(\\mathbf{z} \\sim \\mathcal{N}(0, \\mathbf{I})\\) and \\(\\sigma_t^2 = \\tilde{\\beta}_t = \\frac{1 - \\bar{\\alpha}_{t-1}}{1 - \\bar{\\alpha}_t} \\beta_t\\). It is also worth noting that the DDPM is equivalent to the variance-preserving stochastic differential equations (VP-SDEs) [34]." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.295, + 0.785, + 0.416 + ], + "angle": 0, + "content": "Solving Inverse Problems via Diffusion Models. When solving inverse problems via diffusion models, the main challenge is to find an approximation to the log-likelihood term, \\(\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{y}|\\mathbf{x})\\), as discussed earlier. One recent method, denoising diffusion restoration models (DDRM) [21], utilizes a spectral domain approach, allowing the incorporation of noise from the measurement domain into the spectral domain through singular value decomposition (SVD). However, the application of SVD is computationally expensive [6]. The Manifold Constrained Gradient (MCG) [7] method applies projections after the MCG correction as:" + }, + { + "type": "equation", + "bbox": [ + 0.259, + 0.424, + 0.785, + 0.444 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t - 1} ^ {\\prime} = f (\\mathbf {x} _ {t}, \\mathbf {s} _ {\\theta}) - \\zeta \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {K} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\| _ {2} ^ {2} + g (\\mathbf {x} _ {t}) \\mathbf {z}, \\quad \\mathbf {z} \\sim \\mathcal {N} (0, \\mathbf {I}), \\tag {11}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.258, + 0.446, + 0.785, + 0.461 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t - 1} = \\mathbf {H} \\mathbf {x} _ {t - 1} ^ {\\prime} + \\mathbf {b}, \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.47, + 0.784, + 0.516 + ], + "angle": 0, + "content": "where \\(\\zeta\\) and \\(\\mathbf{H}\\) are dependent on the noise covariance. The MCG update of Eq. (11) projects estimates onto the measurement subspace; thus, they may fall off from the data manifold [6]. Hence, DPS proposes to update without projections as:" + }, + { + "type": "equation", + "bbox": [ + 0.384, + 0.526, + 0.785, + 0.543 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} - \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2}, \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.552, + 0.784, + 0.582 + ], + "angle": 0, + "content": "Note Eq. 
(13) is equivalent to Eq. (11) when \\(\\mathbf{K} = \\mathbf{I}\\), and it reduces to the following when the forward operator is linear:" + }, + { + "type": "equation", + "bbox": [ + 0.377, + 0.59, + 0.785, + 0.62 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} + \\zeta_ {t} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.628, + 0.784, + 0.659 + ], + "angle": 0, + "content": "IIGDM [31], on the other hand, utilizes a Gaussian centered around \\(\\hat{\\mathbf{x}}_0\\) that is defined in Eq. (9) to obtain the following score approximation:" + }, + { + "type": "equation", + "bbox": [ + 0.304, + 0.668, + 0.785, + 0.698 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} \\left(r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}\\right) ^ {- 1} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). \\tag {15}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.706, + 0.772, + 0.723 + ], + "angle": 0, + "content": "In cases where there is no measurement noise \\((\\sigma_y = 0)\\), Eq. (15) simplifies to:" + }, + { + "type": "equation", + "bbox": [ + 0.36, + 0.731, + 0.785, + 0.761 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq r _ {t} ^ {- 2} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\dagger} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {16}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.77, + 0.784, + 0.801 + ], + "angle": 0, + "content": "where \\(\\mathbf{A}^{\\dagger}\\) denotes the Moore-Penrose pseudoinverse of \\(\\mathbf{A}\\). We note that using Woodbury matrix identity (derived in SuppMat), one can simplify Eq. (15) to:" + }, + { + "type": "equation", + "bbox": [ + 0.24, + 0.81, + 0.784, + 0.843 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\left(\\mathbf {A} ^ {\\top} \\mathbf {A} + \\eta \\mathbf {I}\\right) ^ {- 1} \\mathbf {A} ^ {\\top} \\left(\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}\\right), \\quad \\text {w h e r e} \\eta = \\frac {\\sigma_ {y} ^ {2}}{r _ {t} ^ {2}}. \\tag {17}\n\\]" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.484, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.222 + ], + "angle": 0, + "content": "From Eq. (17), the similarity between DPS and IIGDM updates can be seen, with \\((\\mathbf{A}^{\\top}\\mathbf{A} + \\eta \\mathbf{I})^{-1}\\) term being the difference. Note the DPS update in Eq. (13) works with non-linear operators, while IIGDM's update does not rely on the differentiability of the forward operator, as long as a pseudo-inverse-like operation can be derived." 
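To make the parallel between the two updates concrete, here is a small hedged sketch of Tweedie denoising (Eq. (9)) followed by DPS-style guidance (Eq. (14)) and IIGDM-style guidance (Eq. (17)) for a random linear operator. The Jacobian \(\partial \hat{\mathbf{x}}_0 / \partial \mathbf{x}_t\) is treated as the identity purely to keep the sketch short, and the score is a placeholder; neither simplification reflects the actual methods.

```python
# Sketch of the guidance terms in Eq. (14) (DPS) and Eq. (17) (IIGDM) for a
# linear operator A; the Jacobian d(x0_hat)/d(x_t) is dropped for brevity.
import torch

def tweedie_x0(x_t, score, alpha_bar_t):
    # Eq. (9): posterior mean of x0 given x_t via Tweedie's formula.
    return (x_t + (1.0 - alpha_bar_t) * score) / alpha_bar_t ** 0.5

def dps_guidance(A, x0_hat, y, zeta_t):
    # Eq. (14) without the Jacobian: zeta_t * A^T (y - A x0_hat).
    return zeta_t * (A.T @ (y - A @ x0_hat))

def iigdm_guidance(A, x0_hat, y, r_t, sigma_y):
    # Eq. (17) without the Jacobian: (A^T A + eta I)^-1 A^T (y - A x0_hat).
    eta = (sigma_y / r_t) ** 2
    rhs = A.T @ (y - A @ x0_hat)
    return torch.linalg.solve(A.T @ A + eta * torch.eye(A.shape[1]), rhs)

n, m = 16, 8
A = torch.randn(m, n)
x_t = torch.randn(n)
x0_hat = tweedie_x0(x_t, -x_t, alpha_bar_t=0.3)   # placeholder score, shapes only
y = A @ torch.randn(n) + 0.05 * torch.randn(m)

print(dps_guidance(A, x0_hat, y, zeta_t=1.0).shape)
print(iigdm_guidance(A, x0_hat, y, r_t=0.5, sigma_y=0.05).shape)
```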
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.245, + 0.784, + 0.35 + ], + "angle": 0, + "content": "Improved Irregular Noise Schedules for Image Generation. Diffusion models typically utilize well-defined fixed noise schedules, with examples including linear or exponential ones. Lately, more sophisticated methods have been developed that sweep across these schedules and take samples in irregular timesteps [11,19] for unconditional image generation. The idea behind this strategy hinges on more frequent sampling for lower noise levels, making it possible to use considerably less number of sampling steps." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.352, + 0.788, + 0.609 + ], + "angle": 0, + "content": "Most of the aforementioned studies that solve inverse problems via diffusion models used the same number of steps that the unconditional diffusion model was trained for [6,7,34]. Nonetheless, there has been a notable trend favoring shorter schedules characterized by linear jumps for inverse problems, where the log-likelihood weights were hand-tuned by trial-and-error [25,31] when using reduced number of steps. While these approaches have proven effective, they still require a large number of sampling steps or heuristic tuning of the log-likelihood weights, \\(\\{\\zeta_t\\}\\) in Eq. (4) to achieve good performance. The former issue leads to lengthy and potentially impractical computational times, while the latter issue results in generalizability difficulties for adoption at different measurement noise levels and variations in the measurement operators. Furthermore, the irregular jump strategy that has been powerful for image generation has not garnered significant attention for inverse problems, mainly due to the impracticality of empirically tuning the log-likelihood weights. Thus, a method that automatically selects and adjusts log-likelihood weights based on the provided measurements for arbitrary noise schedules, instead of requiring manual tuning, holds significant potential for improving robustness and image quality." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.632, + 0.381, + 0.65 + ], + "angle": 0, + "content": "3 Methodology" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.664, + 0.669, + 0.68 + ], + "angle": 0, + "content": "3.1 Zero-shot Fine Tuning of Log-Likelihood Weights" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.69, + 0.788, + 0.797 + ], + "angle": 0, + "content": "In this work, we propose a robust automated approach for setting the log-likelihood weights at each timestep for arbitrary noise sampling schedules to improve posterior sampling with the given measurements during inference. This allows for a stable reconstruction for different sweeps across noise schedules. Furthermore, the weights themselves are image-specific, which improves the performance compared to the former approaches. For estimating the likelihood in Eq. 
(1), we use the update in DPS [6]:" + }, + { + "type": "equation", + "bbox": [ + 0.297, + 0.808, + 0.786, + 0.838 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2} = - \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}), \\tag {18}\n\\]" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "image", + "bbox": [ + 0.234, + 0.143, + 0.773, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.359, + 0.788, + 0.471 + ], + "angle": 0, + "content": "Fig. 2: Our zero-shot approximate posterior sampling (ZAPS) approach unrolls the sampling process for a fixed number of \\( S \\) steps for arbitrary/irregular noise schedules, alternating between score model sampling (SMS) and likelihood guidance (LG). Our zero-shot fine-tuning approach has two key components: 1) The Hessian of the log prior is approximated using a discrete wavelet transform diagonalization technique, 2) Both the diagonal matrices, \\( \\{\\mathbf{D}_t\\} \\) and the log-likelihood weights, \\( \\{\\zeta_t\\} \\) are updated during fine-tuning. The fine-tuning is done for a fixed number of epochs with a given NFE budget, yielding a faster and more robust adaptive inverse problem solver." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.5, + 0.788, + 0.561 + ], + "angle": 0, + "content": "although as noted before, the IIGDM [31] update in Eq. (17) is also similar. Thus we emphasize that while we chose DPS as baseline for its versatility in inverse problems, our ZAPS strategy is applicable to other diffusion models for inverse problems. Recalling the definition of \\(\\hat{\\mathbf{x}}_0\\) in Eq. (9), we note" + }, + { + "type": "equation", + "bbox": [ + 0.361, + 0.572, + 0.787, + 0.606 + ], + "angle": 0, + "content": "\\[\n\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right). \\tag {19}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.617, + 0.788, + 0.651 + ], + "angle": 0, + "content": "Thus, ignoring the calculation and storage of the matrix \\(\\frac{\\partial\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)}{\\partial\\mathbf{x}_t}\\) for now, one needs to fine tune the log-likelihood weights \\(\\{\\zeta_t\\}\\) in" + }, + { + "type": "equation", + "bbox": [ + 0.279, + 0.66, + 0.787, + 0.695 + ], + "angle": 0, + "content": "\\[\n\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right) \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). 
\\tag {20}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.704, + 0.789, + 0.842 + ], + "angle": 0, + "content": "This is done based on the concept of algorithm unrolling [14, 15, 22] in physics-driven deep learning by fixing the number of sampling steps \\( T \\). Then the whole posterior sampling process is described as alternating between DDPM sampling using the pre-trained unconditional score model, followed by the log-likelihood term guidance in Eq. (20) for \\( T \\) steps. This \"unrolled\" network is fine-tuned end-to-end, where the only updates are made to \\( \\{\\zeta_t\\} \\) and no fine-tuning is performed on the unconditional score function, \\( \\mathbf{s}_{\\theta}(\\mathbf{x}_t,t) \\). This also alleviates the need for backpropagation across the score function network, leading to further savings in computational time. The fine-tuning is performed using a physics-inspired loss" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.484, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "code_caption", + "bbox": [ + 0.218, + 0.146, + 0.69, + 0.163 + ], + "angle": 0, + "content": "Algorithm 1 ZAPS: Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "algorithm", + "bbox": [ + 0.217, + 0.165, + 0.785, + 0.414 + ], + "angle": 0, + "content": "Require: \\(T,\\mathbf{y},\\{\\tilde{\\sigma}_i\\}_{i = 1}^T\\), orthogonal DWT (W) \n1: \\(\\mathbf{x}_T\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})\\) \n2: \\(\\tau \\subset [1,\\dots,T]\\) extending over a length of \\(S < T\\) \n3: for epoch in range(epochs) do \n4: for \\(i = S,\\ldots ,1\\) do \n5: \\(\\hat{\\mathbf{s}}\\gets \\mathbf{s}_{\\theta}(\\mathbf{x}_{\\tau_i},\\tau_i)\\) ▷ Score computation \n6: \\(\\hat{\\mathbf{x}}_0\\leftarrow \\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}} (\\mathbf{x}_{\\tau_i} + (1 - \\bar{\\alpha}_{\\tau_i})\\hat{\\mathbf{s}})\\) ▷ Tweedie denoising \n7: \\(\\mathbf{z}\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})\\) if \\(\\tau_{i} > 1\\) , else \\(\\mathbf{z} = \\mathbf{0}\\) \n8: \\(\\mathbf{x}_{\\tau_i - 1}'\\gets \\frac{\\sqrt{\\alpha_{\\tau_i}}(1 - \\bar{\\alpha}_{\\tau_i - 1})}{1 - \\bar{\\alpha}_{\\tau_i}}\\mathbf{x}_{\\tau_i} + \\frac{\\sqrt{\\bar{\\alpha}_{\\tau_i - 1}}\\beta_{\\tau_i}}{1 - \\bar{\\alpha}_{\\tau_i}}\\hat{\\mathbf{x}}_0 + \\tilde{\\sigma}_{\\tau_i}\\mathbf{z}\\) \n9: \\(\\mathbf{x}_{\\tau_{i - 1}}\\gets \\mathbf{x}_{\\tau_{i - 1}}' + \\zeta_{\\tau_i}\\left(\\left(\\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}}\\Bigl {(}\\mathbf{I} + (1 - \\bar{\\alpha}_{\\tau_i})\\mathbf{WD}_{\\tau_i}\\mathbf{W}^\\top \\Bigr)\\right)\\cdot \\mathbf{A}^\\top (\\mathbf{y} - \\mathbf{A}\\hat{\\mathbf{x}}_0)\\right)\\) \n10: end for \n11: Update network parameters \\(\\{\\zeta_t\\}\\) and \\(\\{\\mathbf{D}_t\\}\\) \n12: end for \n13: return \\({\\bf x}_0\\)" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.443, + 0.788, + 0.489 + ], + "angle": 0, + "content": "function that evaluates the consistency of the final estimate and the measurements: \\(\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2\\). A high-level explanation of our algorithm is given in Fig. 2."
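The toy sketch below mirrors the control flow of Algorithm 1 and the loss above: a short irregular schedule is unrolled, each step applies Tweedie denoising, a DDPM-style update and likelihood guidance with a learnable weight and a learnable diagonal, and both sets of parameters are fine-tuned for a few epochs with \(\|\mathbf{y} - \mathbf{A}\mathbf{x}_0\|_2^2\). The image size, schedule, stand-in score function and the choice W = I (the paper uses an orthogonal Daubechies-4 DWT) are all simplifying assumptions of this sketch, not details of the actual implementation.

```python
# Toy, self-contained sketch of the ZAPS loop in Algorithm 1. Stand-ins:
# the score is that of a standard-normal prior, A is a random inpainting mask
# (self-adjoint, so A plays the role of A^T), the schedule is made up, and the
# wavelet transform W is taken as the identity.
import torch

torch.manual_seed(0)
T, shape = 1000, (1, 3, 32, 32)
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

mask = (torch.rand(shape) > 0.7).float()
A = lambda v: v * mask                                # linear forward operator
y = A(torch.rand(shape)) + 0.05 * torch.randn(shape)  # noisy measurements

schedule = [900, 700, 500, 350, 250, 180, 120, 80, 50, 30, 15, 5]  # irregular (assumed)
zeta = torch.nn.Parameter(torch.ones(len(schedule)))        # per-step log-likelihood weights
D = torch.nn.Parameter(torch.zeros(len(schedule), *shape))  # per-step learnable diagonals
opt = torch.optim.Adam([zeta, D], lr=1e-2)
x_T = torch.randn(shape)                                    # Algorithm 1, line 1

def sample(x_start):
    x = x_start
    for i, t in enumerate(schedule):
        t_prev = schedule[i + 1] if i + 1 < len(schedule) else 0
        a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]        # a_prev stands in for alpha_bar at the previous step
        s = -x                                               # stand-in score; the real s_theta is frozen
        x0_hat = (x + (1 - a_t) * s) / a_t.sqrt()            # Tweedie denoising, line 6 / Eq. (9)
        z = torch.randn_like(x) if t_prev > 0 else torch.zeros_like(x)
        sigma = ((1 - a_prev) / (1 - a_t) * betas[t]).sqrt() # sigma_t^2 = beta_tilde_t
        x_prev = (alphas[t].sqrt() * (1 - a_prev) / (1 - a_t)) * x \
               + (a_prev.sqrt() * betas[t] / (1 - a_t)) * x0_hat + sigma * z   # line 8
        jac = (1 + (1 - a_t) * D[i]) / a_t.sqrt()            # (I + (1-a) W D W^T)/sqrt(a) with W = I
        x = x_prev + zeta[i] * jac * A(y - A(x0_hat))        # line 9, likelihood guidance
    return x

for epoch in range(3):                                       # a few zero-shot fine-tuning epochs
    x0 = sample(x_T)
    loss = ((y - A(x0)) ** 2).sum()                          # physics-inspired loss L(y, x0)
    opt.zero_grad()
    loss.backward()
    opt.step()                                               # update {zeta_t} and {D_t}, line 11
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Only the guidance weights and diagonals receive gradients here, matching the statement above that the unconditional score function itself is not fine-tuned.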
+ }, + { + "type": "title", + "bbox": [ + 0.215, + 0.512, + 0.658, + 0.529 + ], + "angle": 0, + "content": "3.2 Approximation for the Hessian of the Log Prior" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.538, + 0.789, + 0.721 + ], + "angle": 0, + "content": "Implementing the zero-shot update for Eq. (20) poses various challenges, since backpropagation through the unrolled network to update all \\(\\{\\zeta_t\\}\\) requires another backpropagation through the Jacobian of the score function at each time step. This can only be done by retaining the computational graphs that are created when calculating the Jacobian term in Eq. (20), which quickly explodes memory requirements, especially when the number of sampling steps increases. Also, backpropagating through multiple graphs at the end to only update the log-likelihood weights is time-inefficient and causes prolonged sampling times. Hence, we propose to approximate the Jacobian using inspirations from wavelet-based signal processing techniques and propose to learn this approximation to improve the overall outcome. Noting that \\(\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)\\) in Eq. (19) is an approximation of the log-gradient of the true prior \\(p(\\mathbf{x})\\), we have" + }, + { + "type": "equation", + "bbox": [ + 0.351, + 0.73, + 0.787, + 0.767 + ], + "angle": 0, + "content": "\\[\n\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + \\left(1 - \\bar {\\alpha} _ {t}\\right) \\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}}\\right). \\tag {21}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.775, + 0.789, + 0.843 + ], + "angle": 0, + "content": "In order to make a backpropagation to update these weights, one needs to calculate the Hessian matrix, \\(\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}\\) given in Eq. (21). This matrix is the negative of the observed Fisher information matrix, whose expected value is the Fisher information matrix. It is also known that in the limit, it approximates" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.335 + ], + "angle": 0, + "content": "the inverse covariance matrix of the maximum likelihood estimator. Furthermore, under mild assumptions about continuity of the prior, the observed Fisher information matrix is symmetric. Thus, an appropriate decorrelating unitary matrix can be used to diagonalize it. While finding the desired unitary matrix is equally time-consuming as calculating this Hessian, several pre-determined unitary transforms have been proposed for decorrelation in the signal processing community for different applications [12, 27, 36]. Of particular note is the use of unitary wavelet transforms for Wiener filtering [12], where these transforms were utilized for their tendency to decorrelate data, i.e. approximate the Karhunen-Loeve transform [27]. 
In this work, we also use these decorrelating properties to approximately diagonalize the Hessian of the log prior, \\(\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}\\), using fixed orthogonal discrete wavelet transforms (DWT):" + }, + { + "type": "equation", + "bbox": [ + 0.41, + 0.347, + 0.786, + 0.381 + ], + "angle": 0, + "content": "\\[\n\\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}} \\simeq \\mathbf {W D} _ {t} \\mathbf {W} ^ {\\top}, \\tag {22}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.39, + 0.788, + 0.452 + ], + "angle": 0, + "content": "where \\(\\mathbf{W}\\) is an orthogonal DWT. By making this approximation, backpropagation through the score model can also be avoided, and only the diagonal values in the distinct \\(\\{\\mathbf{D}_t\\}\\) matrices need to be learned. Our final algorithm to sample from pure noise with fine-tuning is given in Algorithm 1." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.475, + 0.357, + 0.491 + ], + "angle": 0, + "content": "4 Evaluation" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.508, + 0.664, + 0.525 + ], + "angle": 0, + "content": "4.1 Experimental Setup and Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.534, + 0.788, + 0.761 + ], + "angle": 0, + "content": "We comprehensively evaluated our method, examining its performance through both qualitative and quantitative analyses using the FFHQ [20] and ImageNet [10] datasets with size \\( 256 \\times 256 \\times 3 \\). Pre-trained unconditional diffusion models trained on FFHQ and ImageNet were taken from [5] and [11] respectively, and used without retraining. For our experiments, we sampled 1000 images from the FFHQ and ImageNet validation sets. All images underwent pre-processing to be normalized in the range [0, 1]. During all the evaluations, a Gaussian measurement noise with \\( \\sigma = 0.05 \\) was used. For the orthogonal DWT, the Daubechies 4 wavelet was utilized. For our quantitative evaluations, we employed 30 sampling steps with a schedule of \"15,10,5\", and 10 epochs for fine-tuning, resulting in a total of 300 NFEs. As noted in [11], superior schedules may exist, but trying out all possible schedules requires substantial computational time. Thus, we opted for a simple schedule that samples more frequently at the lower noise levels [11]. More details about the network architectures and hyperparameter choices are given in SuppMat." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.784, + 0.603, + 0.8 + ], + "angle": 0, + "content": "4.2 Experiments on Linear Inverse Problems" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.842 + ], + "angle": 0, + "content": "Problem Setup. We focused on the following linear inverse problems: (1) Gaussian deblurring, (2) inpainting, (3) motion deblurring, (4) super-resolution. For
3: Representative images using various methods for solving Gaussian deblurring, motion deblurring and super-resolution \\((\\times 4)\\) tasks. Proposed method qualitatively improves upon each method, including the baseline state-of-the-art DPS." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.424, + 0.788, + 0.531 + ], + "angle": 0, + "content": "Gaussian deblurring, we considered a kernel of size \\(61 \\times 61\\) with a standard deviation \\(\\sigma = 3.0\\). For inpainting, we considered two different scenarios wherein we randomly masked out \\(70\\%\\) and a \\(128 \\times 128\\) box region of the image, applied uniformly across all three channels. For motion blur, we generated the blur kernel via the code1, with \\(61 \\times 61\\) kernel size and 0.5 intensity, as in [6]. Finally, for super-resolution, we considered bicubic downsampling. All measurements are obtained through applying the forward model to the ground truth image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.545, + 0.788, + 0.667 + ], + "angle": 0, + "content": "Comparison Methods. We compared our method with score-SDE [5, 8, 34], manifold constrained gradients (MCG) [7], denoising diffusion restoration models (DDRM) [21], diffusion posterior sampling (DPS) [6] and pseudo-inverse guided diffusion models (IIGDM) [31]. We note that our implementation of score-SDE follows the same strategy as presented in [6]. We referred to the methods that iteratively applied projections onto convex sets (POCS) as score-SDE. Additional comparisons to DDNM [40] and DiffPIR [44] are also provided in SuppMat. All methods were implemented using their respective public repositories." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.681, + 0.788, + 0.786 + ], + "angle": 0, + "content": "Quantitative and Qualitative Results. We evaluated our method quantitatively using learned perceptual image patch similarity (LPIPS) distance, structural similarity index (SSIM), and peak signal-to-noise-ratio (PSNR). Representative results in Fig. 3 show that DDRM yields blurry results in Gaussian deblurring task. DPS improves sharpness across these distinct inverse problem tasks, while ZAPS yields comparable sharpness while exhibiting a higher similarity to the ground truth, all within a third of the total NFEs." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.787, + 0.788, + 0.817 + ], + "angle": 0, + "content": "Representative inpainting results in Fig. 4 show that ZAPS substantially improves upon DDRM, a method that uses a slightly lower 20 timesteps, and" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.824, + 0.568, + 0.841 + ], + "angle": 0, + "content": "1 https://github.com/LeviBorodenko/motionblur" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.145, + 0.784, + 0.335 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.347, + 0.788, + 0.389 + ], + "angle": 0, + "content": "Fig. 4: Illustrative images using state-of-the-art methods for random (70%) and box \\((128 \\times 128)\\) inpainting. Proposed method improves upon DDRM, while achieving similar performance to IIGDM and DPS, with subtle improvements shown in zoomed insets." 
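For completeness, a short hedged example of computing two of the reported metrics (PSNR and SSIM) with scikit-image on a synthetic image pair; LPIPS would additionally require the separate lpips package and a pretrained feature network, so it is only mentioned in a comment.

```python
# Example metric computation with scikit-image (PSNR, SSIM). LPIPS needs the
# separate `lpips` package and a pretrained network, so it is omitted here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gt = rng.random((256, 256, 3)).astype(np.float32)          # ground truth in [0, 1]
recon = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0).astype(np.float32)

psnr = peak_signal_noise_ratio(gt, recon, data_range=1.0)
ssim = structural_similarity(gt, recon, data_range=1.0, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```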
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.422, + 0.784, + 0.513 + ], + "angle": 0, + "content": "achieves better similarity to the ground truth and sharpness compared to DPS, which uses almost \\(33 \\times\\) more steps. Similarly, when compared with IIIGDM, it is evident that our method gives comparable results even though \\(3 - 4 \\times\\) fewer number of steps are used. The zoomed insets highlight subtle improvements afforded by our method compared to state-of-the-art DPS and IIIGDM, as seen around the eyes." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.514, + 0.785, + 0.621 + ], + "angle": 0, + "content": "Tab. 1 and Tab. 2 show the three quantitative metrics for all methods, while Tab. 3 illustrates their computational complexity. ZAPS outperforms Score-SDE, MCG, and our baseline state-of-the-art comparison, DPS, in computational complexity and quantitative performance, yielding faster and improved reconstructions. Although DDRM and IIGDM surpass ZAPS in terms of computational complexity, ZAPS outperforms both methods quantitatively in terms of all three metrics. Furthermore, IIGDM could not be implemented reliably for several lin" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.65, + 0.788, + 0.693 + ], + "angle": 0, + "content": "Table 1: Quantitative results for Gaussian deblurring and random inpainting (70%) on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + }, + { + "type": "table", + "bbox": [ + 0.217, + 0.697, + 0.787, + 0.839 + ], + "angle": 0, + "content": "
MethodGaussian DeblurringRandom Inpainting
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.1280.71825.200.1040.81128.03
MCG [7]0.5580.50915.120.1450.75425.33
IIGDM [31]---0.0860.84226.62
DDRM [21]0.1830.70224.420.1980.74125.17
Score-SDE [5,8,34]0.5710.49615.170.2240.71824.44
ZAPS (Ours)0.1210.75726.060.0780.81327.79
" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.483, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.787, + 0.187 + ], + "angle": 0, + "content": "Table 2: Quantitative results for motion deblurring and super-resolution \\((\\times 4)\\) on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.192, + 0.787, + 0.334 + ], + "angle": 0, + "content": "
MethodMotion DeblurringSuper-Resolution (×4)
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.1430.70424.030.1680.71923.86
MCG [7]0.5650.49715.100.2290.62320.74
IIGDM [31]---0.1310.76024.48
DDRM [21]---0.1750.71124.55
Score-SDE [5,8,34]0.5460.48815.020.2570.60919.13
ZAPS (Ours)0.1410.70924.160.1040.76826.63
" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.36, + 0.787, + 0.42 + ], + "angle": 0, + "content": "ear inverse problems related to deblurring. We also note that the parameters in ZAPS are adaptive, meaning one can reach the same computational complexity by adjusting total epochs or steps, in trade-off for a slight decrease in performance, as studied in Sec. 4.3." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.441, + 0.401, + 0.455 + ], + "angle": 0, + "content": "4.3 Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.465, + 0.787, + 0.706 + ], + "angle": 0, + "content": "We conducted three distinct ablation studies to investigate critical aspects of our algorithm's performance. The first ablation study compared combinations of different timesteps and epochs with a fixed NFE budget, providing a nuanced exploration into the influence of specific combinations on the model's behavior. Specifically, we explored the reconstruction capabilities of the model qualitatively and quantitatively by varying the length of model timesteps, \\( S \\in \\{20, 30, 60\\} \\). For a fixed NFE budget of 300, these corresponded to 15, 10 and 5 epochs for zero-shot fine-tuning respectively. Fig. 5a shows the final estimates, while Fig. 5b and Fig. 5c depict the corresponding loss and PSNR curves for each combination (Further quantitative results are in SuppMat). Notably, all the estimates are similar, though sharpness improves slightly as \\( S \\) increases. However, the trade-off for choosing a high \\( S \\) is the low number of epochs. Especially for cases, where the measurement system or noise level changes, this makes fine-tuning susceptible to initialization of the hyperparameters as it is more difficult to converge to a good solution in \\( \\sim 5 \\) epochs. Thus, for improved generalizability and robustness, we opted to use \\( S = 30 \\) and 10 epochs for our database testing." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.707, + 0.787, + 0.736 + ], + "angle": 0, + "content": "Our second ablation study analyzed the performance of ZAPS with respect to other state-of-the-art methods when all methods used the same NFE. We" + }, + { + "type": "table_caption", + "bbox": [ + 0.218, + 0.759, + 0.785, + 0.774 + ], + "angle": 0, + "content": "Table 3: Computational costs of methods in terms of NFEs and wall-clock time (WCT)" + }, + { + "type": "table", + "bbox": [ + 0.223, + 0.779, + 0.782, + 0.839 + ], + "angle": 0, + "content": "
DPS [6]MCG [7]IIGDM [31]DDRM [21]Score-SDE [34]ZAPS
Total NFEs10001000100201000300
WCT (s)47.2548.834.532.1223.4714.71
" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "image", + "bbox": [ + 0.241, + 0.147, + 0.347, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.352, + 0.147, + 0.455, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.458, + 0.147, + 0.558, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.561, + 0.147, + 0.659, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.661, + 0.147, + 0.763, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.24, + 0.241, + 0.764, + 0.264 + ], + "angle": 0, + "content": "(a) Re constructions using ZAPS for super-resolution \\((\\times 4)\\) task with different total timesteps-epochs combinations for the same \\(\\mathrm{NFE} = 300\\)" + }, + { + "type": "image", + "bbox": [ + 0.264, + 0.265, + 0.489, + 0.398 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.266, + 0.402, + 0.485, + 0.414 + ], + "angle": 0, + "content": "(b) Loss graphs for each combination." + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.266, + 0.741, + 0.398 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.402, + 0.741, + 0.414 + ], + "angle": 0, + "content": "(c) PSNR graphs for each combination." + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.426, + 0.788, + 0.468 + ], + "angle": 0, + "content": "Fig. 5: Study on different epochs and sampling steps combinations with fixed NFE. Results show similar quality for combinations with lower timestep approaches staring from higher loss/lower PSNR but converging to similar values." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.499, + 0.789, + 0.681 + ], + "angle": 0, + "content": "investigated total NFEs of 100, 300, and 500 to demonstrate the robustness of our approach, given its adaptable parameters, as previously discussed. For 100 NFEs, we applied 20 steps (schedule = \"10,7,3\") with 5 epochs, whereas for 300 and 500 NFEs, we applied 30 steps (schedule = \"15,10,5\") and 50 steps (schedule = \"30,15,5\"), respectively, for 10 epochs. Additionally, we also implemented ZAPS with uniformly spaced noise schedules to highlight the benefits of the proposed irregular noise schedules. As seen in Tabs. 4 and 5, ZAPS with irregular noise schedules outperforms the state-of-the-art methods for NFE budgets of 100, 300 and 500 in super-resolution and random inpainting tasks. We note that we could not perform this test for deblurring experiments as IIGDM could not be implemented reliably across the database, as previously mentioned. We also note that the difference between irregular and uniform noise schedules for ZAPS is" + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.708, + 0.788, + 0.737 + ], + "angle": 0, + "content": "Table 4: Quantitative results for super-resolution \\((\\times 4, \\sigma = 0.05)\\) on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.748, + 0.785, + 0.839 + ], + "angle": 0, + "content": "
MethodNFE=100NFE=300NFE=500
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.3440.47816.960.2570.57720.010.2180.62321.52
IIGDM [31]0.1310.76024.480.1170.75824.800.1230.76224.25
ZAPS (Uniform)0.1080.74925.920.1190.72926.290.1150.75625.63
ZAPS (Irregular)0.1060.74126.080.1040.76826.630.0950.77026.26
" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.483, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.785, + 0.174 + ], + "angle": 0, + "content": "Table 5: Quantitative results for random inpainting (70%, σ = 0.05) on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.185, + 0.784, + 0.276 + ], + "angle": 0, + "content": "
MethodNFE=100NFE=300NFE=500
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.2680.59320.010.1890.70423.740.1520.75425.59
IIGDM [31]0.0860.84226.620.0800.84925.060.0820.84524.94
ZAPS (Uniform)0.1220.78026.200.1270.77325.870.0800.79126.94
ZAPS (Irregular)0.0850.79427.030.0780.81327.790.0710.81828.11
" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.3, + 0.784, + 0.329 + ], + "angle": 0, + "content": "less pronounced for 100 NFEs, but the advantage of irregular schedules becomes apparent for 300 and 500 NFEs." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.33, + 0.784, + 0.36 + ], + "angle": 0, + "content": "The final ablation study, exploring the benefits of using distinct weights \\(\\zeta_t\\) for each timestep versus a shared weight \\(\\zeta\\) for every step, is provided in SuppMat." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.379, + 0.356, + 0.392 + ], + "angle": 0, + "content": "4.4 Limitations" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.4, + 0.787, + 0.596 + ], + "angle": 0, + "content": "The loss function we use, \\(\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2\\), resembles a deep image prior-like loss [38]. However, note that there is a subtle difference in our context, where it corresponds to the log-likelihood of \\(p(\\mathbf{y}|\\mathbf{x}_0)\\), which is different then the (approximate) log-likelihood guidance term \\(p(\\mathbf{y}|\\mathbf{x}_t)\\) used at each time-step. This allows for more robustness to overfitting that is typically observed in DIP-type methods. Further overfitting avoidance measures can be taken by data-splitting [3, 23, 26, 41, 42], though this was not necessary for the small number of epochs used for fine-tuning. Additionally, while our approximation in Eq. (22) produces competitive results, it is important to keep in mind that wavelets may not fully decorrelate the observed Fisher information matrix. Finally, we note that while we chose DPS as a baseline for its versatility in inverse problem tasks, the adaptive weighting strategy in ZAPS, as well as our Hessian approximation, are applicable to other posterior sampling diffusion models for inverse problems." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.616, + 0.357, + 0.631 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.644, + 0.787, + 0.84 + ], + "angle": 0, + "content": "In this work, we proposed a novel approach named zero-shot approximate posterior sampling (ZAPS), which harnesses zero-shot learning for dynamic automated hyperparameter tuning during the inference phase to enhance the reconstruction quality of solving linear noisy inverse problems using diffusion models. In particular, learning the log-likelihood weights facilitates the usage of more complex and irregular noise schedules, whose feasibility for inverse problems was shown, to the best of our knowledge, for the first time in this paper. These irregular noise schedules enabled high quality reconstructions with \\(20 - 50 \\times\\) fewer timesteps. When number of epochs for fine-tuning is also considered, our approach results in a speed boost of approximately \\(3 \\times\\) compared to state-of-the-art methods like DPS. Quantitative and qualitative evaluations on natural images illustrate our method's ability to attain state-of-the-art performance across diverse inverse problem tasks." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.732, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.403, + 0.163 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.177, + 0.786, + 0.192 + ], + "angle": 0, + "content": "This work was partially supported by NIH R01HL153146 and NIH R01EB032830." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.215, + 0.323, + 0.23 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.246, + 0.785, + 0.274 + ], + "angle": 0, + "content": "1. Alcaraz, J.M.L., Strodthoff, N.: Diffusion-based time series imputation and forecasting with structured state space models. arXiv preprint arXiv:2208.09399 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.275, + 0.785, + 0.315 + ], + "angle": 0, + "content": "2. Baranchuk, D., Rubachev, I., Voynov, A., Khrulkov, V., Babenko, A.: Label-efficient semantic segmentation with diffusion models. International Conference on Learning Representations (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.316, + 0.785, + 0.343 + ], + "angle": 0, + "content": "3. Batson, J., Royer, L.: Noise2self: Blind denoising by self-supervision. In: International Conference on Machine Learning. pp. 524-533. PMLR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.344, + 0.785, + 0.384 + ], + "angle": 0, + "content": "4. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play admm for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3(1), 84-98 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.385, + 0.785, + 0.425 + ], + "angle": 0, + "content": "5. Choi, J., Kim, S., Jeong, Y., Gwon, Y., Yoon, S.: Ilvr: Conditioning method for denoising diffusion probabilistic models. in 2021 ieee. In: CVF international conference on computer vision (ICCV). pp. 14347-14356 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.426, + 0.785, + 0.467 + ], + "angle": 0, + "content": "6. Chung, H., Kim, J., Mccann, M.T., Klasky, M.L., Ye, J.C.: Diffusion posterior sampling for general noisy inverse problems. International Conference on Learning Representations (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.468, + 0.785, + 0.509 + ], + "angle": 0, + "content": "7. Chung, H., Sim, B., Ryu, D., Ye, J.C.: Improving diffusion models for inverse problems using manifold constraints. Advances in Neural Information Processing Systems (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.51, + 0.785, + 0.564 + ], + "angle": 0, + "content": "8. Chung, H., Sim, B., Ye, J.C.: Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.565, + 0.785, + 0.605 + ], + "angle": 0, + "content": "9. Cohen, R., Blau, Y., Freedman, D., Rivlin, E.: It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. 
Advances in Neural Information Processing Systems 34, 18152-18164 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.606, + 0.785, + 0.647 + ], + "angle": 0, + "content": "0. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.648, + 0.785, + 0.674 + ], + "angle": 0, + "content": "1. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34, 8780-8794 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.675, + 0.785, + 0.715 + ], + "angle": 0, + "content": "2. Ghael, S., Sayeed, A.M., Baraniuk, R.G.: Improved wavelet denoising via empirical wiener filtering. In: SPIE Technical Conference on Wavelet Applications in Signal Processing (1997)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.716, + 0.785, + 0.757 + ], + "angle": 0, + "content": "3. Graikos, A., Malkin, N., Jojic, N., Samaras, D.: Diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems 35, 14715-14728 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.758, + 0.785, + 0.799 + ], + "angle": 0, + "content": "4. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th international conference on international conference on machine learning. pp. 399-406 (2010)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.799, + 0.785, + 0.84 + ], + "angle": 0, + "content": "5. Hammernik, K., Küstner, T., Yaman, B., Huang, Z., Rueckert, D., Knoll, F., Akçakaya, M.: Physics-driven deep learning for computational magnetic resonance imaging. IEEE Sig Proc Mag 40, 98-114 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.246, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.483, + 0.129 + ], + "angle": 0, + "content": "Y. U. Alçalar and M. Akçakaya" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "16. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.175, + 0.785, + 0.217 + ], + "angle": 0, + "content": "17. Hoogeboom, E., Nielsen, D., Jaini, P., Forre, P., Welling, M.: Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems 34, 12454-12465 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.217, + 0.785, + 0.257 + ], + "angle": 0, + "content": "18. Kadkhodaie, Z., Simoncelli, E.: Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems 34, 13242-13254 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.257, + 0.785, + 0.298 + ], + "angle": 0, + "content": "19. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.298, + 0.785, + 0.339 + ], + "angle": 0, + "content": "20. 
Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 4396-4405 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.339, + 0.785, + 0.366 + ], + "angle": 0, + "content": "21. Kawar, B., Elad, M., Ermon, S., Song, J.: Denoising diffusion restoration models. In: Advances in Neural Information Processing Systems (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.366, + 0.785, + 0.407 + ], + "angle": 0, + "content": "22. Knoll, F., Hammernik, K., Zhang, C., Moeller, S., Pock, T., Sodickson, D.K., Akçakaya, M.: Deep learning methods for parallel magnetic resonance imaging reconstruction. IEEE Sig Proc Mag 37, 128-140 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.407, + 0.785, + 0.447 + ], + "angle": 0, + "content": "23. Krull, A., Buchholz, T.O., Jug, F.: Noise2void-learning denoising from single noisy images. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2129-2137 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.447, + 0.785, + 0.488 + ], + "angle": 0, + "content": "24. Laumont, R., Bortoli, V.D., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using plug & play priors: when Langevin meets tweedie. SIAM Journal on Imaging Sciences 15(2), 701-737 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.488, + 0.785, + 0.515 + ], + "angle": 0, + "content": "25. Mardani, M., Song, J., Kautz, J., Vahdat, A.: A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.515, + 0.785, + 0.556 + ], + "angle": 0, + "content": "26. Moran, N., Schmidt, D., Zhong, Y., Coady, P.: Noisier2noise: Learning to denoise from unpaired noisy data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12064-12072 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.556, + 0.785, + 0.597 + ], + "angle": 0, + "content": "27. Qu, Y., Zheng, N., Li, C.: Using wavelet transform to estimate the eigenfunctions of karhunen-loeve expansion. In: Wavelet Analysis and Its Applications, and Active Media Technology, pp. 39-44. World Scientific (2004)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.597, + 0.785, + 0.638 + ], + "angle": 0, + "content": "28. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: International conference on machine learning. pp. 2256-2265. PMLR (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.638, + 0.785, + 0.678 + ], + "angle": 0, + "content": "29. Song, B., Kwon, S.M., Zhang, Z., Hu, X., Qu, Q., Shen, L.: Solving inverse problems with latent diffusion models via hard data consistency. arXiv preprint arXiv:2307.08123 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.678, + 0.785, + 0.706 + ], + "angle": 0, + "content": "30. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. International Conference on Learning Representations (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.706, + 0.785, + 0.746 + ], + "angle": 0, + "content": "31. Song, J., Vahdat, A., Mardani, M., Kautz, J.: Pseudoinverse-guided diffusion models for inverse problems. 
In: International Conference on Learning Representations (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.746, + 0.785, + 0.773 + ], + "angle": 0, + "content": "32. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.773, + 0.785, + 0.8 + ], + "angle": 0, + "content": "33. Song, Y., Shen, L., Xing, L., Ermon, S.: Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.8, + 0.785, + 0.84 + ], + "angle": 0, + "content": "34. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. International Conference on Learning Representations (2020)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.442, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Approximate Posterior Sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "35. Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., Hardt, M.: Test-time training with self-supervision for generalization under distribution shifts. In: International conference on machine learning. pp. 9229-9248. PMLR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.19, + 0.787, + 0.217 + ], + "angle": 0, + "content": "36. Taam, W., Yandell, B.S.: Approximate Diagonalization of Spatial Covariance. University of Wisconsin, Department of Statistics (1987)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.218, + 0.786, + 0.259 + ], + "angle": 0, + "content": "37. Tumanyan, N., Geyer, M., Bagon, S., Dekel, T.: Plug-and-play diffusion features for text-driven image-to-image translation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1921-1930 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.26, + 0.787, + 0.286 + ], + "angle": 0, + "content": "38. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9446-9454 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.288, + 0.786, + 0.314 + ], + "angle": 0, + "content": "39. Vincent, P.: A connection between score matching and denoising autoencoders. Neural computation 23(7), 1661-1674 (2011)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.787, + 0.355 + ], + "angle": 0, + "content": "40. Wang, Y., Yu, J., Zhang, J.: Zero-shot image restoration using denoising diffusion null-space model. The Eleventh International Conference on Learning Representations (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.357, + 0.787, + 0.397 + ], + "angle": 0, + "content": "41. Yaman, B., Hosseini, S.A.H., Moeller, S., Ellermann, J., Ugurbil, K., Akçakaya, M.: Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 84(6), 3172-3191 (Dec 2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.398, + 0.787, + 0.424 + ], + "angle": 0, + "content": "42. 
Yaman, B., Hosseini, S.A.H., Akçakaya, M.: Zero-shot self-supervised learning for MRI reconstruction. Proc ICLR (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.425, + 0.787, + 0.466 + ], + "angle": 0, + "content": "43. Yang, L., Ding, S., Cai, Y., Yu, J., Wang, J., Shi, Y.: Guidance with spherical gaussian constraint for conditional diffusion. In: International Conference on Machine Learning (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.467, + 0.787, + 0.508 + ], + "angle": 0, + "content": "44. Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., Gool, L.V.: Denoising diffusion models for plug-and-play image restoration. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE) (2023)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.508 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_origin.pdf b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..df5d0221155456984e88a23dc6aa88582f28df66 --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/f00e0c27-794a-46e9-88e3-064bc5a755d6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c11fb15d52db3d15882739f26bcfad5a644e4d89c55c1305b49dcd38faa3ccb9 +size 5021744 diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/full.md b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/full.md new file mode 100644 index 0000000000000000000000000000000000000000..67c8bf8a9dc212f8ad6ad2baab87485428b4be4f --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/full.md @@ -0,0 +1,363 @@ +# Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems + +Yasar Utku Alçalar and Mehmet Akçakaya + +University of Minnesota, Minneapolis {alcal029, akcakaya}@umn.edu + +Abstract. Diffusion models have emerged as powerful generative techniques for solving inverse problems. Despite their success in a variety of inverse problems in imaging, these models require many steps to converge, leading to slow inference time. Recently, there has been a trend in diffusion models for employing sophisticated noise schedules that involve more frequent iterations of timesteps at lower noise levels, thereby improving image generation and convergence speed. However, application of these ideas for solving inverse problems with diffusion models remain challenging, as these noise schedules do not perform well when using empirical tuning for the forward model log-likelihood term weights. To tackle these challenges, we propose zero-shot approximate posterior sampling (ZAPS) that leverages connections to zero-shot physics-driven deep learning. ZAPS fixes the number of sampling steps, and uses zero-shot training with a physics-guided loss function to learn log-likelihood weights at each irregular timestep. We apply ZAPS to the recently proposed diffusion posterior sampling method as baseline, though ZAPS can also be used with other posterior sampling diffusion models. 
We further approximate the Hessian of the logarithm of the prior using a diagonalization approach with learnable diagonal entries for computational efficiency. These parameters are optimized over a fixed number of epochs with a given computational budget. Our results for various noisy inverse problems, including Gaussian and motion deblurring, inpainting, and super-resolution show that ZAPS reduces inference time, provides robustness to irregular noise schedules and improves reconstruction quality. Code is available at https://github.com/ualcalar17/ZAPS. + +Keywords: Diffusion Models $\cdot$ Zero-Shot Learning $\cdot$ Inverse Problems $\cdot$ Plug-and-Play (PnP) Methods $\cdot$ Unrolled Networks $\cdot$ Bayesian Methods + +# 1 Introduction + +The forefront of deep generative models is now dominated by diffusion models [16, 28, 30, 32, 34] in the intricate task of image generation [11]. Their capabilities extend across various domains, including computer vision [2], natural language processing [17] and temporal data modeling [1]. Recently, diffusion models also showed great success in solving noiseless [5, 7, 33, 34] and noisy inverse problems [6, 21, 29, 31], owing to their capability to model complicated + +![](images/191e42ff0a9223d261c4890a46d71f7545d81a39d49b0f60235d0989fde8cef7.jpg) + +![](images/8c57d1fb38e0e1e293dca4c86c4217f235f55b044f8ab5ea97bb898d85e1410f.jpg) + +![](images/62b755b46a51233b3243b509c6b05f96d0a66578505214f2a96b90101bfefcdb.jpg) +Fig. 1: Representative results of our algorithm for four distinct noisy inverse problems $(\sigma = 0.05)$ , showing the ground truth (GT), measurement and reconstruction. + +![](images/1056d6052d01fdcd1d6b2babeeeb7b74701abf7c789a256e5a1ea8ab184b3cdc.jpg) + +high-dimensional distributions. Linear inverse problems utilize a known forward model given by + +$$ +\mathbf {y} = \mathbf {A} \mathbf {x} _ {0} + \mathbf {n}, +$$ + +and aim to deduce the underlying signal/image $\mathbf{x}_0\in \mathbb{R}^n$ from measurements $\mathbf{y}\in \mathbb{R}^{m}$ , where $\mathbf{n}\in \mathbb{R}^m$ is measurement noise. In practical situations, the forward operator $\mathbf{A}:\mathbb{R}^n\to \mathbb{R}^m$ is either incomplete or ill-conditioned, necessitating the use of prior information about the signal. Posterior sampling approaches use diffusion models as generative priors and incorporates information from both the data distribution and the forward physics model, allowing for sampling from the posterior distribution $p(\mathbf{x}|\mathbf{y})$ using the given measurement $\mathbf{y}$ [21]. In this context, using Bayes' rule, $p(\mathbf{x}|\mathbf{y}) = \frac{p(\mathbf{x})p(\mathbf{y}|\mathbf{x})}{p(\mathbf{y})}$ , the problem-specific score is + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {x} | \mathbf {y}) = \nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {x}) + \nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {y} | \mathbf {x}), \tag {1} +$$ + +where $\nabla_{\mathbf{x}_t}\log p(\mathbf{x})$ is approximated via the learned score model $s_\theta (\mathbf{x}_t,t)$ . Many of these strategies utilize a plug-and-play (PnP) approach, using a pre-trained unconditional diffusion model as a prior [4, 9, 13, 18, 24, 37], and integrate the forward model during inference to address various inverse problem tasks. + +The complexity for these approaches arises in obtaining the latter forward model log-likelihood term in Eq. (1), which guides the diffusion to a target + +class [11, 28]. 
While exact calculation is intractable, several approaches have been proposed to approximate this term. Among these, RED-diff [25] employs a variational sampler that uses a combination of measurement consistency loss and score matching regularization. Another technique, DSG [43], uses a spherical Gaussian constraint for denoising steps, allowing for larger step sizes. A class of methods utilize projections onto the convex measurement subspace after the unconditional update through score model [5, 8, 34]. Although these projections improve consistency between measurements and the sample, they are noted to lead to artifacts, such as boundary effects [7]. Thus, more recent approaches aimed to approximate the log-likelihood term in Eq. (1) different ways. Noting + +$$ +p _ {t} (\mathbf {y} \mid \mathbf {x} _ {t}) = \int_ {\mathbf {x} _ {0}} p \left(\mathbf {x} _ {0} \mid \mathbf {x} _ {t}\right) p \left(\mathbf {y} \mid \mathbf {x} _ {0}\right) d \mathbf {x} _ {0}, \tag {2} +$$ + +DPS [6] uses the posterior mean $\hat{\mathbf{x}}_0 = \hat{\mathbf{x}}_0(\mathbf{x}_t) \triangleq \mathbb{E}[\mathbf{x}_0|\mathbf{x}_t] = \mathbb{E}_{\mathbf{x}_0 \sim p(\mathbf{x}_0|\mathbf{x}_t)}[\mathbf{x}_0]$ , to approximate $p(\mathbf{y}|\mathbf{x}_t) = \mathbb{E}_{\mathbf{x}_0 \sim p(\mathbf{x}_0|\mathbf{x}_t)}[p(\mathbf{y}|\mathbf{x}_0)]$ as + +$$ +p (\mathbf {y} | \mathbf {x} _ {t}) = \mathbb {E} _ {\mathbf {x} _ {0} \sim p (\mathbf {x} _ {0} | \mathbf {x} _ {t})} [ p (\mathbf {y} | \mathbf {x} _ {0}) ] \simeq p \Big (\mathbf {y} | \mathbb {E} _ {\mathbf {x} _ {0} \sim p (\mathbf {x} _ {0} | \mathbf {x} _ {t})} [ \mathbf {x} _ {0} ] \Big) = p (\mathbf {y} | \hat {\mathbf {x}} _ {0}). +$$ + +Another technique, IIGDM [31] approximates Eq. (2) as a Gaussian centered around $\mathbf{A}\hat{\mathbf{x}}_0$ + +$$ +\int_ {\mathbf {x} _ {0}} p (\mathbf {x} _ {0} | \mathbf {x} _ {t}) p (\mathbf {y} | \mathbf {x} _ {0}) \mathbf {d} \mathbf {x} _ {0} \simeq \mathcal {N} (\mathbf {A} \hat {\mathbf {x}} _ {0}, r _ {t} ^ {2} \mathbf {A} \mathbf {A} ^ {\top} + \sigma_ {y} ^ {2} \mathbf {I}), \tag {3} +$$ + +and uses it for guidance. In these works, log-likelihood weights (or gradient step sizes), $\{\zeta_t\}$ are introduced to further control the reconstruction as + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {x} | \mathbf {y}) = \nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {x}) + \zeta_ {t} \nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {y} | \mathbf {x}). \tag {4} +$$ + +While DPS demonstrates high performance in various inverse problem tasks, it suffers from the drawback of requiring a large number of sampling steps, resulting in prolonged reconstruction time. IIGDM accelerates this process by adopting regular (linear) jumps approach across the schedule. However, utilizing more complicated schedules, where the jumps are irregular introduces a challenge, as it requires distinct log-likelihood weights, $\zeta_t$ , for each timestep. Heuristic adjustment of these weights is difficult and frequently leads to undesirable outcomes. In this work, by taking an inspiration from zero-shot/test-time self-supervised models [35,42] we propose to learn the log-likelihood weights for a fixed number of sampling steps and fine-tune them over a few epochs. It is crucial to note that fine-tuning DPS (or IIGDM) entails saving computational graphs for each unroll, leading to memory issues and slow backpropagation. 
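To make the weighted guidance in Eq. (4) and this graph-retention issue concrete, the following is a minimal PyTorch-style sketch of a single guided step in the spirit of DPS; `score_model`, the callable forward operator `A`, and all variable names are illustrative assumptions rather than any released implementation.

```python
import torch

def guided_step(x_prev, x_t, t, y, A, score_model, alpha_bar_t, zeta_t):
    """One weighted log-likelihood guidance step, cf. Eq. (4)/(13).

    x_prev is the unconditional ancestral update x'_{t-1}; A is assumed to be a
    callable linear forward operator; all names here are hypothetical.
    """
    x_t = x_t.detach().requires_grad_(True)
    s = score_model(x_t, t)                                        # s_theta(x_t, t)
    x0_hat = (x_t + (1.0 - alpha_bar_t) * s) / alpha_bar_t ** 0.5  # Tweedie, cf. Eq. (9)
    loss = (y - A(x0_hat)).pow(2).sum()                            # ||y - A x0_hat||_2^2
    # Differentiating through x0_hat backpropagates through the score network;
    # learning zeta_t end-to-end this way would additionally need create_graph=True
    # at every unrolled step, which is exactly the memory/speed bottleneck above.
    grad = torch.autograd.grad(loss, x_t)[0]
    return x_prev - zeta_t * grad                                  # cf. Eq. (13)
```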
Thus, we also propose to approximate the Hessian of the data probability using a wavelet-based diagonalization strategy [12], and learn these diagonal values for each timestep as well. Fig. 1 shows representative results for our method. Our key contributions include: + +- We introduce zero-shot approximate posterior sampling (ZAPS), leveraging zero-shot learning for dynamic automated hyperparameter tuning in the inference phase to improve solution of noisy inverse problems via diffusion + +models. This method fortifies the robustness of the sampling process, attaining a state-of-the-art performance [6, 21, 31] in sampling outcomes. To the best of our knowledge, our method is the first attempt to learn the log-likelihood weights for solving inverse problems via diffusion models by using a measurement-consistent loss when the sampling noise schedule consists of irregular jumps across timesteps. + +- We provide a well-designed approximation for the Hessian of the logarithm of the prior, enabling a computationally efficient and trainable posterior computation. +- We showcase the efficacy of incorporating a learnable log-likelihood weights for each diffusion step during the reverse diffusion process through both quantitative and qualitative assessments on FFHQ and ImageNet datasets. Our approach not only outperforms state-of-the-art, but it also substantially reduces the required number of sampling steps from 1000 to $\sim 20$ -to-30, facilitating convergence with fewer total neural function evaluations (NFEs). + +# 2 Related Works + +Diffusion Models. During training, diffusion models [16, 34] add Gaussian noise to an image with a fixed increasing variance schedule, e.g. linear or exponential, $\beta_{1},\beta_{2},\dots,\beta_{T}$ until pure noise is obtained, and learns a reverse diffusion process, where a neural network is trained to gradually remove noise and reconstruct the original image. Let $\mathbf{x}_0\sim p_{\mathrm{data}}(x)$ be samples from the data distribution, and $\mathbf{x}_{\{1:T\}}\in \mathbb{R}^d$ be noisy latent variables. By taking $\alpha_{t} = 1 - \beta_{t}$ and $\bar{\alpha}_{t} = \prod_{s = 1}^{t}\alpha_{s}$ , the Markovian forward process can be written as + +$$ +q \left(\mathbf {x} _ {t} \mid \mathbf {x} _ {0}\right) = \mathcal {N} \left(\mathbf {x} _ {t} \mid \sqrt {\bar {\alpha} _ {t}} \mathbf {x} _ {0}, (1 - \bar {\alpha} _ {t}) \mathbf {I}\right). \tag {5} +$$ + +By using the reparameterization trick and Eq. (5), $\mathbf{x}_t$ can be sampled as + +$$ +\mathbf {x} _ {t} \left(\mathbf {x} _ {0}, \epsilon\right) = \sqrt {\bar {\alpha} _ {t}} \mathbf {x} _ {0} + \sqrt {1 - \bar {\alpha} _ {t}} \epsilon \quad \text {w h e r e} \quad \epsilon \sim \mathcal {N} (\epsilon ; 0, \mathbf {I}). \tag {6} +$$ + +Consequently, denoising diffusion probabilistic models (DDPMs) [16] learns the reverse process by minimizing a lower bound on the log prior via: + +$$ +L _ {t} (\theta) = \mathbb {E} _ {t, \mathbf {x} _ {0}, \epsilon} \| \epsilon - \epsilon_ {\theta} \left(\mathbf {x} _ {t} \left(\mathbf {x} _ {0}, \epsilon\right), t\right) \| _ {2} ^ {2}. \tag {7} +$$ + +Furthermore, it can be shown that epsilon matching in Eq. 
(7) is analogous to the denoising score matching (DSM) [32,39] objective up to a constant: + +$$ +\min _ {\theta} \mathbb {E} _ {\mathbf {x} _ {t}, \mathbf {x} _ {0}, \epsilon} \| \mathbf {s} _ {\theta} (\mathbf {x} _ {t}, t) - \nabla_ {\mathbf {x} _ {t}} \log q (\mathbf {x} _ {t} | \mathbf {x} _ {0}) \| _ {2} ^ {2}, \tag {8} +$$ + +in which $\mathbf{s}_{\theta}(\mathbf{x}_t,t) = -\frac{\epsilon_{\theta}(\mathbf{x}_t,t)}{\sqrt{1 - \bar{\alpha}_t}}$. Using Tweedie's formula and Eq. (6), the posterior mean for $p(\mathbf{x}_0|\mathbf{x}_t)$ can be found as: + +$$ +\hat {\mathbf {x}} _ {0} = \frac {1}{\sqrt {\bar {\alpha} _ {t}}} \left(\mathbf {x} _ {t} + (1 - \bar {\alpha} _ {t}) \mathbf {s} _ {\theta} (\mathbf {x} _ {t}, t)\right). \tag {9} +$$ + +Sampling $\mathbf{x}_{t - 1}$ from $p(\mathbf{x}_{t - 1}|\mathbf{x}_t)$ can be done using ancestral sampling by iteratively computing: + +$$ +\mathbf {x} _ {t - 1} = \frac {1}{\sqrt {\alpha_ {t}}} \left(\mathbf {x} _ {t} - \frac {1 - \alpha_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \boldsymbol {\epsilon} _ {\theta} (\mathbf {x} _ {t}, t)\right) + \sigma_ {t} \mathbf {z}, \tag {10} +$$ + +where $\mathbf{z} \sim \mathcal{N}(0, \mathbf{I})$ and $\sigma_t^2 = \tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$. It is also worth noting that the DDPM is equivalent to the variance preserving stochastic differential equation (VP-SDE) formulation [34]. + +Solving Inverse Problems via Diffusion Models. When solving inverse problems via diffusion models, the main challenge is to find an approximation to the log-likelihood term, $\nabla_{\mathbf{x}_t}\log p(\mathbf{y}|\mathbf{x})$, as discussed earlier. One recent method, denoising diffusion restoration models (DDRM) [21], utilizes a spectral domain approach, allowing the incorporation of noise from the measurement domain into the spectral domain through singular value decomposition (SVD). However, the application of SVD is computationally expensive [6]. The manifold constrained gradient (MCG) [7] method applies projections after the MCG correction as: + +$$ +\mathbf {x} _ {t - 1} ^ {\prime} = f (\mathbf {x} _ {t}, \mathbf {s} _ {\theta}) - \zeta \nabla_ {\mathbf {x} _ {t}} \| \mathbf {K} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}) \| _ {2} ^ {2} + g (\mathbf {x} _ {t}) \mathbf {z}, \quad \mathbf {z} \sim \mathcal {N} (0, \mathbf {I}), \tag {11} +$$ + +$$ +\mathbf {x} _ {t - 1} = \mathbf {H} \mathbf {x} _ {t - 1} + \mathbf {b}, \tag {12} +$$ + +where $\zeta$ and $\mathbf{H}$ depend on the noise covariance. The projection in Eq. (12) maps estimates onto the measurement subspace, so they may fall off the data manifold [6]. Hence, DPS proposes to update without projections as: + +$$ +\mathbf {x} _ {t - 1} = \mathbf {x} _ {t - 1} ^ {\prime} - \zeta_ {t} \nabla_ {\mathbf {x} _ {t}} \| \mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0} \| _ {2} ^ {2}. \tag {13} +$$ + +Note that Eq. (13) is equivalent to Eq. (11) when $\mathbf{K} = \mathbf{I}$, and it reduces to the following when the forward operator is linear: + +$$ +\mathbf {x} _ {t - 1} = \mathbf {x} _ {t - 1} ^ {\prime} + \zeta_ {t} \frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} \mathbf {A} ^ {\top} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}). \tag {14} +$$ + +IIGDM [31], on the other hand, utilizes a Gaussian centered around $\hat{\mathbf{x}}_0$ that is defined in Eq.
(9) to obtain the following score approximation: + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p _ {t} (\mathbf {y} | \mathbf {x} _ {t}) \simeq \frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} \mathbf {A} ^ {\top} \left(r _ {t} ^ {2} \mathbf {A} \mathbf {A} ^ {\top} + \sigma_ {y} ^ {2} \mathbf {I}\right) ^ {- 1} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}). \tag {15} +$$ + +In cases where there is no measurement noise $(\sigma_y = 0)$ , Eq. (15) simplifies to: + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p _ {t} (\mathbf {y} | \mathbf {x} _ {t}) \simeq r _ {t} ^ {- 2} \frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} \mathbf {A} ^ {\dagger} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}) \tag {16} +$$ + +where $\mathbf{A}^{\dagger}$ denotes the Moore-Penrose pseudoinverse of $\mathbf{A}$ . We note that using Woodbury matrix identity (derived in SuppMat), one can simplify Eq. (15) to: + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p _ {t} (\mathbf {y} | \mathbf {x} _ {t}) \simeq \frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} \left(\mathbf {A} ^ {\top} \mathbf {A} + \eta \mathbf {I}\right) ^ {- 1} \mathbf {A} ^ {\top} \left(\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}\right), \quad \text {w h e r e} \eta = \frac {\sigma_ {y} ^ {2}}{r _ {t} ^ {2}}. \tag {17} +$$ + +From Eq. (17), the similarity between DPS and IIGDM updates can be seen, with $(\mathbf{A}^{\top}\mathbf{A} + \eta \mathbf{I})^{-1}$ term being the difference. Note the DPS update in Eq. (13) works with non-linear operators, while IIGDM's update does not rely on the differentiability of the forward operator, as long as a pseudo-inverse-like operation can be derived. + +Improved Irregular Noise Schedules for Image Generation. Diffusion models typically utilize well-defined fixed noise schedules, with examples including linear or exponential ones. Lately, more sophisticated methods have been developed that sweep across these schedules and take samples in irregular timesteps [11,19] for unconditional image generation. The idea behind this strategy hinges on more frequent sampling for lower noise levels, making it possible to use considerably less number of sampling steps. + +Most of the aforementioned studies that solve inverse problems via diffusion models used the same number of steps that the unconditional diffusion model was trained for [6,7,34]. Nonetheless, there has been a notable trend favoring shorter schedules characterized by linear jumps for inverse problems, where the log-likelihood weights were hand-tuned by trial-and-error [25,31] when using reduced number of steps. While these approaches have proven effective, they still require a large number of sampling steps or heuristic tuning of the log-likelihood weights, $\{\zeta_t\}$ in Eq. (4) to achieve good performance. The former issue leads to lengthy and potentially impractical computational times, while the latter issue results in generalizability difficulties for adoption at different measurement noise levels and variations in the measurement operators. Furthermore, the irregular jump strategy that has been powerful for image generation has not garnered significant attention for inverse problems, mainly due to the impracticality of empirically tuning the log-likelihood weights. 
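As an illustration of such an irregular sweep, the sketch below builds a timestep subset that samples more densely at low noise levels; reading a "15,10,5"-style schedule as 15/10/5 steps over equal thirds of the timestep range is an assumption made purely for illustration, not necessarily the exact construction used here.

```python
import numpy as np

def irregular_timesteps(counts=(15, 10, 5), T=1000):
    """Irregular schedule tau: counts[0] steps in the lowest-noise third of
    [0, T), counts[1] in the middle third, counts[2] in the highest-noise third.
    The equal-thirds reading of a "15,10,5" schedule is an illustrative assumption."""
    edges = np.linspace(0, T, num=len(counts) + 1, dtype=int)
    segments = [np.linspace(lo, hi - 1, num=n, dtype=int)
                for n, lo, hi in zip(counts, edges[:-1], edges[1:])]
    tau = np.unique(np.concatenate(segments))
    return tau[::-1]  # reverse diffusion visits high-noise timesteps first

# irregular_timesteps() -> 30 timesteps in total, densest near t = 0 (low noise)
```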
Thus, a method that automatically selects and adjusts log-likelihood weights based on the provided measurements for arbitrary noise schedules, instead of requiring manual tuning, holds significant potential for improving robustness and image quality. + +# 3 Methodology + +# 3.1 Zero-shot Fine-Tuning of Log-Likelihood Weights + +In this work, we propose a robust automated approach for setting the log-likelihood weights at each timestep for arbitrary noise sampling schedules to improve posterior sampling with the given measurements during inference. This allows for a stable reconstruction for different sweeps across noise schedules. Furthermore, the weights themselves are image-specific, which improves the performance compared to the former approaches. For estimating the likelihood in Eq. (1), we use the update in DPS [6]: + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {y} | \mathbf {x} _ {t}) \simeq \nabla_ {\mathbf {x} _ {t}} \| \mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0} \| _ {2} ^ {2} = - \frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} \mathbf {A} ^ {\top} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}), \tag {18} +$$ + +![](images/242ba34d8be7399d5f13e12aca23330871721cfa86e1c2fb615b139b45b810be.jpg) +Fig. 2: Our zero-shot approximate posterior sampling (ZAPS) approach unrolls the sampling process for a fixed number of $S$ steps for arbitrary/irregular noise schedules, alternating between score model sampling (SMS) and likelihood guidance (LG). Our zero-shot fine-tuning approach has two key components: 1) the Hessian of the log prior is approximated using a discrete wavelet transform diagonalization technique, and 2) both the diagonal matrices $\{\mathbf{D}_t\}$ and the log-likelihood weights $\{\zeta_t\}$ are updated during fine-tuning. The fine-tuning is done for a fixed number of epochs with a given NFE budget, yielding a faster and more robust adaptive inverse problem solver. + +although as noted before, the IIGDM [31] update in Eq. (17) is also similar. Thus, we emphasize that while we chose DPS as the baseline for its versatility in inverse problems, our ZAPS strategy is applicable to other diffusion models for inverse problems. Recalling the definition of $\hat{\mathbf{x}}_0$ in Eq. (9), we note + +$$ +\frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} = \frac {1}{\sqrt {\bar {\alpha} _ {t}}} \left(\mathbf {I} + (1 - \bar {\alpha} _ {t}) \frac {\partial \mathbf {s} _ {\theta} (\mathbf {x} _ {t} , t)}{\partial \mathbf {x} _ {t}}\right). \tag {19} +$$ + +Thus, ignoring the calculation and storage of the matrix $\frac{\partial\mathbf{s}_{\theta}(\mathbf{x}_t,t)}{\partial\mathbf{x}_t}$ for now, one needs to fine-tune the log-likelihood weights $\{\zeta_t\}$ in + +$$ +\nabla_ {\mathbf {x} _ {t}} \log p (\mathbf {x}) + \zeta_ {t} \frac {1}{\sqrt {\bar {\alpha} _ {t}}} \left(\mathbf {I} + (1 - \bar {\alpha} _ {t}) \frac {\partial \mathbf {s} _ {\theta} (\mathbf {x} _ {t} , t)}{\partial \mathbf {x} _ {t}}\right) \mathbf {A} ^ {\top} (\mathbf {y} - \mathbf {A} \hat {\mathbf {x}} _ {0}). \tag {20} +$$ + +This is done based on the concept of algorithm unrolling [14, 15, 22] in physics-driven deep learning by fixing the number of sampling steps $T$. Then the whole posterior sampling process is described as alternating between DDPM sampling using the pre-trained unconditional score model, followed by the log-likelihood term guidance in Eq. (20) for $T$ steps.
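In code, such an unrolled sampler can be organized as a module whose only trainable quantities are the per-step weights and the per-step diagonals of Fig. 2; the shapes, names, and initial values in this sketch are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ZAPSParameters(nn.Module):
    """Trainable quantities of the unrolled sampler: one log-likelihood weight
    per unrolled step and one vector of wavelet-domain diagonal values per step
    (cf. Fig. 2). Shapes and initial values are illustrative assumptions."""
    def __init__(self, num_steps, num_wavelet_coeffs, zeta_init=1.0):
        super().__init__()
        self.zeta = nn.Parameter(torch.full((num_steps,), zeta_init))         # {zeta_t}
        self.diag = nn.Parameter(torch.zeros(num_steps, num_wavelet_coeffs))  # {D_t}

def freeze_score_model(score_model):
    # The pre-trained unconditional score network is used as-is, with no fine-tuning.
    for p in score_model.parameters():
        p.requires_grad_(False)
    return score_model.eval()
```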
This "unrolled" network is fine-tuned end-to-end, where the only updates are made to $\{\zeta_t\}$ and no fine-tuning is performed on the unconditional score function, $\mathbf{s}_{\theta}(\mathbf{x}_t,t)$ . This also alleviates the need for backpropagation across the score function network, leading to further savings in computational time. The fine-tuning is performed using a physics-inspired loss + +Algorithm 1 ZAPS: Zero-Shot Approximate Posterior Sampling +Require: $T,\mathbf{y},\{\tilde{\sigma}_i\}_{i = 1}^T$ orthogonal DWT (W) +1: $\mathbf{x}_T\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ +2: $\tau \subset [1,\dots,T]$ extending over a length of $S < T$ +3: for epoch in range(epochs) do +4: for $i = S,\ldots ,1$ do +5: $\hat{\mathbf{s}}\gets \mathbf{s}_{\theta}(\mathbf{x}_{\tau_i},\tau_i)$ ▷ Score computation +6: $\hat{\mathbf{x}}_0\leftarrow \frac{1}{\sqrt{\bar{\alpha}_{\tau_i}}} (\mathbf{x}_{\tau_i} + (1 - \bar{\alpha}_{\tau_i})\hat{\mathbf{s}})$ Tweedie denoising +7: $\mathbf{z}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$ if $\tau_{i} > 1$ , else $\mathbf{z} = \mathbf{0}$ +8: $\mathbf{x}_{\tau_i - 1}'\gets \frac{\sqrt{\alpha_{\tau_i}}(1 - \bar{\alpha}_{\tau_i - 1})}{1 - \bar{\alpha}_{\tau_i}}\mathbf{x}_{\tau_i} + \frac{\sqrt{\bar{\alpha}_{\tau_i - 1}}\beta_{\tau_i}}{1 - \bar{\alpha}_{\tau_i}}\hat{\mathbf{x}}_0 + \tilde{\sigma}_{\tau_i}\mathbf{z}$ +9: $\mathbf{x}_{\tau_{i - 1}}\gets \mathbf{x}_{\tau_{i - 1}}' + \zeta_{\tau_i}\left(\left(\frac{1}{\sqrt{\bar{\alpha}_{\tau_i}}}\Bigl {(}\mathbf{I} + (1 - \bar{\alpha}_{\tau_i})\mathbf{WD}_{\tau_i}\mathbf{W}^\top \Bigr)\right)\cdot \mathbf{A}^\top (\mathbf{y} - \mathbf{A}\hat{\mathbf{x}}_0)\right)$ +10: end for +11: Update network parameters $\{\zeta_t\}$ and $\{\mathbf{D}_t\}$ +12: end for +13: return ${\bf x}_0$ + +function that evaluates the consistency of the final estimate and the measurements: $\mathcal{L}(\mathbf{y},\mathbf{x}_0) = ||\mathbf{y} - \mathbf{A}\mathbf{x}_0||_2^2$ . High-level explanation for our algorithm is given in Fig. 2. + +# 3.2 Approximation for the Hessian of the Log Prior + +Implementing the zero-shot update for Eq. (20) poses various challenges, since backpropagation through the unrolled network to update all $\{\zeta_t\}$ requires another backpropagation through the Jacobian of the score function at each time step. This can only be done by retaining the computational graphs that are created when calculating the Jacobian term in Eq. (20), which quickly explodes memory requirements, especially when the number of sampling steps increases. Also, backpropagating through multiple graphs at the end to only update the log-likelihood weights is time-inefficient and causes prolonged sampling times. Hence, we propose to approximate the Jacobian using inspirations from wavelet-based signal processing techniques and propose to learn this approximation to improve the overall outcome. Noting that $\mathbf{s}_{\theta}(\mathbf{x}_t,t)$ in Eq. (19) is an approximation of the log-gradient of the true prior $p(\mathbf{x})$ , we have + +$$ +\frac {\partial \hat {\mathbf {x}} _ {0}}{\partial \mathbf {x} _ {t}} = \frac {1}{\sqrt {\bar {\alpha} _ {t}}} \left(\mathbf {I} + \left(1 - \bar {\alpha} _ {t}\right) \frac {\partial^ {2} \log p _ {t} (\mathbf {x} _ {t})}{\partial \mathbf {x} _ {t} ^ {2}}\right). \tag {21} +$$ + +In order to make a backpropagation to update these weights, one needs to calculate the Hessian matrix, $\frac{\partial^2\log p_t(\mathbf{x}_t)}{\partial\mathbf{x}_t^2}$ given in Eq. (21). 
This matrix is the negative of the observed Fisher information matrix, whose expected value is the Fisher information matrix. It is also known that in the limit, it approximates + +the inverse covariance matrix of the maximum likelihood estimator. Furthermore, under mild assumptions about continuity of the prior, the observed Fisher information matrix is symmetric. Thus, an appropriate decorrelating unitary matrix can be used to diagonalize it. While finding the desired unitary matrix is equally time-consuming as calculating this Hessian, several pre-determined unitary transforms have been proposed for decorrelation in the signal processing community for different applications [12, 27, 36]. Of particular note is the use of unitary wavelet transforms for Wiener filtering [12], where these transforms were utilized for their tendency to decorrelate data, i.e. approximate the Karhunen-Loeve transform [27]. In this work, we also use these decorrelating properties to approximately diagonalize the Hessian of the log prior, $\frac{\partial^2\log p_t(\mathbf{x}_t)}{\partial\mathbf{x}_t^2}$ using fixed orthogonal discrete wavelet transforms (DWT): + +$$ +\frac {\partial^ {2} \log p _ {t} (\mathbf {x} _ {t})}{\partial \mathbf {x} _ {t} ^ {2}} \simeq \mathbf {W D} _ {t} \mathbf {W} ^ {\top}, \tag {22} +$$ + +where $\mathbf{W}$ is an orthogonal DWT. By making this approximation, backpropagation through the score model can also be avoided, and only the diagonal values in distinct $\{\mathbf{D}_t\}$ matrices needs to be learned. Our final algorithm to sample from pure noise with fine-tuning is given in Algorithm 1. + +# 4 Evaluation + +# 4.1 Experimental Setup and Implementation Details + +We comprehensively evaluated our method, examining its performance through both qualitative and quantitative analyses using FFHQ [20] and ImageNet [10] datasets with size $256 \times 256 \times 3$ . Pre-trained unconditional diffusion models trained on FFHQ and ImageNet were taken from [5] and [11] respectively, and used without retraining. For our experiments, we sampled 1000 images from FFHQ and ImageNet validation sets. All images underwent pre-processing to be normalized in the range [0, 1]. During all the evaluations, a Gaussian measurement noise with $\sigma = 0.05$ was used. For the orthogonal DWT, Daubechies 4 wavelet was utilized. For our quantitative evaluations, we employed 30 sampling steps with a schedule of "15,10,5", and 10 epochs for fine-tuning, resulting in a total of 300 NFEs. As noted in [11], superior schedules may exist but it requires substantial computational time to try out all possible schedules. Thus, we opted a schedule that is simple, and samples more frequently at the lower noise levels [11]. More details about the network architectures and hyperparameter choices are given in SuppMat. + +# 4.2 Experiments on Linear Inverse Problems + +Problem Setup. We focused on the following linear inverse problems: (1) Gaussian deblurring, (2) inpainting, (3) motion deblurring, (4) super-resolution. For + +![](images/68ac876e9b20b87d5143cb34697d514dd24508f841e050a6921908eb902ce19e.jpg) +Fig. 3: Representative images using various methods for solving Gaussian deblurring, motion deblurring and super-resolution $(\times 4)$ tasks. Proposed method qualitatively improves upon each method, including the baseline state-of-the-art DPS. 
# 4.2 Experiments on Linear Inverse Problems

![](images/68ac876e9b20b87d5143cb34697d514dd24508f841e050a6921908eb902ce19e.jpg)
Fig. 3: Representative images using various methods for solving Gaussian deblurring, motion deblurring and super-resolution $(\times 4)$ tasks. The proposed method qualitatively improves upon each comparison method, including the baseline state-of-the-art DPS.

![](images/b8884b8c36364b51787cff387317de22dfd9fb090561818691373d982391b917.jpg)

Problem Setup. We focused on the following linear inverse problems: (1) Gaussian deblurring, (2) inpainting, (3) motion deblurring, and (4) super-resolution. For Gaussian deblurring, we considered a kernel of size $61 \times 61$ with a standard deviation $\sigma = 3.0$. For inpainting, we considered two scenarios: randomly masking out $70\%$ of the pixels, and masking a $128 \times 128$ box region, both applied uniformly across all three channels. For motion blur, we generated the blur kernel via the code¹, with a $61 \times 61$ kernel size and 0.5 intensity, as in [6]. Finally, for super-resolution, we considered bicubic downsampling. All measurements were obtained by applying the forward model to the ground truth image (a toy sketch of this simulation for the Gaussian deblurring task is given after Tab. 1).

Comparison Methods. We compared our method with score-SDE [5, 8, 34], manifold constrained gradients (MCG) [7], denoising diffusion restoration models (DDRM) [21], diffusion posterior sampling (DPS) [6] and pseudo-inverse guided diffusion models (IIGDM) [31]. We note that our implementation of score-SDE follows the same strategy as presented in [6]. We refer to the methods that iteratively apply projections onto convex sets (POCS) as score-SDE. Additional comparisons to DDNM [40] and DiffPIR [44] are also provided in SuppMat. All methods were implemented using their respective public repositories.

Quantitative and Qualitative Results. We evaluated our method quantitatively using the learned perceptual image patch similarity (LPIPS) distance, the structural similarity index (SSIM), and the peak signal-to-noise ratio (PSNR). Representative results in Fig. 3 show that DDRM yields blurry results in the Gaussian deblurring task. DPS improves sharpness across these distinct inverse problem tasks; ZAPS yields comparable sharpness while exhibiting higher similarity to the ground truth, all within a third of the total NFEs.

Representative inpainting results in Fig. 4 show that ZAPS substantially improves upon DDRM, a method that uses slightly fewer (20) timesteps, and achieves better similarity to the ground truth and better sharpness than DPS, which uses almost $33 \times$ more steps. Similarly, when compared with IIGDM, our method gives comparable results even though $3 - 4 \times$ fewer steps are used. The zoomed insets highlight subtle improvements afforded by our method compared to the state-of-the-art DPS and IIGDM, as seen around the eyes.

![](images/2835cfcdf4396fed5657ea1133a42e1902697ccda5b423764b6d789429bc3f45.jpg)
Fig. 4: Illustrative images using state-of-the-art methods for random (70%) and box $(128 \times 128)$ inpainting. The proposed method improves upon DDRM, while achieving similar performance to IIGDM and DPS, with subtle improvements shown in zoomed insets.

Tab. 1 and Tab. 2 show the three quantitative metrics for all methods, while Tab. 3 summarizes their computational complexity. ZAPS outperforms score-SDE, MCG, and our baseline state-of-the-art comparison, DPS, in both computational complexity and quantitative performance, yielding faster and improved reconstructions. Although DDRM and IIGDM surpass ZAPS in terms of computational complexity, ZAPS outperforms both methods quantitatively in terms of all three metrics. Furthermore, IIGDM could not be implemented reliably for several linear

Table 1: Quantitative results for Gaussian deblurring and random inpainting (70%) on the FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task.
| Method | Gaussian Deblurring | | | Random Inpainting | | |
| --- | --- | --- | --- | --- | --- | --- |
| | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ |
| DPS [6] | 0.128 | 0.718 | 25.20 | 0.104 | 0.811 | 28.03 |
| MCG [7] | 0.558 | 0.509 | 15.12 | 0.145 | 0.754 | 25.33 |
| IIGDM [31] | - | - | - | 0.086 | 0.842 | 26.62 |
| DDRM [21] | 0.183 | 0.702 | 24.42 | 0.198 | 0.741 | 25.17 |
| Score-SDE [5,8,34] | 0.571 | 0.496 | 15.17 | 0.224 | 0.718 | 24.44 |
| ZAPS (Ours) | 0.121 | 0.757 | 26.06 | 0.078 | 0.813 | 27.79 |
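As referenced in the problem setup, a toy sketch of how a degraded measurement can be simulated for the Gaussian deblurring task is shown below. The helper `gaussian_deblur_measurement` is hypothetical, and SciPy's separable Gaussian filter (with `truncate=10` to roughly match a $61 \times 61$ support at $\sigma = 3$) stands in for the explicit blur kernel used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_deblur_measurement(x, kernel_sigma=3.0, noise_sigma=0.05, seed=0):
    """Simulate y = A x + n for the Gaussian deblurring task.

    x: ground-truth image in [0, 1], shape (H, W, 3). A is approximated by a
    channel-wise Gaussian filter; n is white Gaussian noise with std 0.05.
    """
    rng = np.random.default_rng(seed)
    blurred = np.stack(
        [gaussian_filter(x[..., c], sigma=kernel_sigma, mode="reflect", truncate=10.0)
         for c in range(x.shape[-1])], axis=-1)
    return blurred + noise_sigma * rng.standard_normal(blurred.shape)

# toy usage on a random "image"
x = np.random.rand(256, 256, 3)
y = gaussian_deblur_measurement(x)
```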
Table 2: Quantitative results for motion deblurring and super-resolution $(\times 4)$ on the FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task.
| Method | Motion Deblurring | | | Super-Resolution (×4) | | |
| --- | --- | --- | --- | --- | --- | --- |
| | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ |
| DPS [6] | 0.143 | 0.704 | 24.03 | 0.168 | 0.719 | 23.86 |
| MCG [7] | 0.565 | 0.497 | 15.10 | 0.229 | 0.623 | 20.74 |
| IIGDM [31] | - | - | - | 0.131 | 0.760 | 24.48 |
| DDRM [21] | - | - | - | 0.175 | 0.711 | 24.55 |
| Score-SDE [5,8,34] | 0.546 | 0.488 | 15.02 | 0.257 | 0.609 | 19.13 |
| ZAPS (Ours) | 0.141 | 0.709 | 24.16 | 0.104 | 0.768 | 26.63 |
inverse problems related to deblurring. We also note that the parameters in ZAPS are adaptive, meaning one can reach the same computational complexity by adjusting the total number of epochs or steps, trading off a slight decrease in performance, as studied in Sec. 4.3.

# 4.3 Ablation Studies

We conducted three distinct ablation studies to investigate critical aspects of our algorithm's performance. The first ablation study compared combinations of different timesteps and epochs under a fixed NFE budget, exploring the influence of specific combinations on the model's behavior. Specifically, we examined the reconstruction capabilities of the model qualitatively and quantitatively by varying the number of model timesteps, $S \in \{20, 30, 60\}$. For a fixed NFE budget of 300, these corresponded to 15, 10 and 5 epochs of zero-shot fine-tuning, respectively. Fig. 5a shows the final estimates, while Fig. 5b and Fig. 5c depict the corresponding loss and PSNR curves for each combination (further quantitative results are in SuppMat). Notably, all the estimates are similar, though sharpness improves slightly as $S$ increases. However, the trade-off for choosing a high $S$ is the low number of epochs. Especially in cases where the measurement system or noise level changes, this makes fine-tuning susceptible to the initialization of the hyperparameters, as it is more difficult to converge to a good solution in $\sim 5$ epochs. Thus, for improved generalizability and robustness, we opted to use $S = 30$ and 10 epochs for our database testing.

Our second ablation study analyzed the performance of ZAPS with respect to other state-of-the-art methods when all methods used the same NFE.

Table 3: Computational costs of methods in terms of NFEs and wall-clock time (WCT).
| | DPS [6] | MCG [7] | IIGDM [31] | DDRM [21] | Score-SDE [34] | ZAPS |
| --- | --- | --- | --- | --- | --- | --- |
| Total NFEs | 1000 | 1000 | 100 | 20 | 1000 | 300 |
| WCT (s) | 47.25 | 48.83 | 4.53 | 2.12 | 23.47 | 14.71 |
![](images/f64c83c40a88994e2dfaca8aa2cf03c018453506feb33fdb601dd36a44708b88.jpg)

![](images/34247b72ece3fd158f11739e848980b0f8f1537557345aff54081f9874cd768f.jpg)

![](images/b5ec4d89e67965124a53b67f88b3f1dcf59387944f65e4d7ef87eb1ce5daac42.jpg)

![](images/50569c3f807ccef899a707036a78ec87ac4d452541ecc509f33c5e90d7a63822.jpg)

![](images/599f2ad08f66414ab14bbd05dc763ce73f2e572715b18cd27c6c97d2b6c2e7a4.jpg)

![](images/4a77e8cb353e83e072792abd2c448dd80a1b0c817e8cbb7c219f9385ea928ca7.jpg)
(a) Reconstructions using ZAPS for the super-resolution $(\times 4)$ task with different total timesteps-epochs combinations for the same $\mathrm{NFE} = 300$.
(b) Loss graphs for each combination.
Fig. 5: Study of different epoch and sampling-step combinations with fixed NFE. Results show similar quality across combinations, with lower-timestep settings starting from higher loss/lower PSNR but converging to similar values.

![](images/e48bd80e110129c20ea6cd0278a99d0542409bd5a9b310897a5db284d56c8db7.jpg)
(c) PSNR graphs for each combination.

We investigated total NFEs of 100, 300, and 500 to demonstrate the robustness of our approach, given its adaptable parameters, as previously discussed. For 100 NFEs, we applied 20 steps (schedule = "10,7,3") with 5 epochs, whereas for 300 and 500 NFEs, we applied 30 steps (schedule = "15,10,5") and 50 steps (schedule = "30,15,5"), respectively, for 10 epochs. We also implemented ZAPS with uniformly spaced noise schedules to highlight the benefits of the proposed irregular noise schedules. As seen in Tabs. 4 and 5, ZAPS with irregular noise schedules outperforms the state-of-the-art methods for NFE budgets of 100, 300 and 500 in the super-resolution and random inpainting tasks. We note that we could not perform this test for the deblurring experiments, as IIGDM could not be implemented reliably across the database, as previously mentioned. We also note that the difference between irregular and uniform noise schedules for ZAPS is

Table 4: Quantitative results for super-resolution $(\times 4, \sigma = 0.05)$ on the FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined.
| Method | NFE=100 | | | NFE=300 | | | NFE=500 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ |
| DPS [6] | 0.344 | 0.478 | 16.96 | 0.257 | 0.577 | 20.01 | 0.218 | 0.623 | 21.52 |
| IIGDM [31] | 0.131 | 0.760 | 24.48 | 0.117 | 0.758 | 24.80 | 0.123 | 0.762 | 24.25 |
| ZAPS (Uniform) | 0.108 | 0.749 | 25.92 | 0.119 | 0.729 | 26.29 | 0.115 | 0.756 | 25.63 |
| ZAPS (Irregular) | 0.106 | 0.741 | 26.08 | 0.104 | 0.768 | 26.63 | 0.095 | 0.770 | 26.26 |
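For reference, the NFE accounting behind this matched-budget comparison can be reproduced in a few lines, together with one natural choice of uniformly spaced baseline grid; the exact uniform spacing used for the "ZAPS (Uniform)" rows is an assumption here.

```python
import numpy as np

# Matched-NFE configurations from the second ablation (steps x epochs = NFE budget).
configs = {100: ("10,7,3", 5), 300: ("15,10,5", 10), 500: ("30,15,5", 10)}

for budget, (schedule, epochs) in configs.items():
    steps = sum(int(c) for c in schedule.split(","))
    assert steps * epochs == budget
    # uniform baseline: the same number of steps, evenly spaced over the 1000 DDPM timesteps
    uniform = np.linspace(0, 999, steps).round().astype(int)
    print(budget, schedule, "->", steps, "steps,", epochs, "epochs; uniform grid:", uniform[:4], "...")
```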
Table 5: Quantitative results for random inpainting $(70\%, \sigma = 0.05)$ on the FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined.
| Method | NFE=100 | | | NFE=300 | | | NFE=500 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ | LPIPS↓ | SSIM↑ | PSNR↑ |
| DPS [6] | 0.268 | 0.593 | 20.01 | 0.189 | 0.704 | 23.74 | 0.152 | 0.754 | 25.59 |
| IIGDM [31] | 0.086 | 0.842 | 26.62 | 0.080 | 0.849 | 25.06 | 0.082 | 0.845 | 24.94 |
| ZAPS (Uniform) | 0.122 | 0.780 | 26.20 | 0.127 | 0.773 | 25.87 | 0.080 | 0.791 | 26.94 |
| ZAPS (Irregular) | 0.085 | 0.794 | 27.03 | 0.078 | 0.813 | 27.79 | 0.071 | 0.818 | 28.11 |
less pronounced for 100 NFEs, but the advantage of irregular schedules becomes apparent for 300 and 500 NFEs.

The final ablation study, exploring the benefits of using distinct weights $\zeta_t$ for each timestep versus a shared weight $\zeta$ for every step, is provided in SuppMat.

# 4.4 Limitations

The loss function we use, $\mathcal{L}(\mathbf{y},\mathbf{x}_0) = ||\mathbf{y} - \mathbf{A}\mathbf{x}_0||_2^2$, resembles a deep image prior-like loss [38]. However, note that there is a subtle difference in our context, where it corresponds to the log-likelihood of $p(\mathbf{y}|\mathbf{x}_0)$, which is different from the (approximate) log-likelihood guidance term $p(\mathbf{y}|\mathbf{x}_t)$ used at each time-step. This allows for more robustness to the overfitting that is typically observed in DIP-type methods. Further overfitting avoidance measures can be taken via data-splitting [3, 23, 26, 41, 42], though this was not necessary for the small number of epochs used for fine-tuning. Additionally, while our approximation in Eq. (22) produces competitive results, it is important to keep in mind that wavelets may not fully decorrelate the observed Fisher information matrix. Finally, we note that while we chose DPS as a baseline for its versatility in inverse problem tasks, the adaptive weighting strategy in ZAPS, as well as our Hessian approximation, are applicable to other posterior sampling diffusion models for inverse problems.

# 5 Conclusion

In this work, we proposed a novel approach named zero-shot approximate posterior sampling (ZAPS), which harnesses zero-shot learning for dynamic, automated hyperparameter tuning during the inference phase to enhance reconstruction quality when solving noisy linear inverse problems with diffusion models. In particular, learning the log-likelihood weights facilitates the use of more complex and irregular noise schedules, whose feasibility for inverse problems was shown, to the best of our knowledge, for the first time in this paper. These irregular noise schedules enabled high-quality reconstructions with $20 - 50 \times$ fewer timesteps. When the number of fine-tuning epochs is also taken into account, our approach results in a speedup of approximately $3 \times$ compared to state-of-the-art methods such as DPS. Quantitative and qualitative evaluations on natural images illustrate our method's ability to attain state-of-the-art performance across diverse inverse problem tasks.

# Acknowledgements

This work was partially supported by NIH R01HL153146 and NIH R01EB032830.

# References

1. Alcaraz, J.M.L., Strodthoff, N.: Diffusion-based time series imputation and forecasting with structured state space models. arXiv preprint arXiv:2208.09399 (2022)
2. Baranchuk, D., Rubachev, I., Voynov, A., Khrulkov, V., Babenko, A.: Label-efficient semantic segmentation with diffusion models. International Conference on Learning Representations (2021)
3. Batson, J., Royer, L.: Noise2Self: Blind denoising by self-supervision. In: International Conference on Machine Learning. pp. 524-533. PMLR (2019)
4. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3(1), 84-98 (2016)
5. Choi, J., Kim, S., Jeong, Y., Gwon, Y., Yoon, S.: ILVR: Conditioning method for denoising diffusion probabilistic models. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV). pp. 14347-14356 (2021)
6. Chung, H., Kim, J., McCann, M.T., Klasky, M.L., Ye, J.C.: Diffusion posterior sampling for general noisy inverse problems. International Conference on Learning Representations (2023)
7. Chung, H., Sim, B., Ryu, D., Ye, J.C.: Improving diffusion models for inverse problems using manifold constraints. Advances in Neural Information Processing Systems (2022)
8. Chung, H., Sim, B., Ye, J.C.: Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022)
9. Cohen, R., Blau, Y., Freedman, D., Rivlin, E.: It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Advances in Neural Information Processing Systems 34, 18152-18164 (2021)
10. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248-255. IEEE (2009)
11. Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780-8794 (2021)
12. Ghael, S., Sayeed, A.M., Baraniuk, R.G.: Improved wavelet denoising via empirical Wiener filtering. In: SPIE Technical Conference on Wavelet Applications in Signal Processing (1997)
13. Graikos, A., Malkin, N., Jojic, N., Samaras, D.: Diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems 35, 14715-14728 (2022)
14. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th International Conference on Machine Learning. pp. 399-406 (2010)
15. Hammernik, K., Küstner, T., Yaman, B., Huang, Z., Rueckert, D., Knoll, F., Akçakaya, M.: Physics-driven deep learning for computational magnetic resonance imaging. IEEE Sig Proc Mag 40, 98-114 (2023)
16. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840-6851 (2020)
17. Hoogeboom, E., Nielsen, D., Jaini, P., Forre, P., Welling, M.: Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems 34, 12454-12465 (2021)
18. Kadkhodaie, Z., Simoncelli, E.: Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems 34, 13242-13254 (2021)
19. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)
20. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 4396-4405 (2019)
21. Kawar, B., Elad, M., Ermon, S., Song, J.: Denoising diffusion restoration models. In: Advances in Neural Information Processing Systems (2022)
22. Knoll, F., Hammernik, K., Zhang, C., Moeller, S., Pock, T., Sodickson, D.K., Akçakaya, M.: Deep learning methods for parallel magnetic resonance imaging reconstruction. IEEE Sig Proc Mag 37, 128-140 (2020)
23. Krull, A., Buchholz, T.O., Jug, F.: Noise2Void - learning denoising from single noisy images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2129-2137 (2019)
24. 
Laumont, R., Bortoli, V.D., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using plug & play priors: when Langevin meets tweedie. SIAM Journal on Imaging Sciences 15(2), 701-737 (2022) +25. Mardani, M., Song, J., Kautz, J., Vahdat, A.: A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391 (2023) +26. Moran, N., Schmidt, D., Zhong, Y., Coady, P.: Noisier2noise: Learning to denoise from unpaired noisy data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12064-12072 (2020) +27. Qu, Y., Zheng, N., Li, C.: Using wavelet transform to estimate the eigenfunctions of karhunen-loeve expansion. In: Wavelet Analysis and Its Applications, and Active Media Technology, pp. 39-44. World Scientific (2004) +28. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: International conference on machine learning. pp. 2256-2265. PMLR (2015) +29. Song, B., Kwon, S.M., Zhang, Z., Hu, X., Qu, Q., Shen, L.: Solving inverse problems with latent diffusion models via hard data consistency. arXiv preprint arXiv:2307.08123 (2023) +30. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. International Conference on Learning Representations (2020) +31. Song, J., Vahdat, A., Mardani, M., Kautz, J.: Pseudoinverse-guided diffusion models for inverse problems. In: International Conference on Learning Representations (2022) +32. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019) +33. Song, Y., Shen, L., Xing, L., Ermon, S.: Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005 (2021) +34. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. International Conference on Learning Representations (2020) + +35. Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., Hardt, M.: Test-time training with self-supervision for generalization under distribution shifts. In: International conference on machine learning. pp. 9229-9248. PMLR (2020) +36. Taam, W., Yandell, B.S.: Approximate Diagonalization of Spatial Covariance. University of Wisconsin, Department of Statistics (1987) +37. Tumanyan, N., Geyer, M., Bagon, S., Dekel, T.: Plug-and-play diffusion features for text-driven image-to-image translation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1921-1930 (2023) +38. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9446-9454 (2018) +39. Vincent, P.: A connection between score matching and denoising autoencoders. Neural computation 23(7), 1661-1674 (2011) +40. Wang, Y., Yu, J., Zhang, J.: Zero-shot image restoration using denoising diffusion null-space model. The Eleventh International Conference on Learning Representations (2023) +41. Yaman, B., Hosseini, S.A.H., Moeller, S., Ellermann, J., Ugurbil, K., Akçakaya, M.: Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 84(6), 3172-3191 (Dec 2020) +42. Yaman, B., Hosseini, S.A.H., Akçakaya, M.: Zero-shot self-supervised learning for MRI reconstruction. Proc ICLR (2021) +43. 
Yang, L., Ding, S., Cai, Y., Yu, J., Wang, J., Shi, Y.: Guidance with spherical gaussian constraint for conditional diffusion. In: International Conference on Machine Learning (2024) +44. Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., Gool, L.V.: Denoising diffusion models for plug-and-play image restoration. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE) (2023) \ No newline at end of file diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/images.zip b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..952e4e36115a023c0b57766f4cd875a33a082356 --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96d46a848ed10f6eed925ad8c0e53e1139b6c7769e11c1b1d6dd3688457f50d5 +size 704670 diff --git a/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/layout.json b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2e3f2c5784a47a51998a5ede96521993a46f8b0d --- /dev/null +++ b/2024/Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems/layout.json @@ -0,0 +1,9490 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 133, + 111, + 481, + 147 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 111, + 481, + 147 + ], + "spans": [ + { + "bbox": [ + 133, + 111, + 481, + 147 + ], + "type": "text", + "content": "Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 205, + 168, + 408, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 168, + 408, + 180 + ], + "spans": [ + { + "bbox": [ + 205, + 168, + 408, + 180 + ], + "type": "text", + "content": "Yasar Utku Alçalar and Mehmet Akçakaya" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 230, + 190, + 383, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 190, + 383, + 212 + ], + "spans": [ + { + "bbox": [ + 230, + 190, + 383, + 212 + ], + "type": "text", + "content": "University of Minnesota, Minneapolis {alcal029, akcakaya}@umn.edu" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 160, + 241, + 452, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 241, + 452, + 515 + ], + "spans": [ + { + "bbox": [ + 160, + 241, + 452, + 515 + ], + "type": "text", + "content": "Abstract. Diffusion models have emerged as powerful generative techniques for solving inverse problems. Despite their success in a variety of inverse problems in imaging, these models require many steps to converge, leading to slow inference time. Recently, there has been a trend in diffusion models for employing sophisticated noise schedules that involve more frequent iterations of timesteps at lower noise levels, thereby improving image generation and convergence speed. 
However, application of these ideas for solving inverse problems with diffusion models remain challenging, as these noise schedules do not perform well when using empirical tuning for the forward model log-likelihood term weights. To tackle these challenges, we propose zero-shot approximate posterior sampling (ZAPS) that leverages connections to zero-shot physics-driven deep learning. ZAPS fixes the number of sampling steps, and uses zero-shot training with a physics-guided loss function to learn log-likelihood weights at each irregular timestep. We apply ZAPS to the recently proposed diffusion posterior sampling method as baseline, though ZAPS can also be used with other posterior sampling diffusion models. We further approximate the Hessian of the logarithm of the prior using a diagonalization approach with learnable diagonal entries for computational efficiency. These parameters are optimized over a fixed number of epochs with a given computational budget. Our results for various noisy inverse problems, including Gaussian and motion deblurring, inpainting, and super-resolution show that ZAPS reduces inference time, provides robustness to irregular noise schedules and improves reconstruction quality. Code is available at https://github.com/ualcalar17/ZAPS." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "spans": [ + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": "Keywords: Diffusion Models " + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": " Zero-Shot Learning " + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": " Inverse Problems " + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": " Plug-and-Play (PnP) Methods " + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": " Unrolled Networks " + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 525, + 452, + 548 + ], + "type": "text", + "content": " Bayesian Methods" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 568, + 229, + 581 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 568, + 229, + 581 + ], + "spans": [ + { + "bbox": [ + 133, + 568, + 229, + 581 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 593, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 593, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 593, + 481, + 665 + ], + "type": "text", + "content": "The forefront of deep generative models is now dominated by diffusion models [16, 28, 30, 32, 34] in the intricate task of image generation [11]. Their capabilities extend across various domains, including computer vision [2], natural language processing [17] and temporal data modeling [1]. 
Recently, diffusion models also showed great success in solving noiseless [5, 7, 33, 34] and noisy inverse problems [6, 21, 29, 31], owing to their capability to model complicated" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 114, + 302, + 239 + ], + "blocks": [ + { + "bbox": [ + 134, + 114, + 302, + 239 + ], + "lines": [ + { + "bbox": [ + 134, + 114, + 302, + 239 + ], + "spans": [ + { + "bbox": [ + 134, + 114, + 302, + 239 + ], + "type": "image", + "image_path": "191e42ff0a9223d261c4890a46d71f7545d81a39d49b0f60235d0989fde8cef7.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 310, + 114, + 480, + 239 + ], + "blocks": [ + { + "bbox": [ + 310, + 114, + 480, + 239 + ], + "lines": [ + { + "bbox": [ + 310, + 114, + 480, + 239 + ], + "spans": [ + { + "bbox": [ + 310, + 114, + 480, + 239 + ], + "type": "image", + "image_path": "8c57d1fb38e0e1e293dca4c86c4217f235f55b044f8ab5ea97bb898d85e1410f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 133, + 240, + 302, + 358 + ], + "blocks": [ + { + "bbox": [ + 133, + 240, + 302, + 358 + ], + "lines": [ + { + "bbox": [ + 133, + 240, + 302, + 358 + ], + "spans": [ + { + "bbox": [ + 133, + 240, + 302, + 358 + ], + "type": "image", + "image_path": "62b755b46a51233b3243b509c6b05f96d0a66578505214f2a96b90101bfefcdb.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 373, + 482, + 396 + ], + "lines": [ + { + "bbox": [ + 132, + 373, + 482, + 396 + ], + "spans": [ + { + "bbox": [ + 132, + 373, + 482, + 396 + ], + "type": "text", + "content": "Fig. 1: Representative results of our algorithm for four distinct noisy inverse problems " + }, + { + "bbox": [ + 132, + 373, + 482, + 396 + ], + "type": "inline_equation", + "content": "(\\sigma = 0.05)" + }, + { + "bbox": [ + 132, + 373, + 482, + 396 + ], + "type": "text", + "content": ", showing the ground truth (GT), measurement and reconstruction." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 310, + 240, + 479, + 358 + ], + "blocks": [ + { + "bbox": [ + 310, + 240, + 479, + 358 + ], + "lines": [ + { + "bbox": [ + 310, + 240, + 479, + 358 + ], + "spans": [ + { + "bbox": [ + 310, + 240, + 479, + 358 + ], + "type": "image", + "image_path": "1056d6052d01fdcd1d6b2babeeeb7b74701abf7c789a256e5a1ea8ab184b3cdc.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 419, + 480, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 419, + 480, + 443 + ], + "spans": [ + { + "bbox": [ + 130, + 419, + 480, + 443 + ], + "type": "text", + "content": "high-dimensional distributions. 
Linear inverse problems utilize a known forward model given by" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 274, + 445, + 338, + 456 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 445, + 338, + 456 + ], + "spans": [ + { + "bbox": [ + 274, + 445, + 338, + 456 + ], + "type": "interline_equation", + "content": "\\mathbf {y} = \\mathbf {A} \\mathbf {x} _ {0} + \\mathbf {n},", + "image_path": "9b94131573324ea922dcc610744a7d38ac804a1ee5a31f618a97abbd2118c42c.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "spans": [ + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": "and aim to deduce the underlying signal/image " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_0\\in \\mathbb{R}^n" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": " from measurements " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "\\mathbf{y}\\in \\mathbb{R}^{m}" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "\\mathbf{n}\\in \\mathbb{R}^m" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": " is measurement noise. In practical situations, the forward operator " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "\\mathbf{A}:\\mathbb{R}^n\\to \\mathbb{R}^m" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": " is either incomplete or ill-conditioned, necessitating the use of prior information about the signal. Posterior sampling approaches use diffusion models as generative priors and incorporates information from both the data distribution and the forward physics model, allowing for sampling from the posterior distribution " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x}|\\mathbf{y})" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": " using the given measurement " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "\\mathbf{y}" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": " [21]. 
In this context, using Bayes' rule, " + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x}|\\mathbf{y}) = \\frac{p(\\mathbf{x})p(\\mathbf{y}|\\mathbf{x})}{p(\\mathbf{y})}" + }, + { + "bbox": [ + 130, + 462, + 482, + 563 + ], + "type": "text", + "content": ", the problem-specific score is" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 203, + 571, + 481, + 585 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 571, + 481, + 585 + ], + "spans": [ + { + "bbox": [ + 203, + 571, + 481, + 585 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}), \\tag {1}", + "image_path": "7c4a40f00b24b3affd4294218e185624ace9e4e78861ffd225b5111c74508c0d.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "inline_equation", + "content": "\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{x})" + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "content": " is approximated via the learned score model " + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "inline_equation", + "content": "s_\\theta (\\mathbf{x}_t,t)" + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "content": ". Many of these strategies utilize a plug-and-play (PnP) approach, using a pre-trained unconditional diffusion model as a prior [4, 9, 13, 18, 24, 37], and integrate the forward model during inference to address various inverse problem tasks." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 642, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 642, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 642, + 481, + 665 + ], + "type": "text", + "content": "The complexity for these approaches arises in obtaining the latter forward model log-likelihood term in Eq. (1), which guides the diffusion to a target" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": "class [11, 28]. While exact calculation is intractable, several approaches have been proposed to approximate this term. 
Among these, RED-diff [25] employs a variational sampler that uses a combination of measurement consistency loss and score matching regularization. Another technique, DSG [43], uses a spherical Gaussian constraint for denoising steps, allowing for larger step sizes. A class of methods utilize projections onto the convex measurement subspace after the unconditional update through score model [5, 8, 34]. Although these projections improve consistency between measurements and the sample, they are noted to lead to artifacts, such as boundary effects [7]. Thus, more recent approaches aimed to approximate the log-likelihood term in Eq. (1) different ways. Noting" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 230, + 243, + 481, + 268 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 243, + 481, + 268 + ], + "spans": [ + { + "bbox": [ + 230, + 243, + 481, + 268 + ], + "type": "interline_equation", + "content": "p _ {t} (\\mathbf {y} \\mid \\mathbf {x} _ {t}) = \\int_ {\\mathbf {x} _ {0}} p \\left(\\mathbf {x} _ {0} \\mid \\mathbf {x} _ {t}\\right) p \\left(\\mathbf {y} \\mid \\mathbf {x} _ {0}\\right) d \\mathbf {x} _ {0}, \\tag {2}", + "image_path": "b059c29fc7e7229bad3e65fe96c58eef62a419b29042080401491f0e859a764b.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "spans": [ + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "text", + "content": "DPS [6] uses the posterior mean " + }, + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{x}}_0 = \\hat{\\mathbf{x}}_0(\\mathbf{x}_t) \\triangleq \\mathbb{E}[\\mathbf{x}_0|\\mathbf{x}_t] = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[\\mathbf{x}_0]" + }, + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "text", + "content": ", to approximate " + }, + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "inline_equation", + "content": "p(\\mathbf{y}|\\mathbf{x}_t) = \\mathbb{E}_{\\mathbf{x}_0 \\sim p(\\mathbf{x}_0|\\mathbf{x}_t)}[p(\\mathbf{y}|\\mathbf{x}_0)]" + }, + { + "bbox": [ + 131, + 275, + 481, + 301 + ], + "type": "text", + "content": " as" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 162, + 308, + 453, + 328 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 308, + 453, + 328 + ], + "spans": [ + { + "bbox": [ + 162, + 308, + 453, + 328 + ], + "type": "interline_equation", + "content": "p (\\mathbf {y} | \\mathbf {x} _ {t}) = \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ p (\\mathbf {y} | \\mathbf {x} _ {0}) ] \\simeq p \\Big (\\mathbf {y} | \\mathbb {E} _ {\\mathbf {x} _ {0} \\sim p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t})} [ \\mathbf {x} _ {0} ] \\Big) = p (\\mathbf {y} | \\hat {\\mathbf {x}} _ {0}).", + "image_path": "e22d7ef7fdf098c15080b2417d59adbb1eed71ca861cdf2f6e63dae619b79e06.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 333, + 480, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 333, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 131, + 333, + 480, + 357 + ], + "type": "text", + "content": "Another technique, IIGDM [31] approximates Eq. 
(2) as a Gaussian centered around " + }, + { + "bbox": [ + 131, + 333, + 480, + 357 + ], + "type": "inline_equation", + "content": "\\mathbf{A}\\hat{\\mathbf{x}}_0" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 197, + 363, + 481, + 388 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 363, + 481, + 388 + ], + "spans": [ + { + "bbox": [ + 197, + 363, + 481, + 388 + ], + "type": "interline_equation", + "content": "\\int_ {\\mathbf {x} _ {0}} p (\\mathbf {x} _ {0} | \\mathbf {x} _ {t}) p (\\mathbf {y} | \\mathbf {x} _ {0}) \\mathbf {d} \\mathbf {x} _ {0} \\simeq \\mathcal {N} (\\mathbf {A} \\hat {\\mathbf {x}} _ {0}, r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}), \\tag {3}", + "image_path": "c3aab7efca1ed0554d73a0087ed24a214b3b6f28e858fb49f041e788f7129ae0.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 394, + 481, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 394, + 481, + 418 + ], + "spans": [ + { + "bbox": [ + 131, + 394, + 481, + 418 + ], + "type": "text", + "content": "and uses it for guidance. In these works, log-likelihood weights (or gradient step sizes), " + }, + { + "bbox": [ + 131, + 394, + 481, + 418 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 131, + 394, + 481, + 418 + ], + "type": "text", + "content": " are introduced to further control the reconstruction as" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 202, + 426, + 481, + 439 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 426, + 481, + 439 + ], + "spans": [ + { + "bbox": [ + 202, + 426, + 481, + 439 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x} | \\mathbf {y}) = \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x}). \\tag {4}", + "image_path": "480675726620c14d1480facbf3c7fc172e73ca769a4d0245773e2d56fc6ffdd7.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 445, + 482, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 445, + 482, + 624 + ], + "spans": [ + { + "bbox": [ + 130, + 445, + 482, + 624 + ], + "type": "text", + "content": "While DPS demonstrates high performance in various inverse problem tasks, it suffers from the drawback of requiring a large number of sampling steps, resulting in prolonged reconstruction time. IIGDM accelerates this process by adopting regular (linear) jumps approach across the schedule. However, utilizing more complicated schedules, where the jumps are irregular introduces a challenge, as it requires distinct log-likelihood weights, " + }, + { + "bbox": [ + 130, + 445, + 482, + 624 + ], + "type": "inline_equation", + "content": "\\zeta_t" + }, + { + "bbox": [ + 130, + 445, + 482, + 624 + ], + "type": "text", + "content": ", for each timestep. Heuristic adjustment of these weights is difficult and frequently leads to undesirable outcomes. In this work, by taking an inspiration from zero-shot/test-time self-supervised models [35,42] we propose to learn the log-likelihood weights for a fixed number of sampling steps and fine-tune them over a few epochs. It is crucial to note that fine-tuning DPS (or IIGDM) entails saving computational graphs for each unroll, leading to memory issues and slow backpropagation. 
Thus, we also propose to approximate the Hessian of the data probability using a wavelet-based diagonalization strategy [12], and learn these diagonal values for each timestep as well. Fig. 1 shows representative results for our method. Our key contributions include:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 629, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 629, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 138, + 629, + 482, + 666 + ], + "type": "text", + "content": "- We introduce zero-shot approximate posterior sampling (ZAPS), leveraging zero-shot learning for dynamic automated hyperparameter tuning in the inference phase to improve solution of noisy inverse problems via diffusion" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 147, + 116, + 482, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 116, + 482, + 187 + ], + "spans": [ + { + "bbox": [ + 147, + 116, + 482, + 187 + ], + "type": "text", + "content": "models. This method fortifies the robustness of the sampling process, attaining a state-of-the-art performance [6, 21, 31] in sampling outcomes. To the best of our knowledge, our method is the first attempt to learn the log-likelihood weights for solving inverse problems via diffusion models by using a measurement-consistent loss when the sampling noise schedule consists of irregular jumps across timesteps." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 188, + 480, + 294 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 138, + 188, + 479, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 188, + 479, + 222 + ], + "spans": [ + { + "bbox": [ + 138, + 188, + 479, + 222 + ], + "type": "text", + "content": "- We provide a well-designed approximation for the Hessian of the logarithm of the prior, enabling a computationally efficient and trainable posterior computation." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 139, + 224, + 480, + 294 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 224, + 480, + 294 + ], + "spans": [ + { + "bbox": [ + 139, + 224, + 480, + 294 + ], + "type": "text", + "content": "- We showcase the efficacy of incorporating a learnable log-likelihood weights for each diffusion step during the reverse diffusion process through both quantitative and qualitative assessments on FFHQ and ImageNet datasets. 
Our approach not only outperforms state-of-the-art, but it also substantially reduces the required number of sampling steps from 1000 to " + }, + { + "bbox": [ + 139, + 224, + 480, + 294 + ], + "type": "inline_equation", + "content": "\\sim 20" + }, + { + "bbox": [ + 139, + 224, + 480, + 294 + ], + "type": "text", + "content": "-to-30, facilitating convergence with fewer total neural function evaluations (NFEs)." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 132, + 312, + 242, + 325 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 312, + 242, + 325 + ], + "spans": [ + { + "bbox": [ + 132, + 312, + 242, + 325 + ], + "type": "text", + "content": "2 Related Works" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "spans": [ + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": "Diffusion Models. During training, diffusion models [16, 34] add Gaussian noise to an image with a fixed increasing variance schedule, e.g. linear or exponential, " + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "inline_equation", + "content": "\\beta_{1},\\beta_{2},\\dots,\\beta_{T}" + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": " until pure noise is obtained, and learns a reverse diffusion process, where a neural network is trained to gradually remove noise and reconstruct the original image. Let " + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_0\\sim p_{\\mathrm{data}}(x)" + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": " be samples from the data distribution, and " + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{\\{1:T\\}}\\in \\mathbb{R}^d" + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": " be noisy latent variables. By taking " + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "inline_equation", + "content": "\\alpha_{t} = 1 - \\beta_{t}" + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "inline_equation", + "content": "\\bar{\\alpha}_{t} = \\prod_{s = 1}^{t}\\alpha_{s}" + }, + { + "bbox": [ + 131, + 337, + 479, + 423 + ], + "type": "text", + "content": ", the Markovian forward process can be written as" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 230, + 432, + 480, + 444 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 432, + 480, + 444 + ], + "spans": [ + { + "bbox": [ + 230, + 432, + 480, + 444 + ], + "type": "interline_equation", + "content": "q \\left(\\mathbf {x} _ {t} \\mid \\mathbf {x} _ {0}\\right) = \\mathcal {N} \\left(\\mathbf {x} _ {t} \\mid \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0}, (1 - \\bar {\\alpha} _ {t}) \\mathbf {I}\\right). \\tag {5}", + "image_path": "3ec719861c87eaf8bc119dd178747b6ffb107b3e50a4582138645cedd4dadff5.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 453, + 451, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 453, + 451, + 464 + ], + "spans": [ + { + "bbox": [ + 132, + 453, + 451, + 464 + ], + "type": "text", + "content": "By using the reparameterization trick and Eq. 
(5), " + }, + { + "bbox": [ + 132, + 453, + 451, + 464 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_t" + }, + { + "bbox": [ + 132, + 453, + 451, + 464 + ], + "type": "text", + "content": " can be sampled as" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 187, + 473, + 480, + 486 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 473, + 480, + 486 + ], + "spans": [ + { + "bbox": [ + 187, + 473, + 480, + 486 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right) = \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0} + \\sqrt {1 - \\bar {\\alpha} _ {t}} \\epsilon \\quad \\text {w h e r e} \\quad \\epsilon \\sim \\mathcal {N} (\\epsilon ; 0, \\mathbf {I}). \\tag {6}", + "image_path": "f9f689303f025996966f9e57a3a7deaf666c1155448729a2f33f158f83fba60e.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 131, + 494, + 479, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 494, + 479, + 518 + ], + "spans": [ + { + "bbox": [ + 131, + 494, + 479, + 518 + ], + "type": "text", + "content": "Consequently, denoising diffusion probabilistic models (DDPMs) [16] learns the reverse process by minimizing a lower bound on the log prior via:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 225, + 526, + 480, + 540 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 526, + 480, + 540 + ], + "spans": [ + { + "bbox": [ + 225, + 526, + 480, + 540 + ], + "type": "interline_equation", + "content": "L _ {t} (\\theta) = \\mathbb {E} _ {t, \\mathbf {x} _ {0}, \\epsilon} \\| \\epsilon - \\epsilon_ {\\theta} \\left(\\mathbf {x} _ {t} \\left(\\mathbf {x} _ {0}, \\epsilon\\right), t\\right) \\| _ {2} ^ {2}. \\tag {7}", + "image_path": "894e57dcf4880c6201fd76f30a81fccbbf32bcd6cf91bfef9c26b6c716dd3722.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 131, + 548, + 479, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 548, + 479, + 572 + ], + "spans": [ + { + "bbox": [ + 131, + 548, + 479, + 572 + ], + "type": "text", + "content": "Furthermore, it can be shown that epsilon matching in Eq. (7) is analogous to the denoising score matching (DSM) [32,39] objective up to a constant:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 214, + 581, + 480, + 597 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 581, + 480, + 597 + ], + "spans": [ + { + "bbox": [ + 214, + 581, + 480, + 597 + ], + "type": "interline_equation", + "content": "\\min _ {\\theta} \\mathbb {E} _ {\\mathbf {x} _ {t}, \\mathbf {x} _ {0}, \\epsilon} \\| \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\nabla_ {\\mathbf {x} _ {t}} \\log q (\\mathbf {x} _ {t} | \\mathbf {x} _ {0}) \\| _ {2} ^ {2}, \\tag {8}", + "image_path": "b1e772dba9bda8f68613f892870dd271e90c75e7beff06338d31a081b5459da9.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "spans": [ + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "text", + "content": "in which " + }, + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t) = -\\frac{\\epsilon_{\\theta}(\\mathbf{x}_t,t)}{\\sqrt{1 - \\bar{\\alpha}_t}}" + }, + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "text", + "content": ". 
Using Tweedie's formula and Eq. (6), posterior mean for " + }, + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x}_0|\\mathbf{x}_t)" + }, + { + "bbox": [ + 131, + 608, + 479, + 635 + ], + "type": "text", + "content": " can be found as:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 231, + 644, + 479, + 667 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 231, + 644, + 479, + 667 + ], + "spans": [ + { + "bbox": [ + 231, + 644, + 479, + 667 + ], + "type": "interline_equation", + "content": "\\hat {\\mathbf {x}} _ {0} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {x} _ {t} + (1 - \\bar {\\alpha} _ {t}) \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right). \\tag {9}", + "image_path": "33fc9dfef6ca835838fc88c5a4c64853ce52bced502a9029eca65a004c893bda.jpg" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "spans": [ + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "text", + "content": "Sampling " + }, + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{t + 1}" + }, + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "text", + "content": " from " + }, + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x}_{t + 1}|\\mathbf{x}_t)" + }, + { + "bbox": [ + 131, + 116, + 479, + 140 + ], + "type": "text", + "content": " can be done using ancestral sampling by iteratively computing:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 205, + 148, + 481, + 173 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 148, + 481, + 173 + ], + "spans": [ + { + "bbox": [ + 205, + 148, + 481, + 173 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t - 1} = \\frac {1}{\\sqrt {\\alpha_ {t}}} \\left(\\mathbf {x} _ {t - 1} - \\frac {1 - \\alpha_ {t}}{\\sqrt {1 - \\bar {\\alpha} _ {t}}} \\boldsymbol {\\epsilon} _ {\\theta} (\\mathbf {x} _ {t}, t)\\right) + \\sigma_ {t} \\mathbf {z}, \\tag {10}", + "image_path": "13ed6d86795de75ef5675ff3cf1759fc1ec423a7b1fcd8904dbdeb4f1b8f2fb8.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "spans": [ + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "type": "inline_equation", + "content": "\\mathbf{z} \\sim \\mathcal{N}(0, \\mathbf{I})" + }, + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + 
"type": "inline_equation", + "content": "\\sigma_t^2 = \\tilde{\\beta}_t = \\frac{1 - \\bar{\\alpha}_{t-1}}{1 - \\bar{\\alpha}_t} \\beta_t" + }, + { + "bbox": [ + 131, + 180, + 479, + 218 + ], + "type": "text", + "content": ". It is also worth noting that the DDPM is equivalent to the variance preserving stochastic differential equations (VP-SDEs) [34]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 233, + 480, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 233, + 480, + 329 + ], + "spans": [ + { + "bbox": [ + 131, + 233, + 480, + 329 + ], + "type": "text", + "content": "Solving Inverse Problems via Diffusion Models. When solving inverse problems via diffusion models, the main challenge is to find an approximation to the log-likelihood term, " + }, + { + "bbox": [ + 131, + 233, + 480, + 329 + ], + "type": "inline_equation", + "content": "\\nabla_{\\mathbf{x}_t}\\log p(\\mathbf{y}|\\mathbf{x})" + }, + { + "bbox": [ + 131, + 233, + 480, + 329 + ], + "type": "text", + "content": ", as discussed earlier. One recent method, denoising diffusion restoration models (DDRM) [21], utilizes a spectral domain approach, allowing the incorporation of noise from the measurement domain into the spectral domain through singular value decomposition (SVD). However, the application of SVD is computationally expensive [6]. Manifold Constrained Gradient (MCG) [7] method applies projections after the MCG correction as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 158, + 335, + 480, + 351 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 158, + 335, + 480, + 351 + ], + "spans": [ + { + "bbox": [ + 158, + 335, + 480, + 351 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t - 1} ^ {\\prime} = f (\\mathbf {x} _ {t}, \\mathbf {s} _ {\\theta}) - \\zeta \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {K} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\| _ {2} ^ {2} + g (\\mathbf {x} _ {t}) \\mathbf {z}, \\quad \\mathbf {z} \\sim \\mathcal {N} (0, \\mathbf {I}), \\tag {11}", + "image_path": "c96efcf0bd946b5dc26920bad70d463ff616d01232250ea3b1218ae061a61750.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 157, + 353, + 480, + 365 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 157, + 353, + 480, + 365 + ], + "spans": [ + { + "bbox": [ + 157, + 353, + 480, + 365 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t - 1} = \\mathbf {H} \\mathbf {x} _ {t - 1} + \\mathbf {b}, \\tag {12}", + "image_path": "3714aab1039dc0fe2b3c4ed6ce091eb6ce04bb5e417b4d8ad476110d55133a95.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "spans": [ + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "inline_equation", + "content": "\\zeta" + }, + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "inline_equation", + "content": "\\mathbf{H}" + }, + { + "bbox": [ + 131, + 372, + 479, + 408 + ], + "type": "text", + "content": " are dependent on noise covariance. MCG update of Eq. (11) projects estimates onto the measurement subspace, thus they may fall off from the data manifold [6]. 
Hence, DPS proposes to update without projections as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 235, + 416, + 480, + 430 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 235, + 416, + 480, + 430 + ], + "spans": [ + { + "bbox": [ + 235, + 416, + 480, + 430 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} - \\zeta_ {t} \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2}, \\tag {13}", + "image_path": "5dcbce5104ee3e0be7751c35d2d48d648fa3b33ae7acc871856b5c6c963ceaf1.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 131, + 437, + 479, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 437, + 479, + 460 + ], + "spans": [ + { + "bbox": [ + 131, + 437, + 479, + 460 + ], + "type": "text", + "content": "Note Eq. (13) is equivalent to Eq. (11) when " + }, + { + "bbox": [ + 131, + 437, + 479, + 460 + ], + "type": "inline_equation", + "content": "\\mathbf{K} = \\mathbf{I}" + }, + { + "bbox": [ + 131, + 437, + 479, + 460 + ], + "type": "text", + "content": ", and it reduces to the following when the forward operator is linear:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 230, + 467, + 480, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 467, + 480, + 491 + ], + "spans": [ + { + "bbox": [ + 230, + 467, + 480, + 491 + ], + "type": "interline_equation", + "content": "\\mathbf {x} _ {t - 1} = \\mathbf {x} _ {t - 1} ^ {\\prime} + \\zeta_ {t} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {14}", + "image_path": "0a507c732c04e980319a5ef8e303d0fdf9a85a9615e0e82d2d699f94159d1bd5.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 131, + 497, + 479, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 497, + 479, + 521 + ], + "spans": [ + { + "bbox": [ + 131, + 497, + 479, + 521 + ], + "type": "text", + "content": "IIGDM [31], on the other hand, utilizes a Gaussian centered around " + }, + { + "bbox": [ + 131, + 497, + 479, + 521 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{x}}_0" + }, + { + "bbox": [ + 131, + 497, + 479, + 521 + ], + "type": "text", + "content": " that is defined in Eq. (9) to obtain the following score approximation:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 186, + 529, + 480, + 552 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 186, + 529, + 480, + 552 + ], + "spans": [ + { + "bbox": [ + 186, + 529, + 480, + 552 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} \\left(r _ {t} ^ {2} \\mathbf {A} \\mathbf {A} ^ {\\top} + \\sigma_ {y} ^ {2} \\mathbf {I}\\right) ^ {- 1} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). 
\\tag {15}", + "image_path": "43f19b61ffc11e3195bbdcf366c3ae48e0b5232ba802dc5890b87b57dbe03ea4.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 131, + 559, + 472, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 559, + 472, + 572 + ], + "spans": [ + { + "bbox": [ + 131, + 559, + 472, + 572 + ], + "type": "text", + "content": "In cases where there is no measurement noise " + }, + { + "bbox": [ + 131, + 559, + 472, + 572 + ], + "type": "inline_equation", + "content": "(\\sigma_y = 0)" + }, + { + "bbox": [ + 131, + 559, + 472, + 572 + ], + "type": "text", + "content": ", Eq. (15) simplifies to:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 220, + 578, + 480, + 602 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 578, + 480, + 602 + ], + "spans": [ + { + "bbox": [ + 220, + 578, + 480, + 602 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq r _ {t} ^ {- 2} \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\dagger} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}) \\tag {16}", + "image_path": "61cf01ce47ff9659062bbc6d38e9e56ddbea3548c46b0151ac0941f1ac87ec07.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "spans": [ + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^{\\dagger}" + }, + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "text", + "content": " denotes the Moore-Penrose pseudoinverse of " + }, + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "inline_equation", + "content": "\\mathbf{A}" + }, + { + "bbox": [ + 131, + 609, + 479, + 634 + ], + "type": "text", + "content": ". We note that using Woodbury matrix identity (derived in SuppMat), one can simplify Eq. (15) to:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 146, + 641, + 479, + 667 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 641, + 479, + 667 + ], + "spans": [ + { + "bbox": [ + 146, + 641, + 479, + 667 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p _ {t} (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\left(\\mathbf {A} ^ {\\top} \\mathbf {A} + \\eta \\mathbf {I}\\right) ^ {- 1} \\mathbf {A} ^ {\\top} \\left(\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}\\right), \\quad \\text {w h e r e} \\eta = \\frac {\\sigma_ {y} ^ {2}}{r _ {t} ^ {2}}. 
\\tag {17}", + "image_path": "5a7b34366d82ec54ccd16d5ad0c1aa82bb85ec565b8673c715bf1ea161d4af63.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "text", + "content": "From Eq. (17), the similarity between DPS and IIGDM updates can be seen, with " + }, + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "inline_equation", + "content": "(\\mathbf{A}^{\\top}\\mathbf{A} + \\eta \\mathbf{I})^{-1}" + }, + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "text", + "content": " term being the difference. Note the DPS update in Eq. (13) works with non-linear operators, while IIGDM's update does not rely on the differentiability of the forward operator, as long as a pseudo-inverse-like operation can be derived." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 194, + 479, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 194, + 479, + 277 + ], + "spans": [ + { + "bbox": [ + 130, + 194, + 479, + 277 + ], + "type": "text", + "content": "Improved Irregular Noise Schedules for Image Generation. Diffusion models typically utilize well-defined fixed noise schedules, with examples including linear or exponential ones. Lately, more sophisticated methods have been developed that sweep across these schedules and take samples in irregular timesteps [11,19] for unconditional image generation. The idea behind this strategy hinges on more frequent sampling for lower noise levels, making it possible to use considerably less number of sampling steps." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 278, + 482, + 482 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 278, + 482, + 482 + ], + "spans": [ + { + "bbox": [ + 130, + 278, + 482, + 482 + ], + "type": "text", + "content": "Most of the aforementioned studies that solve inverse problems via diffusion models used the same number of steps that the unconditional diffusion model was trained for [6,7,34]. Nonetheless, there has been a notable trend favoring shorter schedules characterized by linear jumps for inverse problems, where the log-likelihood weights were hand-tuned by trial-and-error [25,31] when using reduced number of steps. While these approaches have proven effective, they still require a large number of sampling steps or heuristic tuning of the log-likelihood weights, " + }, + { + "bbox": [ + 130, + 278, + 482, + 482 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 130, + 278, + 482, + 482 + ], + "type": "text", + "content": " in Eq. (4) to achieve good performance. 
The former issue leads to lengthy and potentially impractical computational times, while the latter issue results in generalizability difficulties for adoption at different measurement noise levels and variations in the measurement operators. Furthermore, the irregular jump strategy that has been powerful for image generation has not garnered significant attention for inverse problems, mainly due to the impracticality of empirically tuning the log-likelihood weights. Thus, a method that automatically selects and adjusts log-likelihood weights based on the provided measurements for arbitrary noise schedules, instead of requiring manual tuning, holds significant potential for improving robustness and image quality." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 500, + 233, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 500, + 233, + 514 + ], + "spans": [ + { + "bbox": [ + 132, + 500, + 233, + 514 + ], + "type": "text", + "content": "3 Methodology" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 525, + 409, + 538 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 525, + 409, + 538 + ], + "spans": [ + { + "bbox": [ + 131, + 525, + 409, + 538 + ], + "type": "text", + "content": "3.1 Zero-shot Fine Tuning of Log-Likelihood Weights" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 546, + 482, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 546, + 482, + 631 + ], + "spans": [ + { + "bbox": [ + 130, + 546, + 482, + 631 + ], + "type": "text", + "content": "In this work, we propose a robust automated approach for setting the log-likelihood weights at each timestep for arbitrary noise sampling schedules to improve posterior sampling with the given measurements during inference. This allows for a stable reconstruction for different sweeps across noise schedules. Furthermore, the weights themselves are image-specific, which improves the performance compared to the former approaches. For estimating the likelihood in Eq. (1), we use the update in DPS [6]:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 181, + 639, + 481, + 663 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 639, + 481, + 663 + ], + "spans": [ + { + "bbox": [ + 181, + 639, + 481, + 663 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {y} | \\mathbf {x} _ {t}) \\simeq \\nabla_ {\\mathbf {x} _ {t}} \\| \\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0} \\| _ {2} ^ {2} = - \\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}), \\tag {18}", + "image_path": "dcba5a93e2570141c827cc99accaed2dba5a51e4fef6bec2ec777f17165240f3.jpg" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. 
Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 143, + 113, + 473, + 277 + ], + "blocks": [ + { + "bbox": [ + 143, + 113, + 473, + 277 + ], + "lines": [ + { + "bbox": [ + 143, + 113, + 473, + 277 + ], + "spans": [ + { + "bbox": [ + 143, + 113, + 473, + 277 + ], + "type": "image", + "image_path": "242ba34d8be7399d5f13e12aca23330871721cfa86e1c2fb615b139b45b810be.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "lines": [ + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "spans": [ + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "text", + "content": "Fig. 2: Our zero-shot approximate posterior sampling (ZAPS) approach unrolls the sampling process for a fixed number of " + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "text", + "content": " steps for arbitrary/irregular noise schedules, alternating between score model sampling (SMS) and likelihood guidance (LG). Our zero-shot fine-tuning approach has two key components: 1) The Hessian of the log prior is approximated using a discrete wavelet transform diagonalization technique, 2) Both the diagonal matrices, " + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{D}_t\\}" + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "text", + "content": " and the log-likelihood weights, " + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 131, + 284, + 482, + 373 + ], + "type": "text", + "content": " are updated during fine-tuning. The fine-tuning is done for a fixed number of epochs with a given NFE budget, yielding a faster and more robust adaptive inverse problem solver." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 131, + 396, + 482, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 396, + 482, + 444 + ], + "spans": [ + { + "bbox": [ + 131, + 396, + 482, + 444 + ], + "type": "text", + "content": "although as noted before, the IIGDM [31] update in Eq. (17) is also similar. Thus we emphasize that while we chose DPS as baseline for its versatility in inverse problems, our ZAPS strategy is applicable to other diffusion models for inverse problems. Recalling the definition of " + }, + { + "bbox": [ + 131, + 396, + 482, + 444 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{x}}_0" + }, + { + "bbox": [ + 131, + 396, + 482, + 444 + ], + "type": "text", + "content": " in Eq. (9), we note" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 220, + 453, + 481, + 479 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 453, + 481, + 479 + ], + "spans": [ + { + "bbox": [ + 220, + 453, + 481, + 479 + ], + "type": "interline_equation", + "content": "\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right). 
\\tag {19}", + "image_path": "54e519636be206c199c2f81eb811f7494c024d510a7705a86757b1e1c9b86768.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "spans": [ + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "text", + "content": "Thus, ignoring the calculation and storage of the matrix " + }, + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "inline_equation", + "content": "\\frac{\\partial\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)}{\\partial\\mathbf{x}_t}" + }, + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "text", + "content": " for now, one needs to fine tune the log-likelihood weights " + }, + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 131, + 488, + 482, + 515 + ], + "type": "text", + "content": " in" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 170, + 522, + 481, + 550 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 170, + 522, + 481, + 550 + ], + "spans": [ + { + "bbox": [ + 170, + 522, + 481, + 550 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\mathbf {x} _ {t}} \\log p (\\mathbf {x}) + \\zeta_ {t} \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + (1 - \\bar {\\alpha} _ {t}) \\frac {\\partial \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t} , t)}{\\partial \\mathbf {x} _ {t}}\\right) \\mathbf {A} ^ {\\top} (\\mathbf {y} - \\mathbf {A} \\hat {\\mathbf {x}} _ {0}). \\qquad (2 0)", + "image_path": "f5a781d0a5ddc51d978eea6791a9943268a75a1928c9e833943f86dafeb589a2.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "content": "This is done based on the concept of algorithm unrolling [14, 15, 22] in physics-driven deep learning by fixing the number of sampling steps " + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "content": ". Then the whole posterior sampling process is described as alternating between DDPM sampling using the pre-trained unconditional score model, followed by the log-likelihood term guidance in Eq. (20) for " + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "content": " steps. This \"unrolled\" network is fine-tuned end-to-end, where the only updates are made to " + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "content": " and no fine-tuning is performed on the unconditional score function, " + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)" + }, + { + "bbox": [ + 131, + 557, + 482, + 666 + ], + "type": "text", + "content": ". This also alleviates the need for backpropagation across the score function network, leading to further savings in computational time. 
The fine-tuning is performed using a physics-inspired loss" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 132, + 130, + 480, + 327 + ], + "blocks": [ + { + "bbox": [ + 133, + 115, + 422, + 129 + ], + "lines": [ + { + "bbox": [ + 133, + 115, + 422, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 115, + 422, + 129 + ], + "type": "text", + "content": "Algorithm 1 ZAPS: Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "lines": [ + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "spans": [ + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": "Require: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "T,\\mathbf{y},\\{\\tilde{\\sigma}_i\\}_{i = 1}^T" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " orthogonal DWT (W) \n1: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_T\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n2: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\tau \\subset [1,\\dots,T]" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " extending over a length of " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "S < T" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n3: for epoch in range(epochs) do \n4: for " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "i = S,\\ldots ,1" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " do \n5: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{s}}\\gets \\mathbf{s}_{\\theta}(\\mathbf{x}_{\\tau_i},\\tau_i)" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " ▷ Score computation \n6: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\hat{\\mathbf{x}}_0\\leftarrow \\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}} (\\mathbf{x}_{\\tau_i} + (1 - \\bar{\\alpha}_{\\tau_i})\\hat{\\mathbf{s}})" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " Tweedie denoising \n7: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{z}\\sim \\mathcal{N}(\\mathbf{0},\\mathbf{I})" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], 
+ "type": "inline_equation", + "content": "\\tau_{i} > 1" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " , else " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{z} = \\mathbf{0}" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n8: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{\\tau_i - 1}'\\gets \\frac{\\sqrt{\\alpha_{\\tau_i}}(1 - \\bar{\\alpha}_{\\tau_i - 1})}{1 - \\bar{\\alpha}_{\\tau_i}}\\mathbf{x}_{\\tau_i} + \\frac{\\sqrt{\\bar{\\alpha}_{\\tau_i - 1}}\\beta_{\\tau_i}}{1 - \\bar{\\alpha}_{\\tau_i}}\\hat{\\mathbf{x}}_0 + \\tilde{\\sigma}_{\\tau_i}\\mathbf{z}" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n9: " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{\\tau_{i - 1}}\\gets \\mathbf{x}_{\\tau_{i - 1}}' + \\zeta_{\\tau_i}\\left(\\left(\\frac{1}{\\sqrt{\\bar{\\alpha}_{\\tau_i}}}\\Bigl {(}\\mathbf{I} + (1 - \\bar{\\alpha}_{\\tau_i})\\mathbf{WD}_{\\tau_i}\\mathbf{W}^\\top \\Bigr)\\right)\\cdot \\mathbf{A}^\\top (\\mathbf{y} - \\mathbf{A}\\hat{\\mathbf{x}}_0)\\right)" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n10: end for \n11: Update network parameters " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{D}_t\\}" + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "text", + "content": " \n12: end for \n13: return " + }, + { + "bbox": [ + 132, + 130, + 480, + 327 + ], + "type": "inline_equation", + "content": "{\\bf x}_0" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "code_body" + } + ], + "index": 3, + "sub_type": "algorithm" + }, + { + "bbox": [ + 130, + 350, + 482, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 350, + 482, + 387 + ], + "spans": [ + { + "bbox": [ + 130, + 350, + 482, + 387 + ], + "type": "text", + "content": "function that evaluates the consistency of the final estimate and the measurements: " + }, + { + "bbox": [ + 130, + 350, + 482, + 387 + ], + "type": "inline_equation", + "content": "\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2" + }, + { + "bbox": [ + 130, + 350, + 482, + 387 + ], + "type": "text", + "content": ". High-level explanation for our algorithm is given in Fig. 2." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 405, + 402, + 418 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 405, + 402, + 418 + ], + "spans": [ + { + "bbox": [ + 131, + 405, + 402, + 418 + ], + "type": "text", + "content": "3.2 Approximation for the Hessian of the Log Prior" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "spans": [ + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "text", + "content": "Implementing the zero-shot update for Eq. 
(20) poses various challenges, since backpropagation through the unrolled network to update all " + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "inline_equation", + "content": "\\{\\zeta_t\\}" + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "text", + "content": " requires another backpropagation through the Jacobian of the score function at each time step. This can only be done by retaining the computational graphs that are created when calculating the Jacobian term in Eq. (20), which quickly explodes memory requirements, especially when the number of sampling steps increases. Also, backpropagating through multiple graphs at the end to only update the log-likelihood weights is time-inefficient and causes prolonged sampling times. Hence, we propose to approximate the Jacobian using inspirations from wavelet-based signal processing techniques and propose to learn this approximation to improve the overall outcome. Noting that " + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_{\\theta}(\\mathbf{x}_t,t)" + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "text", + "content": " in Eq. (19) is an approximation of the log-gradient of the true prior " + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "inline_equation", + "content": "p(\\mathbf{x})" + }, + { + "bbox": [ + 130, + 426, + 482, + 571 + ], + "type": "text", + "content": ", we have" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 214, + 578, + 481, + 607 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 578, + 481, + 607 + ], + "spans": [ + { + "bbox": [ + 214, + 578, + 481, + 607 + ], + "type": "interline_equation", + "content": "\\frac {\\partial \\hat {\\mathbf {x}} _ {0}}{\\partial \\mathbf {x} _ {t}} = \\frac {1}{\\sqrt {\\bar {\\alpha} _ {t}}} \\left(\\mathbf {I} + \\left(1 - \\bar {\\alpha} _ {t}\\right) \\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}}\\right). \\tag {21}", + "image_path": "9282ffdb2fa877eda35f801ba981d978ae29ea4ba861962dde6d298f988c09c6.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 613, + 482, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 613, + 482, + 667 + ], + "spans": [ + { + "bbox": [ + 130, + 613, + 482, + 667 + ], + "type": "text", + "content": "In order to make a backpropagation to update these weights, one needs to calculate the Hessian matrix, " + }, + { + "bbox": [ + 130, + 613, + 482, + 667 + ], + "type": "inline_equation", + "content": "\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}" + }, + { + "bbox": [ + 130, + 613, + 482, + 667 + ], + "type": "text", + "content": " given in Eq. (21). This matrix is the negative of the observed Fisher information matrix, whose expected value is the Fisher information matrix. 
It is also known that in the limit, it approximates" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 296, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 265 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 265 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 265 + ], + "type": "text", + "content": "the inverse covariance matrix of the maximum likelihood estimator. Furthermore, under mild assumptions about continuity of the prior, the observed Fisher information matrix is symmetric. Thus, an appropriate decorrelating unitary matrix can be used to diagonalize it. While finding the desired unitary matrix is equally time-consuming as calculating this Hessian, several pre-determined unitary transforms have been proposed for decorrelation in the signal processing community for different applications [12, 27, 36]. Of particular note is the use of unitary wavelet transforms for Wiener filtering [12], where these transforms were utilized for their tendency to decorrelate data, i.e. approximate the Karhunen-Loeve transform [27]. In this work, we also use these decorrelating properties to approximately diagonalize the Hessian of the log prior, " + }, + { + "bbox": [ + 130, + 116, + 482, + 265 + ], + "type": "inline_equation", + "content": "\\frac{\\partial^2\\log p_t(\\mathbf{x}_t)}{\\partial\\mathbf{x}_t^2}" + }, + { + "bbox": [ + 130, + 116, + 482, + 265 + ], + "type": "text", + "content": " using fixed orthogonal discrete wavelet transforms (DWT):" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 250, + 274, + 481, + 301 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 274, + 481, + 301 + ], + "spans": [ + { + "bbox": [ + 250, + 274, + 481, + 301 + ], + "type": "interline_equation", + "content": "\\frac {\\partial^ {2} \\log p _ {t} (\\mathbf {x} _ {t})}{\\partial \\mathbf {x} _ {t} ^ {2}} \\simeq \\mathbf {W D} _ {t} \\mathbf {W} ^ {\\top}, \\tag {22}", + "image_path": "321643e31f487d19839ad3b636f360760d4e040f4ac9d5f91183b8324cefa337.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "spans": [ + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "inline_equation", + "content": "\\mathbf{W}" + }, + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "text", + "content": " is an orthogonal DWT. 
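Putting Eqs. (20)-(22) and Algorithm 1 together, the sketch below is a small, hedged illustration of the ZAPS unrolled loop: a fixed irregular schedule, Tweedie denoising, likelihood guidance with the learned wavelet-diagonal surrogate W D_t W^T, and zero-shot fine-tuning of the weights {zeta_t} and diagonals {D_t} against the data-consistency loss. The toy problem sizes, the single-level Haar transform (standing in for the Daubechies-4 DWT), the closed-form stand-in score, the particular schedule, and the generalized-jump posterior coefficients are all assumptions chosen to keep the example short; it is not the authors' implementation.

import torch

torch.manual_seed(0)
d, m = 8, 4                                    # toy signal / measurement sizes (assumptions)
A = torch.randn(m, d)
x_true = torch.randn(d)
y = A @ x_true + 0.05 * torch.randn(m)         # sigma = 0.05, as in the experiments

# Single-level orthogonal Haar DWT matrix W (stand-in for the Daubechies-4 DWT used in the paper)
h = 2.0 ** -0.5
W = torch.zeros(d, d)
for k in range(d // 2):
    W[k, 2 * k], W[k, 2 * k + 1] = h, h
    W[d // 2 + k, 2 * k], W[d // 2 + k, 2 * k + 1] = h, -h

def score(x, t):
    # Stand-in for the frozen pre-trained score model; exact for a standard-Gaussian prior
    return -x

betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bars = torch.cumprod(1 - betas, dim=0)
tau = [999, 700, 400, 150, 20]                 # an irregular schedule (assumed), S = 5 steps
S = len(tau)

# Zero-shot parameters: per-step log-likelihood weights {zeta_t} and wavelet diagonals {D_t}
zeta = torch.nn.Parameter(0.05 * torch.ones(S))
D = torch.nn.Parameter(torch.zeros(S, d))
opt = torch.optim.Adam([zeta, D], lr=1e-2)

def unrolled_sample():
    x = torch.randn(d)                                            # start from pure noise
    for i, t in enumerate(tau):
        s = tau[i + 1] if i + 1 < S else 0                        # next lower-noise point
        ab_t, ab_s = alpha_bars[t], alpha_bars[s]
        x0 = (x + (1 - ab_t) * score(x, t)) / ab_t.sqrt()         # Tweedie denoising, Eq. (9)
        # Posterior q(x_s | x_t, x0) generalized to the irregular jump t -> s (a simplification
        # of the adjacent-step DDPM coefficients written in Alg. 1, line 8)
        a_ts = ab_t / ab_s
        mean = (ab_s.sqrt() * (1 - a_ts) * x0 + a_ts.sqrt() * (1 - ab_s) * x) / (1 - ab_t)
        std = ((1 - ab_s) * (1 - a_ts) / (1 - ab_t)).sqrt()
        z = torch.randn(d) if i + 1 < S else torch.zeros(d)
        x_prime = mean + std * z
        # Likelihood guidance with the learned wavelet-diagonal Jacobian surrogate (Alg. 1, line 9)
        jac = (torch.eye(d) + (1 - ab_t) * (W @ torch.diag(D[i]) @ W.T)) / ab_t.sqrt()
        x = x_prime + zeta[i] * (jac @ (A.T @ (y - A @ x0)))
    return x

for epoch in range(10):                        # zero-shot fine-tuning for a fixed epoch budget
    opt.zero_grad()
    x_final = unrolled_sample()
    loss = torch.sum((y - A @ x_final) ** 2)   # physics-inspired data-consistency loss
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")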
By making this approximation, backpropagation through the score model can also be avoided, and only the diagonal values in distinct " + }, + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{D}_t\\}" + }, + { + "bbox": [ + 130, + 308, + 482, + 357 + ], + "type": "text", + "content": " matrices needs to be learned. Our final algorithm to sample from pure noise with fine-tuning is given in Algorithm 1." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 376, + 218, + 388 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 376, + 218, + 388 + ], + "spans": [ + { + "bbox": [ + 131, + 376, + 218, + 388 + ], + "type": "text", + "content": "4 Evaluation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 402, + 406, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 402, + 406, + 415 + ], + "spans": [ + { + "bbox": [ + 131, + 402, + 406, + 415 + ], + "type": "text", + "content": "4.1 Experimental Setup and Implementation Details" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "spans": [ + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "text", + "content": "We comprehensively evaluated our method, examining its performance through both qualitative and quantitative analyses using FFHQ [20] and ImageNet [10] datasets with size " + }, + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "inline_equation", + "content": "256 \\times 256 \\times 3" + }, + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "text", + "content": ". Pre-trained unconditional diffusion models trained on FFHQ and ImageNet were taken from [5] and [11] respectively, and used without retraining. For our experiments, we sampled 1000 images from FFHQ and ImageNet validation sets. All images underwent pre-processing to be normalized in the range [0, 1]. During all the evaluations, a Gaussian measurement noise with " + }, + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\sigma = 0.05" + }, + { + "bbox": [ + 130, + 422, + 482, + 602 + ], + "type": "text", + "content": " was used. For the orthogonal DWT, Daubechies 4 wavelet was utilized. For our quantitative evaluations, we employed 30 sampling steps with a schedule of \"15,10,5\", and 10 epochs for fine-tuning, resulting in a total of 300 NFEs. As noted in [11], superior schedules may exist but it requires substantial computational time to try out all possible schedules. Thus, we opted a schedule that is simple, and samples more frequently at the lower noise levels [11]. More details about the network architectures and hyperparameter choices are given in SuppMat." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 620, + 369, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 620, + 369, + 633 + ], + "spans": [ + { + "bbox": [ + 131, + 620, + 369, + 633 + ], + "type": "text", + "content": "4.2 Experiments on Linear Inverse Problems" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": "Problem Setup. 
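For concreteness, one plausible reading of the "15,10,5" schedule above is 15, 10 and 5 sampling points allocated to the low-, mid- and high-noise thirds of the trajectory, which matches the stated preference for denser sampling at lower noise levels; the band edges below are an assumption, while the NFE accounting (sampling steps times fine-tuning epochs) follows the paper.

import numpy as np

T = 1000
edges = [0, T // 3, 2 * T // 3, T - 1]         # assumed band edges over the diffusion trajectory
steps_per_band = [15, 10, 5]                   # denser sampling at lower noise levels
timesteps = []
for lo, hi, n in zip(edges[:-1], edges[1:], steps_per_band):
    timesteps.extend(np.linspace(lo, hi, n, endpoint=False, dtype=int))
timesteps = sorted(set(timesteps), reverse=True)
print(len(timesteps), "sampling steps:", timesteps)

epochs = 10
print("total NFEs:", len(timesteps) * epochs)  # 30 steps x 10 epochs = 300 NFEs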
We focused on the following linear inverse problems: (1) Gaussian deblurring, (2) inpainting, (3) motion deblurring, (4) super-resolution. For" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 114, + 323, + 273 + ], + "blocks": [ + { + "bbox": [ + 133, + 114, + 323, + 273 + ], + "lines": [ + { + "bbox": [ + 133, + 114, + 323, + 273 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 323, + 273 + ], + "type": "image", + "image_path": "68ac876e9b20b87d5143cb34697d514dd24508f841e050a6921908eb902ce19e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 277, + 482, + 312 + ], + "lines": [ + { + "bbox": [ + 130, + 277, + 482, + 312 + ], + "spans": [ + { + "bbox": [ + 130, + 277, + 482, + 312 + ], + "type": "text", + "content": "Fig. 3: Representative images using various methods for solving Gaussian deblurring, motion deblurring and super-resolution " + }, + { + "bbox": [ + 130, + 277, + 482, + 312 + ], + "type": "inline_equation", + "content": "(\\times 4)" + }, + { + "bbox": [ + 130, + 277, + 482, + 312 + ], + "type": "text", + "content": " tasks. Proposed method qualitatively improves upon each method, including the baseline state-of-the-art DPS." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 325, + 114, + 480, + 274 + ], + "blocks": [ + { + "bbox": [ + 325, + 114, + 480, + 274 + ], + "lines": [ + { + "bbox": [ + 325, + 114, + 480, + 274 + ], + "spans": [ + { + "bbox": [ + 325, + 114, + 480, + 274 + ], + "type": "image", + "image_path": "b8884b8c36364b51787cff387317de22dfd9fb090561818691373d982391b917.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "spans": [ + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": "Gaussian deblurring, we considered a kernel of size " + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "inline_equation", + "content": "61 \\times 61" + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": " with a standard deviation " + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "inline_equation", + "content": "\\sigma = 3.0" + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": ". 
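A rough sketch of the Gaussian-deblurring measurement model described above: a 61 x 61 Gaussian kernel with sigma = 3.0 applied to a [0, 1]-normalized image, plus sigma = 0.05 Gaussian measurement noise. The circular (FFT) convolution and the random stand-in image are simplifying assumptions; the paper's exact boundary handling may differ.

import numpy as np

rng = np.random.default_rng(0)
img = rng.random((256, 256))                   # stand-in for one [0, 1]-normalized image channel

# 61 x 61 Gaussian kernel with sigma = 3.0, as in the Gaussian deblurring setup
ax = np.arange(-30, 31)
g1d = np.exp(-(ax**2) / (2 * 3.0**2))
kernel = np.outer(g1d, g1d)
kernel /= kernel.sum()

# Circular (FFT) convolution as a simple stand-in for the blur operator A
padded = np.zeros_like(img)
padded[:61, :61] = kernel
padded = np.roll(padded, (-30, -30), axis=(0, 1))      # place the kernel center at the origin
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

y = blurred + 0.05 * rng.standard_normal(img.shape)    # measurements with sigma = 0.05 noise
print(y.shape, round(float(y.min()), 3), round(float(y.max()), 3))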
For inpainting, we considered two different scenarios wherein we randomly masked out " + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": " and a " + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": " box region of the image, applied uniformly across all three channels. For motion blur, we generated the blur kernel via the code1, with " + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "inline_equation", + "content": "61 \\times 61" + }, + { + "bbox": [ + 130, + 335, + 482, + 420 + ], + "type": "text", + "content": " kernel size and 0.5 intensity, as in [6]. Finally, for super-resolution, we considered bicubic downsampling. All measurements are obtained through applying the forward model to the ground truth image." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 431, + 482, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 431, + 482, + 528 + ], + "spans": [ + { + "bbox": [ + 130, + 431, + 482, + 528 + ], + "type": "text", + "content": "Comparison Methods. We compared our method with score-SDE [5, 8, 34], manifold constrained gradients (MCG) [7], denoising diffusion restoration models (DDRM) [21], diffusion posterior sampling (DPS) [6] and pseudo-inverse guided diffusion models (IIGDM) [31]. We note that our implementation of score-SDE follows the same strategy as presented in [6]. We referred to the methods that iteratively applied projections onto convex sets (POCS) as score-SDE. Additional comparisons to DDNM [40] and DiffPIR [44] are also provided in SuppMat. All methods were implemented using their respective public repositories." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 539, + 482, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 539, + 482, + 622 + ], + "spans": [ + { + "bbox": [ + 130, + 539, + 482, + 622 + ], + "type": "text", + "content": "Quantitative and Qualitative Results. We evaluated our method quantitatively using learned perceptual image patch similarity (LPIPS) distance, structural similarity index (SSIM), and peak signal-to-noise-ratio (PSNR). Representative results in Fig. 3 show that DDRM yields blurry results in Gaussian deblurring task. DPS improves sharpness across these distinct inverse problem tasks, while ZAPS yields comparable sharpness while exhibiting a higher similarity to the ground truth, all within a third of the total NFEs." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 623, + 482, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 623, + 482, + 647 + ], + "spans": [ + { + "bbox": [ + 130, + 623, + 482, + 647 + ], + "type": "text", + "content": "Representative inpainting results in Fig. 
4 show that ZAPS substantially improves upon DDRM, a method that uses a slightly lower 20 timesteps, and" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 652, + 347, + 666 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 652, + 347, + 666 + ], + "spans": [ + { + "bbox": [ + 133, + 652, + 347, + 666 + ], + "type": "text", + "content": "1 https://github.com/LeviBorodenko/motionblur" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 114, + 479, + 265 + ], + "blocks": [ + { + "bbox": [ + 133, + 114, + 479, + 265 + ], + "lines": [ + { + "bbox": [ + 133, + 114, + 479, + 265 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 479, + 265 + ], + "type": "image", + "image_path": "2835cfcdf4396fed5657ea1133a42e1902697ccda5b423764b6d789429bc3f45.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 274, + 482, + 308 + ], + "lines": [ + { + "bbox": [ + 132, + 274, + 482, + 308 + ], + "spans": [ + { + "bbox": [ + 132, + 274, + 482, + 308 + ], + "type": "text", + "content": "Fig. 4: Illustrative images using state-of-the-art methods for random (70%) and box " + }, + { + "bbox": [ + 132, + 274, + 482, + 308 + ], + "type": "inline_equation", + "content": "(128 \\times 128)" + }, + { + "bbox": [ + 132, + 274, + 482, + 308 + ], + "type": "text", + "content": " inpainting. Proposed method improves upon DDRM, while achieving similar performance to IIGDM and DPS, with subtle improvements shown in zoomed insets." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "spans": [ + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "text", + "content": "achieves better similarity to the ground truth and sharpness compared to DPS, which uses almost " + }, + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "inline_equation", + "content": "33 \\times" + }, + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "text", + "content": " more steps. Similarly, when compared with IIIGDM, it is evident that our method gives comparable results even though " + }, + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "inline_equation", + "content": "3 - 4 \\times" + }, + { + "bbox": [ + 130, + 334, + 479, + 406 + ], + "type": "text", + "content": " fewer number of steps are used. The zoomed insets highlight subtle improvements afforded by our method compared to state-of-the-art DPS and IIIGDM, as seen around the eyes." 
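For reference when reading the tables that follow, PSNR on [0, 1]-normalized images can be computed as in the generic helper below (not the authors' evaluation code; LPIPS and SSIM require their dedicated packages and are omitted here).

import numpy as np

def psnr(x, ref, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images normalized to [0, 1]
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(ref, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
reference = rng.random((256, 256))
reconstruction = np.clip(reference + 0.02 * rng.standard_normal(reference.shape), 0.0, 1.0)
print(f"PSNR: {psnr(reconstruction, reference):.2f} dB")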
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 407, + 480, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 407, + 480, + 491 + ], + "spans": [ + { + "bbox": [ + 130, + 407, + 480, + 491 + ], + "type": "text", + "content": "Tab. 1 and Tab. 2 show the three quantitative metrics for all methods, while Tab. 3 illustrates their computational complexity. ZAPS outperforms Score-SDE, MCG, and our baseline state-of-the-art comparison, DPS, in computational complexity and quantitative performance, yielding faster and improved reconstructions. Although DDRM and IIGDM surpass ZAPS in terms of computational complexity, ZAPS outperforms both methods quantitatively in terms of all three metrics. Furthermore, IIGDM could not be implemented reliably for several lin" + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 132, + 552, + 481, + 664 + ], + "blocks": [ + { + "bbox": [ + 132, + 514, + 482, + 548 + ], + "lines": [ + { + "bbox": [ + 132, + 514, + 482, + 548 + ], + "spans": [ + { + "bbox": [ + 132, + 514, + 482, + 548 + ], + "type": "text", + "content": "Table 1: Quantitative results for Gaussian deblurring and random inpainting (70%) on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 132, + 552, + 481, + 664 + ], + "lines": [ + { + "bbox": [ + 132, + 552, + 481, + 664 + ], + "spans": [ + { + "bbox": [ + 132, + 552, + 481, + 664 + ], + "type": "table", + "html": "
MethodGaussian DeblurringRandom Inpainting
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.1280.71825.200.1040.81128.03
MCG [7]0.5580.50915.120.1450.75425.33
IIGDM [31]---0.0860.84226.62
DDRM [21]0.1830.70224.420.1980.74125.17
Score-SDE [5,8,34]0.5710.49615.170.2240.71824.44
ZAPS (Ours)0.1210.75726.060.0780.81327.79
", + "image_path": "f31af8d19dc5db6520edaa7f31d87a85029dc25cce7f9f76c8441e86dc8f9c33.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 152, + 481, + 264 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 481, + 148 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 481, + 148 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 481, + 148 + ], + "type": "text", + "content": "Table 2: Quantitative results for motion deblurring and super-resolution " + }, + { + "bbox": [ + 132, + 114, + 481, + 148 + ], + "type": "inline_equation", + "content": "(\\times 4)" + }, + { + "bbox": [ + 132, + 114, + 481, + 148 + ], + "type": "text", + "content": " on FFHQ dataset. Best: bold, second-best: underlined. Comparison methods are omitted if they could not be implemented reliably for the given inverse problem task." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 152, + 481, + 264 + ], + "lines": [ + { + "bbox": [ + 133, + 152, + 481, + 264 + ], + "spans": [ + { + "bbox": [ + 133, + 152, + 481, + 264 + ], + "type": "table", + "html": "
MethodMotion DeblurringSuper-Resolution (×4)
LPIPS↓SSIM↑PSNR↑LPIPS↓SSIM↑PSNR↑
DPS [6]0.1430.70424.030.1680.71923.86
MCG [7]0.5650.49715.100.2290.62320.74
IIGDM [31]---0.1310.76024.48
DDRM [21]---0.1750.71124.55
Score-SDE [5,8,34]0.5460.48815.020.2570.60919.13
ZAPS (Ours)0.1410.70924.160.1040.76826.63
", + "image_path": "aec9a81b039140e37ffe8bfdda7ee69b1a75768ae66e80ffdfb4404b823626ab.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 285, + 481, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 285, + 481, + 332 + ], + "spans": [ + { + "bbox": [ + 132, + 285, + 481, + 332 + ], + "type": "text", + "content": "ear inverse problems related to deblurring. We also note that the parameters in ZAPS are adaptive, meaning one can reach the same computational complexity by adjusting total epochs or steps, in trade-off for a slight decrease in performance, as studied in Sec. 4.3." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 349, + 245, + 360 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 349, + 245, + 360 + ], + "spans": [ + { + "bbox": [ + 133, + 349, + 245, + 360 + ], + "type": "text", + "content": "4.3 Ablation Studies" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "spans": [ + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": "We conducted three distinct ablation studies to investigate critical aspects of our algorithm's performance. The first ablation study compared combinations of different timesteps and epochs with a fixed NFE budget, providing a nuanced exploration into the influence of specific combinations on the model's behavior. Specifically, we explored the reconstruction capabilities of the model qualitatively and quantitatively by varying the length of model timesteps, " + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "inline_equation", + "content": "S \\in \\{20, 30, 60\\}" + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": ". For a fixed NFE budget of 300, these corresponded to 15, 10 and 5 epochs for zero-shot fine-tuning respectively. Fig. 5a shows the final estimates, while Fig. 5b and Fig. 5c depict the corresponding loss and PSNR curves for each combination (Further quantitative results are in SuppMat). Notably, all the estimates are similar, though sharpness improves slightly as " + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": " increases. However, the trade-off for choosing a high " + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": " is the low number of epochs. Especially for cases, where the measurement system or noise level changes, this makes fine-tuning susceptible to initialization of the hyperparameters as it is more difficult to converge to a good solution in " + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "inline_equation", + "content": "\\sim 5" + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": " epochs. Thus, for improved generalizability and robustness, we opted to use " + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "inline_equation", + "content": "S = 30" + }, + { + "bbox": [ + 132, + 368, + 481, + 559 + ], + "type": "text", + "content": " and 10 epochs for our database testing." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 559, + 481, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 559, + 481, + 582 + ], + "spans": [ + { + "bbox": [ + 132, + 559, + 481, + 582 + ], + "type": "text", + "content": "Our second ablation study analyzed the performance of ZAPS with respect to other state-of-the-art methods when all methods used the same NFE. We" + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 136, + 616, + 478, + 664 + ], + "blocks": [ + { + "bbox": [ + 133, + 601, + 480, + 613 + ], + "lines": [ + { + "bbox": [ + 133, + 601, + 480, + 613 + ], + "spans": [ + { + "bbox": [ + 133, + 601, + 480, + 613 + ], + "type": "text", + "content": "Table 3: Computational costs of methods in terms of NFEs and wall-clock time (WCT)" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 136, + 616, + 478, + 664 + ], + "lines": [ + { + "bbox": [ + 136, + 616, + 478, + 664 + ], + "spans": [ + { + "bbox": [ + 136, + 616, + 478, + 664 + ], + "type": "table", + "html": "
DPS [6]MCG [7]IIGDM [31]DDRM [21]Score-SDE [34]ZAPS
Total NFEs10001000100201000300
WCT (s)47.2548.834.532.1223.4714.71
", + "image_path": "68517ebd14164e117a69f3161cb830c9ee3aaf03bafeb0a3822b8e527de5f9c4.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 147, + 116, + 212, + 189 + ], + "blocks": [ + { + "bbox": [ + 147, + 116, + 212, + 189 + ], + "lines": [ + { + "bbox": [ + 147, + 116, + 212, + 189 + ], + "spans": [ + { + "bbox": [ + 147, + 116, + 212, + 189 + ], + "type": "image", + "image_path": "f64c83c40a88994e2dfaca8aa2cf03c018453506feb33fdb601dd36a44708b88.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 215, + 116, + 278, + 189 + ], + "blocks": [ + { + "bbox": [ + 215, + 116, + 278, + 189 + ], + "lines": [ + { + "bbox": [ + 215, + 116, + 278, + 189 + ], + "spans": [ + { + "bbox": [ + 215, + 116, + 278, + 189 + ], + "type": "image", + "image_path": "34247b72ece3fd158f11739e848980b0f8f1537557345aff54081f9874cd768f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 280, + 116, + 341, + 189 + ], + "blocks": [ + { + "bbox": [ + 280, + 116, + 341, + 189 + ], + "lines": [ + { + "bbox": [ + 280, + 116, + 341, + 189 + ], + "spans": [ + { + "bbox": [ + 280, + 116, + 341, + 189 + ], + "type": "image", + "image_path": "b5ec4d89e67965124a53b67f88b3f1dcf59387944f65e4d7ef87eb1ce5daac42.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 343, + 116, + 403, + 189 + ], + "blocks": [ + { + "bbox": [ + 343, + 116, + 403, + 189 + ], + "lines": [ + { + "bbox": [ + 343, + 116, + 403, + 189 + ], + "spans": [ + { + "bbox": [ + 343, + 116, + 403, + 189 + ], + "type": "image", + "image_path": "50569c3f807ccef899a707036a78ec87ac4d452541ecc509f33c5e90d7a63822.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 404, + 116, + 466, + 189 + ], + "blocks": [ + { + "bbox": [ + 404, + 116, + 466, + 189 + ], + "lines": [ + { + "bbox": [ + 404, + 116, + 466, + 189 + ], + "spans": [ + { + "bbox": [ + 404, + 116, + 466, + 189 + ], + "type": "image", + "image_path": "599f2ad08f66414ab14bbd05dc763ce73f2e572715b18cd27c6c97d2b6c2e7a4.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 161, + 209, + 299, + 315 + ], + "blocks": [ + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "lines": [ + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "spans": [ + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "type": "text", + "content": "(a) Re constructions using ZAPS for super-resolution " + }, + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "type": "inline_equation", + 
"content": "(\\times 4)" + }, + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "type": "text", + "content": " task with different total timesteps-epochs combinations for the same " + }, + { + "bbox": [ + 146, + 190, + 467, + 209 + ], + "type": "inline_equation", + "content": "\\mathrm{NFE} = 300" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 161, + 209, + 299, + 315 + ], + "lines": [ + { + "bbox": [ + 161, + 209, + 299, + 315 + ], + "spans": [ + { + "bbox": [ + 161, + 209, + 299, + 315 + ], + "type": "image", + "image_path": "4a77e8cb353e83e072792abd2c448dd80a1b0c817e8cbb7c219f9385ea928ca7.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 162, + 318, + 296, + 327 + ], + "lines": [ + { + "bbox": [ + 162, + 318, + 296, + 327 + ], + "spans": [ + { + "bbox": [ + 162, + 318, + 296, + 327 + ], + "type": "text", + "content": "(b) Loss graphs for each combination." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 131, + 337, + 482, + 370 + ], + "lines": [ + { + "bbox": [ + 131, + 337, + 482, + 370 + ], + "spans": [ + { + "bbox": [ + 131, + 337, + 482, + 370 + ], + "type": "text", + "content": "Fig. 5: Study on different epochs and sampling steps combinations with fixed NFE. Results show similar quality for combinations with lower timestep approaches staring from higher loss/lower PSNR but converging to similar values." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 315, + 210, + 453, + 315 + ], + "blocks": [ + { + "bbox": [ + 315, + 210, + 453, + 315 + ], + "lines": [ + { + "bbox": [ + 315, + 210, + 453, + 315 + ], + "spans": [ + { + "bbox": [ + 315, + 210, + 453, + 315 + ], + "type": "image", + "image_path": "e48bd80e110129c20ea6cd0278a99d0542409bd5a9b310897a5db284d56c8db7.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 315, + 318, + 453, + 327 + ], + "lines": [ + { + "bbox": [ + 315, + 318, + 453, + 327 + ], + "spans": [ + { + "bbox": [ + 315, + 318, + 453, + 327 + ], + "type": "text", + "content": "(c) PSNR graphs for each combination." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 395, + 482, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 395, + 482, + 539 + ], + "spans": [ + { + "bbox": [ + 130, + 395, + 482, + 539 + ], + "type": "text", + "content": "investigated total NFEs of 100, 300, and 500 to demonstrate the robustness of our approach, given its adaptable parameters, as previously discussed. For 100 NFEs, we applied 20 steps (schedule = \"10,7,3\") with 5 epochs, whereas for 300 and 500 NFEs, we applied 30 steps (schedule = \"15,10,5\") and 50 steps (schedule = \"30,15,5\"), respectively, for 10 epochs. Additionally, we also implemented ZAPS with uniformly spaced noise schedules to highlight the benefits of the proposed irregular noise schedules. As seen in Tabs. 4 and 5, ZAPS with irregular noise schedules outperforms the state-of-the-art methods for NFE budgets of 100, 300 and 500 in super-resolution and random inpainting tasks. We note that we could not perform this test for deblurring experiments as IIGDM could not be implemented reliably across the database, as previously mentioned. 
We also note that the difference between irregular and uniform noise schedules for ZAPS is" + } + ] + } + ], + "index": 13 + }, + { + "type": "table", + "bbox": [ + 133, + 592, + 480, + 664 + ], + "blocks": [ + { + "bbox": [ + 131, + 560, + 482, + 583 + ], + "lines": [ + { + "bbox": [ + 131, + 560, + 482, + 583 + ], + "spans": [ + { + "bbox": [ + 131, + 560, + 482, + 583 + ], + "type": "text", + "content": "Table 4: Quantitative results for super-resolution " + }, + { + "bbox": [ + 131, + 560, + 482, + 583 + ], + "type": "inline_equation", + "content": "(\\times 4, \\sigma = 0.05)" + }, + { + "bbox": [ + 131, + 560, + 482, + 583 + ], + "type": "text", + "content": " on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 592, + 480, + 664 + ], + "lines": [ + { + "bbox": [ + 133, + 592, + 480, + 664 + ], + "spans": [ + { + "bbox": [ + 133, + 592, + 480, + 664 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=\"2\">Method</td><td colspan=\"3\">NFE=100</td><td colspan=\"3\">NFE=300</td><td colspan=\"3\">NFE=500</td></tr>
<tr><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td></tr>
<tr><td>DPS [6]</td><td>0.344</td><td>0.478</td><td>16.96</td><td>0.257</td><td>0.577</td><td>20.01</td><td>0.218</td><td>0.623</td><td>21.52</td></tr>
<tr><td>IIGDM [31]</td><td>0.131</td><td>0.760</td><td>24.48</td><td>0.117</td><td>0.758</td><td>24.80</td><td>0.123</td><td>0.762</td><td>24.25</td></tr>
<tr><td>ZAPS (Uniform)</td><td>0.108</td><td>0.749</td><td>25.92</td><td>0.119</td><td>0.729</td><td>26.29</td><td>0.115</td><td>0.756</td><td>25.63</td></tr>
<tr><td>ZAPS (Irregular)</td><td>0.106</td><td>0.741</td><td>26.08</td><td>0.104</td><td>0.768</td><td>26.63</td><td>0.095</td><td>0.770</td><td>26.26</td></tr>
</table>
", + "image_path": "f41d9c78e14f9445cb4cf2deeea41cdcada8dff391b7ab51fbb353a6836aed90.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "table_body" + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 146, + 479, + 218 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "type": "text", + "content": "Table 5: Quantitative results for random inpainting (70%, σ = 0.05) on FFHQ dataset using the same NFE for each method. Best: bold, second-best: underlined." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 146, + 479, + 218 + ], + "lines": [ + { + "bbox": [ + 133, + 146, + 479, + 218 + ], + "spans": [ + { + "bbox": [ + 133, + 146, + 479, + 218 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan=\"2\">Method</td><td colspan=\"3\">NFE=100</td><td colspan=\"3\">NFE=300</td><td colspan=\"3\">NFE=500</td></tr>
<tr><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td><td>LPIPS↓</td><td>SSIM↑</td><td>PSNR↑</td></tr>
<tr><td>DPS [6]</td><td>0.268</td><td>0.593</td><td>20.01</td><td>0.189</td><td>0.704</td><td>23.74</td><td>0.152</td><td>0.754</td><td>25.59</td></tr>
<tr><td>IIGDM [31]</td><td>0.086</td><td>0.842</td><td>26.62</td><td>0.080</td><td>0.849</td><td>25.06</td><td>0.082</td><td>0.845</td><td>24.94</td></tr>
<tr><td>ZAPS (Uniform)</td><td>0.122</td><td>0.780</td><td>26.20</td><td>0.127</td><td>0.773</td><td>25.87</td><td>0.080</td><td>0.791</td><td>26.94</td></tr>
<tr><td>ZAPS (Irregular)</td><td>0.085</td><td>0.794</td><td>27.03</td><td>0.078</td><td>0.813</td><td>27.79</td><td>0.071</td><td>0.818</td><td>28.11</td></tr>
</table>
", + "image_path": "1e787a9dbb9a7f34b6a9af07304e54281b2add3c001034bcfad55de345ee6899.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 237, + 479, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 237, + 479, + 260 + ], + "spans": [ + { + "bbox": [ + 132, + 237, + 479, + 260 + ], + "type": "text", + "content": "less pronounced for 100 NFEs, but the advantage of irregular schedules becomes apparent for 300 and 500 NFEs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "spans": [ + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "text", + "content": "The final ablation study, exploring the benefits of using distinct weights " + }, + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "inline_equation", + "content": "\\zeta_t" + }, + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "text", + "content": " for each timestep versus a shared weight " + }, + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "inline_equation", + "content": "\\zeta" + }, + { + "bbox": [ + 132, + 261, + 479, + 285 + ], + "type": "text", + "content": " for every step, is provided in SuppMat." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 300, + 217, + 310 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 300, + 217, + 310 + ], + "spans": [ + { + "bbox": [ + 132, + 300, + 217, + 310 + ], + "type": "text", + "content": "4.4 Limitations" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "spans": [ + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "text", + "content": "The loss function we use, " + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "inline_equation", + "content": "\\mathcal{L}(\\mathbf{y},\\mathbf{x}_0) = ||\\mathbf{y} - \\mathbf{A}\\mathbf{x}_0||_2^2" + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "text", + "content": ", resembles a deep image prior-like loss [38]. However, note that there is a subtle difference in our context, where it corresponds to the log-likelihood of " + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "inline_equation", + "content": "p(\\mathbf{y}|\\mathbf{x}_0)" + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "text", + "content": ", which is different then the (approximate) log-likelihood guidance term " + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "inline_equation", + "content": "p(\\mathbf{y}|\\mathbf{x}_t)" + }, + { + "bbox": [ + 132, + 316, + 481, + 472 + ], + "type": "text", + "content": " used at each time-step. This allows for more robustness to overfitting that is typically observed in DIP-type methods. Further overfitting avoidance measures can be taken by data-splitting [3, 23, 26, 41, 42], though this was not necessary for the small number of epochs used for fine-tuning. Additionally, while our approximation in Eq. (22) produces competitive results, it is important to keep in mind that wavelets may not fully decorrelate the observed Fisher information matrix. 
Finally, we note that while we chose DPS as a baseline for its versatility in inverse problem tasks, the adaptive weighting strategy in ZAPS, as well as our Hessian approximation, are applicable to other posterior sampling diffusion models for inverse problems." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 487, + 218, + 499 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 487, + 218, + 499 + ], + "spans": [ + { + "bbox": [ + 132, + 487, + 218, + 499 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "text", + "content": "In this work, we proposed a novel approach named zero-shot approximate posterior sampling (ZAPS), which harnesses zero-shot learning for dynamic automated hyperparameter tuning during the inference phase to enhance the reconstruction quality of solving linear noisy inverse problems using diffusion models. In particular, learning the log-likelihood weights facilitates the usage of more complex and irregular noise schedules, whose feasibility for inverse problems was shown, to the best of our knowledge, for the first time in this paper. These irregular noise schedules enabled high quality reconstructions with " + }, + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "inline_equation", + "content": "20 - 50 \\times" + }, + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "text", + "content": " fewer timesteps. When number of epochs for fine-tuning is also considered, our approach results in a speed boost of approximately " + }, + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "inline_equation", + "content": "3 \\times" + }, + { + "bbox": [ + 132, + 510, + 481, + 665 + ], + "type": "text", + "content": " compared to state-of-the-art methods like DPS. Quantitative and qualitative evaluations on natural images illustrate our method's ability to attain state-of-the-art performance across diverse inverse problem tasks." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 140, + 481, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 140, + 481, + 152 + ], + "spans": [ + { + "bbox": [ + 132, + 140, + 481, + 152 + ], + "type": "text", + "content": "This work was partially supported by NIH R01HL153146 and NIH R01EB032830." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 170, + 197, + 182 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 170, + 197, + 182 + ], + "spans": [ + { + "bbox": [ + 133, + 170, + 197, + 182 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 194, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 138, + 194, + 480, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 194, + 480, + 217 + ], + "spans": [ + { + "bbox": [ + 138, + 194, + 480, + 217 + ], + "type": "text", + "content": "1. Alcaraz, J.M.L., Strodthoff, N.: Diffusion-based time series imputation and forecasting with structured state space models. arXiv preprint arXiv:2208.09399 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 217, + 480, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 217, + 480, + 249 + ], + "spans": [ + { + "bbox": [ + 138, + 217, + 480, + 249 + ], + "type": "text", + "content": "2. Baranchuk, D., Rubachev, I., Voynov, A., Khrulkov, V., Babenko, A.: Label-efficient semantic segmentation with diffusion models. International Conference on Learning Representations (2021)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 250, + 480, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 250, + 480, + 271 + ], + "spans": [ + { + "bbox": [ + 138, + 250, + 480, + 271 + ], + "type": "text", + "content": "3. Batson, J., Royer, L.: Noise2self: Blind denoising by self-supervision. In: International Conference on Machine Learning. pp. 524-533. PMLR (2019)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 272, + 480, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 272, + 480, + 304 + ], + "spans": [ + { + "bbox": [ + 138, + 272, + 480, + 304 + ], + "type": "text", + "content": "4. Chan, S.H., Wang, X., Elgendy, O.A.: Plug-and-play admm for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3(1), 84-98 (2016)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 304, + 480, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 304, + 480, + 336 + ], + "spans": [ + { + "bbox": [ + 138, + 304, + 480, + 336 + ], + "type": "text", + "content": "5. Choi, J., Kim, S., Jeong, Y., Gwon, Y., Yoon, S.: Ilvr: Conditioning method for denoising diffusion probabilistic models. in 2021 ieee. In: CVF international conference on computer vision (ICCV). pp. 14347-14356 (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 337, + 480, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 337, + 480, + 369 + ], + "spans": [ + { + "bbox": [ + 138, + 337, + 480, + 369 + ], + "type": "text", + "content": "6. Chung, H., Kim, J., Mccann, M.T., Klasky, M.L., Ye, J.C.: Diffusion posterior sampling for general noisy inverse problems. International Conference on Learning Representations (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 370, + 480, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 370, + 480, + 403 + ], + "spans": [ + { + "bbox": [ + 138, + 370, + 480, + 403 + ], + "type": "text", + "content": "7. Chung, H., Sim, B., Ryu, D., Ye, J.C.: Improving diffusion models for inverse problems using manifold constraints. 
Advances in Neural Information Processing Systems (2022)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 403, + 480, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 403, + 480, + 446 + ], + "spans": [ + { + "bbox": [ + 138, + 403, + 480, + 446 + ], + "type": "text", + "content": "8. Chung, H., Sim, B., Ye, J.C.: Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 447, + 480, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 447, + 480, + 479 + ], + "spans": [ + { + "bbox": [ + 138, + 447, + 480, + 479 + ], + "type": "text", + "content": "9. Cohen, R., Blau, Y., Freedman, D., Rivlin, E.: It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. Advances in Neural Information Processing Systems 34, 18152-18164 (2021)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 479, + 480, + 512 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 479, + 480, + 512 + ], + "spans": [ + { + "bbox": [ + 138, + 479, + 480, + 512 + ], + "type": "text", + "content": "0. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. pp. 248-255. IEEE (2009)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 513, + 480, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 513, + 480, + 533 + ], + "spans": [ + { + "bbox": [ + 138, + 513, + 480, + 533 + ], + "type": "text", + "content": "1. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in neural information processing systems 34, 8780-8794 (2021)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 534, + 480, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 534, + 480, + 566 + ], + "spans": [ + { + "bbox": [ + 138, + 534, + 480, + 566 + ], + "type": "text", + "content": "2. Ghael, S., Sayeed, A.M., Baraniuk, R.G.: Improved wavelet denoising via empirical wiener filtering. In: SPIE Technical Conference on Wavelet Applications in Signal Processing (1997)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 567, + 480, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 567, + 480, + 599 + ], + "spans": [ + { + "bbox": [ + 138, + 567, + 480, + 599 + ], + "type": "text", + "content": "3. Graikos, A., Malkin, N., Jojic, N., Samaras, D.: Diffusion models as plug-and-play priors. Advances in Neural Information Processing Systems 35, 14715-14728 (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 600, + 480, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 600, + 480, + 632 + ], + "spans": [ + { + "bbox": [ + 138, + 600, + 480, + 632 + ], + "type": "text", + "content": "4. Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: Proceedings of the 27th international conference on international conference on machine learning. pp. 
399-406 (2010)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "type": "text", + "content": "5. Hammernik, K., Küstner, T., Yaman, B., Huang, Z., Rueckert, D., Knoll, F., Akçakaya, M.: Physics-driven deep learning for computational magnetic resonance imaging. IEEE Sig Proc Mag 40, 98-114 (2023)" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "type": "text", + "content": "16. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840-6851 (2020)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 138, + 480, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 138, + 480, + 171 + ], + "spans": [ + { + "bbox": [ + 133, + 138, + 480, + 171 + ], + "type": "text", + "content": "17. Hoogeboom, E., Nielsen, D., Jaini, P., Forre, P., Welling, M.: Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems 34, 12454-12465 (2021)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 171, + 480, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 171, + 480, + 203 + ], + "spans": [ + { + "bbox": [ + 133, + 171, + 480, + 203 + ], + "type": "text", + "content": "18. Kadkhodaie, Z., Simoncelli, E.: Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems 34, 13242-13254 (2021)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 203, + 480, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 203, + 480, + 236 + ], + "spans": [ + { + "bbox": [ + 133, + 203, + 480, + 236 + ], + "type": "text", + "content": "19. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. Advances in Neural Information Processing Systems 35, 26565-26577 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 236, + 480, + 268 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 236, + 480, + 268 + ], + "spans": [ + { + "bbox": [ + 132, + 236, + 480, + 268 + ], + "type": "text", + "content": "20. 
Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) pp. 4396-4405 (2019)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 268, + 480, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 268, + 480, + 289 + ], + "spans": [ + { + "bbox": [ + 132, + 268, + 480, + 289 + ], + "type": "text", + "content": "21. Kawar, B., Elad, M., Ermon, S., Song, J.: Denoising diffusion restoration models. In: Advances in Neural Information Processing Systems (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 289, + 480, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 289, + 480, + 322 + ], + "spans": [ + { + "bbox": [ + 132, + 289, + 480, + 322 + ], + "type": "text", + "content": "22. Knoll, F., Hammernik, K., Zhang, C., Moeller, S., Pock, T., Sodickson, D.K., Akçakaya, M.: Deep learning methods for parallel magnetic resonance imaging reconstruction. IEEE Sig Proc Mag 37, 128-140 (2020)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 322, + 480, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 322, + 480, + 354 + ], + "spans": [ + { + "bbox": [ + 132, + 322, + 480, + 354 + ], + "type": "text", + "content": "23. Krull, A., Buchholz, T.O., Jug, F.: Noise2void-learning denoising from single noisy images. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2129-2137 (2019)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 354, + 480, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 354, + 480, + 386 + ], + "spans": [ + { + "bbox": [ + 132, + 354, + 480, + 386 + ], + "type": "text", + "content": "24. Laumont, R., Bortoli, V.D., Almansa, A., Delon, J., Durmus, A., Pereyra, M.: Bayesian imaging using plug & play priors: when Langevin meets tweedie. SIAM Journal on Imaging Sciences 15(2), 701-737 (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 386, + 480, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 386, + 480, + 407 + ], + "spans": [ + { + "bbox": [ + 132, + 386, + 480, + 407 + ], + "type": "text", + "content": "25. Mardani, M., Song, J., Kautz, J., Vahdat, A.: A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391 (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "spans": [ + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "type": "text", + "content": "26. Moran, N., Schmidt, D., Zhong, Y., Coady, P.: Noisier2noise: Learning to denoise from unpaired noisy data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12064-12072 (2020)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "spans": [ + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "type": "text", + "content": "27. Qu, Y., Zheng, N., Li, C.: Using wavelet transform to estimate the eigenfunctions of karhunen-loeve expansion. In: Wavelet Analysis and Its Applications, and Active Media Technology, pp. 39-44. 
World Scientific (2004)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 472, + 480, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 472, + 480, + 505 + ], + "spans": [ + { + "bbox": [ + 132, + 472, + 480, + 505 + ], + "type": "text", + "content": "28. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: International conference on machine learning. pp. 2256-2265. PMLR (2015)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 505, + 480, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 505, + 480, + 536 + ], + "spans": [ + { + "bbox": [ + 132, + 505, + 480, + 536 + ], + "type": "text", + "content": "29. Song, B., Kwon, S.M., Zhang, Z., Hu, X., Qu, Q., Shen, L.: Solving inverse problems with latent diffusion models via hard data consistency. arXiv preprint arXiv:2307.08123 (2023)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 536, + 480, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 536, + 480, + 559 + ], + "spans": [ + { + "bbox": [ + 132, + 536, + 480, + 559 + ], + "type": "text", + "content": "30. Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. International Conference on Learning Representations (2020)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 559, + 480, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 559, + 480, + 590 + ], + "spans": [ + { + "bbox": [ + 132, + 559, + 480, + 590 + ], + "type": "text", + "content": "31. Song, J., Vahdat, A., Mardani, M., Kautz, J.: Pseudoinverse-guided diffusion models for inverse problems. In: International Conference on Learning Representations (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 590, + 480, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 590, + 480, + 612 + ], + "spans": [ + { + "bbox": [ + 132, + 590, + 480, + 612 + ], + "type": "text", + "content": "32. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems 32 (2019)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 612, + 480, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 612, + 480, + 633 + ], + "spans": [ + { + "bbox": [ + 132, + 612, + 480, + 633 + ], + "type": "text", + "content": "33. Song, Y., Shen, L., Xing, L., Ermon, S.: Solving inverse problems in medical imaging with score-based generative models. arXiv preprint arXiv:2111.08005 (2021)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 633, + 480, + 665 + ], + "type": "text", + "content": "34. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. 
International Conference on Learning Representations (2020)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 295, + 102 + ], + "type": "text", + "content": "Y. U. Alçalar and M. Akçakaya" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 402 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "text", + "content": "35. Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., Hardt, M.: Test-time training with self-supervision for generalization under distribution shifts. In: International conference on machine learning. pp. 9229-9248. PMLR (2020)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 150, + 481, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 150, + 481, + 171 + ], + "spans": [ + { + "bbox": [ + 130, + 150, + 481, + 171 + ], + "type": "text", + "content": "36. Taam, W., Yandell, B.S.: Approximate Diagonalization of Spatial Covariance. University of Wisconsin, Department of Statistics (1987)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 172, + 481, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 172, + 481, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 172, + 481, + 205 + ], + "type": "text", + "content": "37. Tumanyan, N., Geyer, M., Bagon, S., Dekel, T.: Plug-and-play diffusion features for text-driven image-to-image translation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1921-1930 (2023)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "spans": [ + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "type": "text", + "content": "38. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9446-9454 (2018)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 228, + 481, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 228, + 481, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 228, + 481, + 248 + ], + "type": "text", + "content": "39. Vincent, P.: A connection between score matching and denoising autoencoders. Neural computation 23(7), 1661-1674 (2011)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 249, + 481, + 281 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 481, + 281 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 481, + 281 + ], + "type": "text", + "content": "40. Wang, Y., Yu, J., Zhang, J.: Zero-shot image restoration using denoising diffusion null-space model. 
The Eleventh International Conference on Learning Representations (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 282, + 481, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 282, + 481, + 314 + ], + "spans": [ + { + "bbox": [ + 132, + 282, + 481, + 314 + ], + "type": "text", + "content": "41. Yaman, B., Hosseini, S.A.H., Moeller, S., Ellermann, J., Ugurbil, K., Akçakaya, M.: Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 84(6), 3172-3191 (Dec 2020)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "spans": [ + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "type": "text", + "content": "42. Yaman, B., Hosseini, S.A.H., Akçakaya, M.: Zero-shot self-supervised learning for MRI reconstruction. Proc ICLR (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "type": "text", + "content": "43. Yang, L., Ding, S., Cai, Y., Yu, J., Wang, J., Shi, Y.: Guidance with spherical gaussian constraint for conditional diffusion. In: International Conference on Machine Learning (2024)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "spans": [ + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "type": "text", + "content": "44. Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., Gool, L.V.: Denoising diffusion models for plug-and-play image restoration. 
In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (NTIRE) (2023)" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 270, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Approximate Posterior Sampling" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_content_list.json b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3829ab0ca63d97ea21fed1e2676fb2d36e66fbb8 --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_content_list.json @@ -0,0 +1,1693 @@ +[ + { + "type": "text", + "text": "Zero-Shot Detection of AI-Generated Images", + "text_level": 1, + "bbox": [ + 243, + 141, + 758, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Davide Cozzolino $^{1}$ , Giovanni Poggi $^{1}$ , Matthias Nießner $^{2}$ , and Luisa Verdoliva $^{1,2}$", + "bbox": [ + 261, + 188, + 740, + 219 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 University Federico II of Naples, 80125 Naples, Italy", + "bbox": [ + 318, + 231, + 683, + 247 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2 Technical University of Munich, 85748 Garching, Germany {davide.cozzolino, poggi, verdoliv}@unina.it, niessner@tum.de", + "bbox": [ + 271, + 247, + 730, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with more and more capabilities and unprecedented realism. New versions of many commercial tools, such as DALL-E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. 
On a wide variety of generative models it achieves an average improvement of more than $3\\%$ over the SoTA in terms of accuracy. Code is available at https://grip-unina.github.io/ZED/.", + "bbox": [ + 261, + 303, + 743, + 609 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 215, + 630, + 375, + 645 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The quality of AI-generated images has improved tremendously in recent years, to the point where they are virtually indistinguishable from real images upon visual inspection. In addition, the latest generators are widely available online and allow easy creation and retouching of images based on simple textual prompts. All this opens the way to endless application opportunities in a variety of fields, from the creative arts to industries of all kinds. However, on the flip side, such tools can be also used for malicious purposes, thus posing serious threats to our society. For example, pre-trained generators can be easily optimized to generate fake works by a specific artist [31], or used to orchestrate effective, large-scale disinformation campaigns to influence public opinion in advanced democracies [20]. These immediate risks create an urgent need for reliable and automated detection of AI-generated images [41].", + "bbox": [ + 212, + 657, + 787, + 840 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/1861bae1c211181a4ebb9c70feb93a8a2ecf71a22074b8febb69e2f5c4f61f21.jpg", + "image_caption": [ + "Fig. 1: ZED leverages the intrinsic model of real images learned by a state-of-the-art lossless image coder. For real images, the model is correct and the actual coding cost is close its expected value. Synthetic images have different statistics than real images, so they \"surprise\" the encoder, and the actual coding cost differs significantly from its expected value. This is evident from the graphic on the right that shows how the coding cost gap increases for synthetic images much more than for real ones when predicting high resolution details from low resolution data." + ], + "image_footnote": [], + "bbox": [ + 223, + 146, + 584, + 256 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/7df157d8f47b6ef8c4e992f84e6981c61fe476db9c268abc7921986a937978cf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 616, + 145, + 781, + 258 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Until very recently, supervised learning paradigms dominated the image forensics community, with deep models trained on large datasets of real and fake images [64]. These approaches, however, are tailored to specific domains and are difficult to generalize to unseen deepfake samples. In the seminal paper by Wang et al. [74], it is shown that a simple detector trained only on ProGAN images from 20 different categories generalizes well to other images created by different generative adversarial networks (GAN) thanks to suitable augmentation. However, performance still suffers on images generated by prompt-driven diffusion models (DM). Similarly, a detector suitably trained on Latent DM images performs well on all other DM images but fails to generalize properly on GAN images [10]. To reduce the dependence on training data, recent works [2, 11, 51, 67] rely on general-purpose features extracted by pre-trained visual-language models, such as CLIP (Contrastive Language-Image Pre-Training) [56]. 
Despite the good performance, these methods still depend on the choice of the training dataset. A recent trend to improve generalization is based on few-shot methods [12, 17, 33] which can partially solve the problem, but still require some prior knowledge of the target models, even if limited to a few images. With this work we make a step further and develop an approach that is not influenced at all by newer and previously unseen generative models.", + "bbox": [ + 212, + 383, + 787, + 671 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To this end, we propose a zero-shot detection method that only requires real images for learning their underlying distribution. Our key idea is to use lossless coding and a multi-resolution prediction strategy for computing conditional distributions of all image pixels at three different levels of resolution. Given such distributions, we compute statistics related to the actual and expected coding cost. If the image is coherent with the predicted distribution (no surprise), then there is no mismatch and the image under analysis is labelled as real. We expect synthetic images to be characterized by a higher coding cost under the distribution of real images (see Fig. 1). Based on this intuition, we design discriminative features that measure how well the image under test fits the model of real images embedded in the encoder. Even by using a single feature, we can obtain", + "bbox": [ + 212, + 672, + 789, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 227, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 127 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "significant performance above $95\\%$ in terms of AUC for several recent models, such as DALL·E, Midjourney, and SDXL.", + "bbox": [ + 212, + 145, + 782, + 175 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In summary, the main contributions of this paper are the following:", + "bbox": [ + 238, + 176, + 723, + 191 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- we propose a zero-shot detector of artificially generated images: no fake images are necessary for training which guarantees independence from any specific generation method;", + "- this is the first work that exploits an implicit model of real images, learnt for lossless encoding to address image forensics task;", + "- our experiments show on a wide variety of generative models that even using a single feature the proposed detector provides state-of-the-art results $(+3.4\\%$ in terms of accuracy)." + ], + "bbox": [ + 225, + 205, + 782, + 325 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related work", + "text_level": 1, + "bbox": [ + 214, + 354, + 380, + 369 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Supervised learning. The problem of distinguishing synthetic images from real ones is commonly formulated as a binary classification task. State-of-the-art methods explicitly or implicitly exploit forensic artifacts by leveraging a large amount of real and generated images. Some of them rely on semantic flaws, such as face asymmetries [4] or incorrect perspective, lighting, shadows [21, 22, 65]. However, technology advances very quickly and such errors will very likely disappear in next-generation tools. Therefore, most methods focus on low-level and inconspicuous artifacts [9, 18]. 
Major efforts have been made to prevent conventional supervised detectors from overfitting the training data. Popular recipes include using datasets as varied as possible with intense augmentation [74], pre-training models on large general-purpose datasets [46], preserving fine-grain details of images [7, 27], exploiting high-frequency artifacts in the spatial [43, 68, 72] or Fourier domain [18, 24, 78], leveraging inter-pixel correlation discrepancies [71, 79], adopting inversion techniques [1, 75].", + "bbox": [ + 212, + 378, + 785, + 589 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "With the advent of diffusion models that presents significant architectural differences with GANs, the importance to design methods that work equally well on known and unknown sources became even more evident [10]. An important finding was the increased generalization that could be achieved using pre-trained large vision-language models, such as CLIP-ViT [51]. In this case only a lightweight linear classifier is trained on top of these features to adapt to the forensic task. Very good performance is obtained on DMs even if the network was trained only on GANs. Other methods also show the potential of such approach [2, 11, 59], sometimes including multimodal features [44, 67].", + "bbox": [ + 212, + 590, + 785, + 726 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Some supervised methods assume to have only real images available and create the synthetic images needed for training by simulating the artifacts introduced by a generator, for example by passing real images through an autoencoder [24,34,78]. The more generative architectures are simulated, the more effective is the detector. Of course, the performance degrades on images generated by an architecture not considered in the simulation phase. Differently from all these methods our approach does not require collecting or generating synthetic images thus avoiding any type of dependence on this class.", + "bbox": [ + 212, + 727, + 785, + 848 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 114, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Few-shot/incremental learning. A significant step towards improved generalization is the use of few-shot or incremental learning strategies [12, 17, 33, 47]. Along this path, a recent work [19] proposes to regularly re-train a detector on new synthetic generators in the very same temporal order of their release, as in a real-world scenario. Results show a good generalization to unseen models, but only as long as the architecture of new generators is similar to that of old ones. Although few-shot methods represent an important progress in reducing the dependence on training data, the ultimate goal is to remove this dependence entirely to ensure maximum generalization. In pursuit of this goal, in this work we propose a truly zero-shot detector.", + "bbox": [ + 212, + 146, + 787, + 297 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Zero-shot learning. Only a few very recent papers avoid training on synthetic data altogether. A solution was proposed in [60] based on the observation that synthetic images are reconstructed more accurately than real images by a latent DM autoencoder. 
The main limitation is that the method only reliably detects images generated by latent diffusion models. The method in [30], instead, exploits the fact that small perturbations of [real/synthetic] images correspond to [small/large] variations in the embedding space of a pre-trained large model. Differently from these strategies our work takes inspiration from some interesting proposals that have recently appeared for synthetic text detection [25,29,49,69]. They exploit the fact that LLMs (Large Language Models) work by generating the probability distribution of the next token given the previous ones. In the generation phase, new tokens are sequentially added to a sentence based on these distributions. In the analysis phase, one can replicate the process for a given sentence under test and measure how well the actual tokens match the predicted ones. A good match suggests that the sentence was indeed generated by an LLM. Although inspired by these methods, our zero-shot synthetic image detector differs from them because it leverages a model of real images and does not depend in any way on synthetic data or generators. Moreover, to build the model we take advantage of the remarkable field-proved ability of lossless encoders to accurately describe pixels based on their context.", + "bbox": [ + 212, + 310, + 787, + 613 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 215, + 636, + 330, + 652 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Background", + "text_level": 1, + "bbox": [ + 215, + 669, + 362, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here we provide some background on zero-shot methods that leverage large pre-trained language models for machine-generated text detection. They exploit the native functionality of these models to provide next-token predictions [29]. Before a string of characters $s$ can be processed by a language model, it must be parsed into a sequence of tokens (mostly words). The tokenizer $T$ outputs a list of indices", + "bbox": [ + 212, + 695, + 787, + 784 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nT: s \\rightarrow \\left\\{x _ {0}, x _ {1}, \\dots , x _ {L} \\right\\}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 411, + 786, + 785, + 803 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $x_{i} \\in \\{1, \\dots, n\\}$ is the index of the $i$ -th token of the sequence, addressing a size- $n$ vocabulary of tokens. The language model operates by predicting the next", + "bbox": [ + 212, + 809, + 785, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 128 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "index-token given the list of previous ones, thereby allowing for the generation of a full sentence given just a short prompt. Actually, language models output more information than just the index of the most likely token. 
Given the list of previous indices $X_{i} = \\{x_{0},\\ldots ,x_{i - 1}\\}$ , they provide the probability of all possible values of the current one, that is, $P(x_{i} = k|X_{i})$ , for $k = 1,\\dots ,n$ .", + "bbox": [ + 212, + 146, + 782, + 222 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The idea is to exploit this functionality to measure the conformity of the string under analysis to the LLM intrinsic model of language. That is, these methods try to answer the question \"How likely is it that this sentence was generated by my LLM?\" Hence they compute (for free) the likelihood of the given list of indices under the probability distribution learned by the LLM", + "bbox": [ + 212, + 222, + 782, + 297 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nP \\left(x _ {0}, \\dots , x _ {L}\\right) = P \\left(x _ {0}\\right) \\cdot P \\left(x _ {1} \\mid x _ {0}\\right) \\cdot \\dots \\cdot P \\left(x _ {L} \\mid x _ {0}, \\dots , x _ {L - 1}\\right) = P \\left(x _ {0}\\right) \\prod_ {i = 1} ^ {L} P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 215, + 309, + 782, + 364 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In practice, the negative log-likelihood (also called log-perplexity) is computed instead, that is (neglecting $x_0$ )", + "bbox": [ + 212, + 364, + 782, + 395 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {N L L} = - \\sum_ {i = 1} ^ {L} \\log P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 406, + 406, + 782, + 446 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "If the $i$ -th observed index $x_{i}$ was very likely to come after the previous ones, namely, it is not surprising, its contribution to the NLL is close to 0. On the contrary, if it was unlikely to appear, given the previous ones (an anomaly) it impacts significantly on the NLL. Overall, a sequence with low NLL is likely to have been generated by the LLM, and will be therefore detected as synthetic. Of course, this basic description is only meant to convey the general concepts, the reader is referred to the literature [26] for more details.", + "bbox": [ + 212, + 458, + 784, + 564 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 From Text to Images", + "text_level": 1, + "bbox": [ + 214, + 587, + 434, + 603 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "When we try to translate the above concepts into the realm of images, we run into a big problem: the most effective and popular image generation engines do not provide anything similar to the next token distribution observed in the case of LLMs. Indeed, there exist some autoregressive synthesis methods [45,58] that could be adapted to this task, but their generation approach is very different from those of the most popular GAN- and DM-based methods. Therefore in this work we change perspective or, better said, we now assume the correct one-class perspective, and look for a model of real images, rather than synthetic ones. 
Armed with such a model, we will be able to decide whether a given image is unsurprising, therefore real, or somewhat anomalous, therefore synthetic, regardless of the specific generation model used to create it.", + "bbox": [ + 212, + 613, + 784, + 777 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Now, the concepts of prediction, surprise, perplexity, along with information measure and entropy, are pervasive in the literature on image coding, part of information theory. Lossless image encoders typically include a predictor that, given a suitable context, estimates the value of the target pixel, and an entropy", + "bbox": [ + 212, + 779, + 784, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 732, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 114, + 782, + 125 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "encoder that efficiently represents prediction errors. Indeed, by analyzing the recent literature in the field we managed to single out a tool that perfectly suits our needs, the Super-Resolution based lossless Compressor (SReC) proposed by Cao et al. [6], which provides a computationally lightweight tool for predicting the distribution of image pixels at multiple resolution.", + "bbox": [ + 212, + 146, + 787, + 223 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 Super-resolution based Lossless Compressor", + "text_level": 1, + "bbox": [ + 214, + 244, + 624, + 260 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Here we present a high-level description of SReC, focusing only on the aspects more relevant for our purposes. The interested reader is referred to the original paper for details [6]. The general idea is to train a neural network to predict the current pixel, $x_{i,j}$ , given a set of previously coded pixels, and encode the difference between the true pixel value and its prediction. However, this purely autoregressive formulation is highly impractical, as it implies long encoding/decoding times. Therefore, SReC uses a multi-resolution prediction strategy. A low-resolution version $y^{(1)}$ of the original image $x^{(0)}$ is built through $2\\times 2$ average pooling, that is", + "bbox": [ + 212, + 270, + 787, + 407 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\ny _ {i, j} ^ {(1)} = \\frac {x _ {2 i , 2 j} ^ {(0)} + x _ {2 i + 1 , 2 j} ^ {(0)} + x _ {2 i , 2 j + 1} ^ {(0)} + x _ {2 i + 1 , 2 j + 1} ^ {(0)}}{4} \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 334, + 417, + 785, + 454 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Then, each four-pixel group of the high-resolution image is predicted based only on the low-resolution image, independent of other groups at the same resolution level, allowing for parallel processing and high-speed encoding. 
Since the fourth pixel of a group is determined by the other three and the corresponding low-resolution pixel, only three pixels per group need to be encoded and, given the low-resolution image, their conditional joint distribution reads", + "bbox": [ + 212, + 460, + 787, + 539 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} P \\left(x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, x _ {2 i, 2 j + 1} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) = P \\left(x _ {2 i, 2 j} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) \\cdot P \\left(x _ {2 i + 1, 2 j} ^ {(0)} \\mid x _ {2 i, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}\\right) \\tag {5} \\\\ \\cdot P (x _ {2 i, 2 j + 1} ^ {(0)} | x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}) \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 235, + 547, + 785, + 593 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $Y_{i,j}^{(1)}$ is the relevant context in the lower-resolution image, that is, a receptive field centered on $y_{i,j}^{(1)}$ . Each term in this factorization is estimated by a dedicated convolutional neural network (CNN). In particular, a parametric distribution is assumed, given by a mixture of $K$ discrete logistic distributions,", + "bbox": [ + 212, + 606, + 787, + 674 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nP (x | X) = \\sum_ {k = 1} ^ {K} w _ {k} \\operatorname{logistic} \\left(x \\mid \\mu_ {k}, s _ {k}\\right) \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 379, + 685, + 785, + 727 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\\mathrm{logistic}(x|\\mu, s) = \\sigma\\left(\\frac{x - \\mu + 0.5}{s}\\right) - \\sigma\\left(\\frac{x - \\mu - 0.5}{s}\\right)$ is the difference of two sigmoid functions, with position parameter $\\mu$ and scale parameter $s$ , and $K = 10$ is always assumed. The CNN takes the context $X$ of the pixel of interest as input and outputs the weights of the mixture together with the position and scale parameters of all logistic components. In turn, these parameters allow one to compute the desired distribution. This whole process is replicated on two more lower-resolution scales, for a total of four levels: the lowest-resolution one, an $8 \\times 8$ subsampled \"prompt\"", + "bbox": [ + 212, + 732, + 787, + 842 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 227, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 375, + 127 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/efc16c4e1e602383bd83a3a98e2d204a0bb468d420e4cb55038d4ab3ccbcebd8.jpg", + "image_caption": [ + "Real", + "Fig. 2: NLL and Entropy. We compute the spatial distribution of NLL and Entropy at three resolutions. For real images (top) the paired maps are very similar at all scales: when the uncertainty on a pixel (entropy) grows, the coding cost (NLL) grows as well. Therefore, the NLL-Entropy difference maps are all very dark. For synthetic images (bottom) NLL and Entropy maps are not always similar, because the model is not correct, and hence the difference maps are brighter, especially the high-resolution map." + ], + "image_footnote": [], + "bbox": [ + 263, + 148, + 769, + 354 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "image, is coded in the clear, while the three higher-resolution images are each predicted from their lower-resolution version. 
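To give a concrete feeling for Eq. (6), the toy snippet below evaluates a mixture of discretized logistics for integer pixel values; the function names, the vectorized form, and the handling of the extreme bins are our own illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic(x, mu, s):
    # probability of the integer value x under a logistic with position mu
    # and scale s, discretized over the bin [x - 0.5, x + 0.5]
    upper = np.where(x >= 255, 1.0, sigmoid((x + 0.5 - mu) / s))
    lower = np.where(x <= 0, 0.0, sigmoid((x - 0.5 - mu) / s))
    return upper - lower

def mixture_prob(x, weights, mus, scales):
    # Eq. (6): weighted sum of K discretized logistics (K = 10 in SReC)
    return sum(w * discretized_logistic(x, m, s)
               for w, m, s in zip(weights, mus, scales))
```

In SReC the weights, positions and scales are produced by the CNN from the context X of each pixel; here they are plain arrays passed in by the caller.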
All networks are trained to minimize the cross entropy between the predicted model probability $P_{\\theta}(x)$ and the empirical data distribution $P(x)$ given by the training image dataset. We mention in passing that this loss is closely related to the log-perplexity considered for text synthesis.", + "bbox": [ + 212, + 460, + 787, + 536 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To summarize, SReC provides us with a lightweight tool for computing conditional distributions of all image pixels at three different levels of resolution, and therefore to compute all kinds of statistics that can expose the mismatch between a test image and the learned model. Considering that SReC achieves state-of-the-art performance in lossless image compression, one can also argue that the learned model of real images is very accurate. Given this tool, we can now design a zero-shot detector of synthetic images.", + "bbox": [ + 212, + 537, + 787, + 642 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3.4 Features and Decision Statistics", + "text_level": 1, + "bbox": [ + 212, + 662, + 524, + 678 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Let $x \\in \\{0, \\ldots, 255\\}^{N \\times M \\times 3}$ be the image under test. In our multi-resolution framework, this will be the highest-resolution version, $x^{(0)} = x$ . Through $2 \\times 2$ average pooling, we generate a lower resolution version $y^{(1)} = \\mathrm{avpool}(x^{(0)})$ , and then, through rounding, its integer-valued version $x^{(1)} = \\mathrm{round}(y^{(1)})$ . The process is repeated, and eventually we have four integer versions of the image $\\{x^{(0)}, x^{(1)}, x^{(2)}, x^{(3)}\\}$ , together with three non-integer versions $\\{y^{(1)}, y^{(2)}, y^{(3)}\\}$ . In the context of lossless coding, the lowest resolution version, $x^{(3)}$ , must be sent in clear together with the rounding bits at levels 3, 2, and 1, but we mention this only for completeness and for a more compelling interpretation of results. The CNNs trained on real images provide the predicted probability distribution", + "bbox": [ + 212, + 688, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 732, + 128 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 784, + 125 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/21fdcf61e015902664f93189c579d3c6e08e3d04b3288e0f29544ae1cf64a3df.jpg", + "image_caption": [ + "Fig. 3: Extracting decision statistics. The full resolution image $x^{(0)}$ is downsampled three times. The lowest-resolution version, $x^{(3)}$ , feeds the level-2 CNN, which outputs the probability distributions of level-2 pixels. These distributions, together with the actual level-2 pixels, are used to compute the level-2 coding cost $\\mathrm{NLL}^{(2)}$ and its expected value $H^{(2)}$ . All these steps are then repeated for levels 1 and 0. Eventually, NLLs and entropies are combined to compute the decision statistics." 
+ ], + "image_footnote": [], + "bbox": [ + 218, + 146, + 785, + 280 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "for all pixels $^3$ of levels 0, 1, and 2", + "bbox": [ + 212, + 387, + 460, + 402 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\nP \\left(x _ {i, j} ^ {(l)} = k \\mid X _ {i, j} ^ {(l)}\\right) \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 439, + 410, + 785, + 433 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "where $k \\in \\{0, \\dots, 255\\}$ and $X_{i,j}^{(l)}$ is the context for pixel $x_{i,j}^{(l)}$ , including a portion of the lower-resolution image $y^{(l+1)}$ and possibly some same-resolution neighbors of the current pixel. Given the above distribution, we compute the negative log likelihood and the entropy at each pixel", + "bbox": [ + 212, + 440, + 787, + 507 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathrm {N L L} _ {i, j} ^ {(l)} = - \\log P (x _ {i, j} ^ {(l)} | X _ {i, j} ^ {(l)}) \\\\ H _ {i, j} ^ {(l)} = - \\sum_ {k} P (k | X _ {i, j} ^ {(l)}) \\log P (k | X _ {i, j} ^ {(l)}) \\tag {8} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 359, + 513, + 784, + 570 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "These quantities are shown in Fig.2 for two sample images, real and synthetic. Then, through spatial averaging, we obtain the corresponding quantities for the images at all resolution levels $\\mathrm{NLL}^{(l)} = \\langle \\mathrm{NLL}_{i,j}^{(l)}\\rangle$ and $H^{(l)} = \\langle H_{i,j}^{(l)}\\rangle$ , for $l = 0,1,2$ . These are the features associated by the system to input image and our decision statistics will be suitable combinations of them.", + "bbox": [ + 212, + 577, + 787, + 654 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Before going on, it is convenient to give a physical interpretation of these quantities. Each NLL can be interpreted as the actual coding cost for the corresponding image. While each entropy can be interpreted as the expected value of the coding cost given the context, when the image is coherent with the predicted distribution. In the presence of a mismatch, $\\mathrm{NLL} - H > 0$ , on the average, with a gap that increases with increasing distribution mismatch. Our fundamental assumption is that the trained CNNs provide a good model of real images, and synthetic images tend not to follow the same model. Therefore, we expect that synthetic images are characterized by higher coding cost, hence higher NLL, under this distribution. This observation would lead us to use the NLLs as decision", + "bbox": [ + 212, + 655, + 787, + 806 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 227, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 127 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "3 More precisely, all color components of all pixels, but to simplify notations, in the following we will neglect color and treat the image as if grayscale.", + "bbox": [ + 217, + 810, + 787, + 840 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "statistics. However, the coding cost does not depend only on the distribution mismatch but also (predominantly) on the intrinsic information content of the image, measured by the entropy. 
A complex image, say a photo of a crowd, is more difficult to encode/describe than a smooth image, say a blue sky, no matter what model we use. Therefore, to get rid of this bias, we consider the coding cost gap, defined as the difference $D^{(l)} = \\mathrm{NLL}^{(l)} - H^{(l)}$ , as decision statistic. Hence, for each image, we have three basic decision statistics, one for each resolution level. It is worth observing that some forms of normalization are adopted for machine generated text detection as well [29, 49, 70]. A block diagram of our method is shown in Fig.3.", + "bbox": [ + 212, + 146, + 787, + 297 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "A sample graph of the coding cost gap is shown in Fig.1, on the right. For real images and three families of synthetic images we report the average gap (solid line) plus/minus its standard deviation (colored band) for the various resolutions levels. Two important observations can be made. First of all, the level-0 coding cost gap, concerning the full resolution image, seems to be much more discriminant than the others. Moreover, the gap grows much faster for synthetic images than for real images when going from level 1 to level 0. Therefore, as decision statistics we will consider both $D^{(0)}$ (the level-0 coding cost gap) and $\\Delta^{01} = D^{(0)} - D^{(1)}$ (its slope). In addition, in preliminary experiments we observed that synthetic images are sometimes characterized by a coding cost much lower rather than much higher than expected, that is the NLL is much lower than the entropy. This is also an anomaly, which signals the likely synthetic nature of the image. Therefore, besides the above statistics we also consider their absolute values $|D^{(0)}|$ and $|\\Delta^{(01)}|$ . These observations are supported by the sample graphical analysis shown in Fig.5 in the ablation study.", + "bbox": [ + 212, + 297, + 787, + 525 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4 Results", + "text_level": 1, + "bbox": [ + 215, + 545, + 323, + 560 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Datasets and Metrics", + "text_level": 1, + "bbox": [ + 215, + 575, + 439, + 589 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We benchmarked our model on a large variety of synthetic generators both GANs and DMs: GauGAN [53], BigGAN [5], StarGAN [8], StyleGAN2 [38], DiffusionGAN [76], GigaGAN [35], GALIP [73], DDPM [32], ADM [16], GLIDE [50], Stable Diffusion [62, 63], DiT [54], DeepFloyd-IF [39], Stable Diffusion XL [55], DALL-E [14], DALL-E 2 [57], DALL-E 3 [52], Midjourney V5 [48], and Adobe Firefly [23]. We collected images from publicly available datasets [3,10,51,74] and generated additional images as needed when they were not publicly available. We ensured that all datasets included pristine and synthetic images with similar semantic content, both compressed and uncompressed, to avoid any kind of bias (see Fig.4). For some synthetic generators we have multiple datasets, built on the basis of different real image datasets LSUN [77], FFHQ [37], ImageNet [15], COCO [42], LAION [66] and RAISE [13]. This is a fortunate circumstance: we kept them carefully separate as this allows us to analyze how the performance of a detector depends on the class of real images used in the synthesis phase. Overall we used a total of $29\\mathrm{k}$ synthetic images and $6\\mathrm{k}$ real images. 
More details on the generated and actual images are provided in the supplementary material.", + "bbox": [ + 212, + 598, + 787, + 843 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 114, + 784, + 126 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/231219b8aa647713db5823eb166fc61ec2b1b695db14bba71ce64e96e2058439.jpg", + "image_caption": [ + "LSUN" + ], + "image_footnote": [], + "bbox": [ + 272, + 156, + 382, + 242 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/bb239cbf94a53ccb942aec78d1e3ee4b36954e1eb7f2c502e43523006d518b25.jpg", + "image_caption": [ + "FFHQ" + ], + "image_footnote": [], + "bbox": [ + 388, + 156, + 500, + 242 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/4a115c6e880666decef07509d79bc9ba88b0390e613c14fe5dc880a36a060486.jpg", + "image_caption": [ + "ImageNet" + ], + "image_footnote": [], + "bbox": [ + 504, + 156, + 614, + 242 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/b7bfe6362a66142dbd2a2f70ca76f862faa3a7c5bee546a67beb53a1ed0ef7d0.jpg", + "image_caption": [ + "COCO" + ], + "image_footnote": [], + "bbox": [ + 617, + 156, + 728, + 242 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/9e37f264c6fba4e67044aa0bce9e3ee7cc4a3416751b1ccf134f277112e9f7a6.jpg", + "image_caption": [ + "Diffusion-GAN" + ], + "image_footnote": [], + "bbox": [ + 272, + 244, + 382, + 329 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/943fc5ca86f34081a895e86e759e04053730a83344509087c558dbc13ff9aad0.jpg", + "image_caption": [ + "StyleGAN2" + ], + "image_footnote": [], + "bbox": [ + 388, + 244, + 500, + 329 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/83d9e8f5b5175e4b859f3d3a120ac90a5d575b32bbc9550e813c73b4bd92c395.jpg", + "image_caption": [ + "DiT", + "Fig. 4: Examples of real and AI-generated images of different categories used in our experiments. Top: real images from LSUN, FFHQ, ImageNET and COCO. Bottom: generated images from DiffusionGAN, StyleGAN2, DiT and SDXL." + ], + "image_footnote": [], + "bbox": [ + 503, + 244, + 614, + 329 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/11001dc1a4c83fd4e760e098c0c48c0af1508c6ba61af9b46c97fd33aba88f7f.jpg", + "image_caption": [ + "SDXL" + ], + "image_footnote": [], + "bbox": [ + 617, + 244, + 728, + 329 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Following other papers [11, 43, 51] we measure performance using the area under the ROC curve (AUC) and the balanced accuracy. We also show the influence of the threshold selection on the performance.", + "bbox": [ + 212, + 426, + 784, + 472 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.2 Ablation Study", + "text_level": 1, + "bbox": [ + 214, + 494, + 387, + 510 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Features analysis. First, we want to provide a better insight into the role and importance of the features described in Section 3.4: $D^{(0)}$ (the 0-level coding cost gap), its slope $\\varDelta^{01} = D^{(0)} - D^{(1)}$ and their absolute values. To this end, we consider the set of real and synthetic (DALL-E 2, GLIDE, Midjourney, SDXL) images of the Synthbuster dataset [3]. 
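For clarity, the way these four statistics can be assembled from the per-level spatial averages is sketched below; variable names are ours and this is only an illustration of the definitions given in Section 3.4.

```python
def decision_statistics(nll, ent):
    # nll[l], ent[l]: spatially averaged NLL and entropy at levels l = 0, 1
    d0 = nll[0] - ent[0]       # level-0 coding cost gap D(0)
    d1 = nll[1] - ent[1]       # level-1 coding cost gap D(1)
    delta01 = d0 - d1          # slope of the gap from level 1 to level 0
    return d0, abs(d0), delta01, abs(delta01)
```

Thresholding any of these scalars, for instance the absolute slope, yields the final real-versus-synthetic decision.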
We note, in passing, that this dataset includes only uncompressed images, which dispels any possible doubt that our method exploits some JPEG compression bias between real and fake images [28]. Some selected scatter plots and graphs are shown in Fig.5. The rightmost box shows that encoding cost (NLL) and entropy ( $H$ ) alone are not very discriminating, even if computed at the more informative level 0 (high resolution). In contrast, their difference, the 0-level coding cost gap $D^{(0)}$ , seems to separate the different classes quite well (central box), in particular the real class (violet) from the others. Note that the level-1 gap (not shown) is not equally discriminating, and the level-2 gap, plotted on the $y$ axis, turns out to be essentially useless. In the third box we plot the empirical distributions of $D^{(0)}$ for the various classes. This representation makes the good separability of the classes further clear but also highlights an unexpected phenomenon: GLIDE images group mostly to the left of the real class, that is, they have a lower-than-expected coding cost. Although not in line with our initial hypotheses, this fact nevertheless represents an anomaly, which can be detected by thresholding the absolute value of the statistic rather than the statistic itself.", + "bbox": [ + 212, + 522, + 787, + 839 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 127 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/e0d1f90588a1d2d1fe7366bc64d08cf8c2465ccdafa765b49781168c5e54eaaf.jpg", + "image_caption": [ + "Fig. 5: Decision statistics. NLL and entropy by themselves are not discriminant (left). Their difference (center) is much more useful for detection, but only at high resolution, $D^{(0)}$ , while $D^{(1)}$ is less discriminant and $D^{(2)}$ basically useless. Right box shows histograms of $D^{(0)}$ for real and synthetic images. Note that for GLIDE, $D^{(0)}$ is negative, on the average. Good discrimination is still possible based on the absolute value." + ], + "image_footnote": [], + "bbox": [ + 233, + 143, + 767, + 294 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/9decf8c6e1180ab5e73dda0f803d59989cc177f1486b60612b4544c92cec3c53.jpg", + "image_caption": [ + "Fig. 6: AUC of proposed method as a function of decision statistic (see Section 3.4) and dataset of real images used to train the lossless encoder: Open Images, LAION, COCO, and their augmented versions $(^{*})$ . Synthetic test images are selected to match the corresponding real test images: ImageNet (top), and LAION (bottom)." + ], + "image_footnote": [], + "bbox": [ + 285, + 398, + 743, + 604 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Influence of the real class. To better understand the role of the real dataset used to train the lossless encoder, we perform an experiment in which we vary it. Along with the original encoder pre-trained on the Open Images dataset [40] (about 338k high-resolution images), we consider two other versions, trained from scratch on the LAION dataset [66] ( $\\simeq 117\\mathrm{k}$ ), and the COCO dataset [42] ( $\\simeq 106\\mathrm{k}$ ), respectively, using the same hyperparameters as [6]. Additionally, we consider versions (marked with *) trained on the same datasets, augmented with JPEG compressed images with quality between 80 and 100. 
We compute the performance in terms of AUC on two different datasets of synthetic and", + "bbox": [ + 212, + 703, + 787, + 840 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/b4ce44c507eaed16458689a225ce5cf10053c9720f21f8b78f2c58f1cb6c23ec.jpg", + "table_caption": [ + "Table 1: Reference methods. For each one we indicate the key idea, the datasets of real and synthetic images used for training with their sizes, whether or not augmentation is used, the test strategy." + ], + "table_footnote": [], + "table_body": "
<table><tr><th>Acronym [ref]</th><th>Idea/Approach</th><th>Training Real/Fake</th><th>Size(K)</th><th>Augment.</th><th>Test Strategy</th></tr>
<tr><td>Wang2020 [74]</td><td>High diversity</td><td>LSUN/ProGAN</td><td>360/360</td><td></td><td>global pooling</td></tr>
<tr><td>PatchFor. [7]</td><td>Patch-based</td><td>CelebA,FF/various</td><td>84/272</td><td></td><td>resizing</td></tr>
<tr><td>Liu2022 [43]</td><td>Noise-based</td><td>LSUN/ProGAN</td><td>360/360</td><td></td><td>global pooling</td></tr>
<tr><td>Corvi2023 [10]</td><td>No-downsampling</td><td>COCO,LSUN/Latent</td><td>180/180</td><td></td><td>global pooling</td></tr>
<tr><td>LGrad [72]</td><td>Gradient-based</td><td>LSUN/ProGAN</td><td>72/72</td><td></td><td>resizing</td></tr>
<tr><td>DIRE [75]</td><td>Inversion</td><td>LSUN-Bed/ADM</td><td>40/40</td><td></td><td>resizing</td></tr>
<tr><td>DE-FAKE [67]</td><td>Prompt-based</td><td>LSUN/Stable Diff.</td><td>20/20</td><td></td><td>resizing</td></tr>
<tr><td>Ojha2023 [51]</td><td>CLIP</td><td>LSUN/ProGAN</td><td>360/360</td><td></td><td>cropping</td></tr>
<tr><td>NPR [71]</td><td>Residual</td><td>LSUN/ProGAN</td><td>72/72</td><td></td><td>resizing</td></tr>
<tr><td>AEROBLADE [60]</td><td>AE rec. error</td><td>- / -</td><td>- / -</td><td></td><td>global distance</td></tr></table>
", + "bbox": [ + 217, + 198, + 784, + 364 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "real images, where this latter class comes from ImageNet [15] (Fig.6, top) or LAION [66] (Fig.6, bottom). We can observe that the best and more uniform results across the four decision statistics are obtained using $\\mathrm{COCO}^*$ , while training on Open Images guarantees good performance if the real class is LAION, but bad performance if it is ImageNet. Additional results are included in the supplementary material.", + "bbox": [ + 212, + 393, + 787, + 488 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "4.3 SoTA Comparison", + "text_level": 1, + "bbox": [ + 214, + 508, + 415, + 525 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "In our analysis we include only methods with code and/or pre-trained models publicly available on-line. Eventually, we included 7 CNN-based methods [7,10, 43, 71, 72, 74, 75], 2 CLIP-based methods [51, 67] and a training-free method [60]. A brief summary of these techniques is provided in Tab.1, while a more detailed description is given in the supplementary material. For a fair comparison we avoid testing on ProGAN [36] and Latent Diffusion [61], because a good number of these supervised methods were trained on datasets that include images from these generators. Even so, we have a total of 30 datasets for testing. Results are reported in Tab.2 in terms of AUC, with the best figure for each dataset highlighted in bold. Note that each row is characterized by the name of the generator (e.g., GauGAN) and by a single letter that recalls the set of real images used to train it: S for LSUN, F for FFHQ, I for ImageNet, C for COCO, L for LAION, R for RAISE. This detail allows us to study how the performance depends on the real dataset (but with synthetic images from the same generator and with semantic content aligned with real images).", + "bbox": [ + 212, + 537, + 787, + 763 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "First of all, we observe that for most reference methods the average AUC does not exceed $80\\%$ . Notable exceptions are the CLIP-based Ojha2023 (88.4%) and the CNN-based Corvi2023 (89.4%). Interestingly, some methods show very different performance when the real class changes. This may be due to JPEG bias as already suggested in [28, 60]. A deeper analysis on this point is presented", + "bbox": [ + 212, + 763, + 787, + 840 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 127 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/9c3960f79f7c3e54fec294eb9aeed9f165318827e6434af26452d9e3ed072df7.jpg", + "table_caption": [ + "Table 2: AUC for reference and proposed methods. Best score in bold with a $0.5\\%$ margin. S = LSUN, F = FFHQ, I = ImageNet, C = COCO, L = LAION, R = RAISE." + ], + "table_footnote": [], + "table_body": "
Real data | Wang2020 | PatchFor. | Liu2022 | Corvi2023 | LGrad | DIRE | DE-FAKE | Ojha2023 | NPR | AEROBLADE | Ours \\( D^{(0)} \\) | Ours \\( |D^{(0)}| \\) | Ours \\( \\Delta^{01} \\) | Ours \\( |\\Delta^{01}| \\)
C98.980.899.783.881.699.943.8100.89.155.199.899.899.999.999.799.799.799.799.799.799.799.799.7
GauGANC92.785.594.783.477.299.859.059.099.686.851.992.388.695.992.388.695.992.692.692.699.799.799.7
BigGANI94.7100.99.995.973.940.445.999.781.584.0100.100.100.100.100.100.100.100.100.100.100.100.100.
StarGANF98.183.899.789.199.858.339.196.7100.30.096.696.196.796.796.796.796.796.796.596.596.596.5
StyleGAN2S94.985.199.958.482.755.547.691.071.360.143.187.741.188.787.741.188.787.787.787.787.787.7
F
GigaGANI73.761.097.350.576.499.964.394.682.447.572.468.172.468.172.468.172.468.168.168.168.168.1
C79.584.099.690.976.799.987.997.695.580.696.594.094.096.797.396.797.396.797.396.797.396.7
Diff.GANS89.892.699.596.699.549.844.897.4100.43.999.499.499.499.499.499.499.499.599.599.599.599.5
GALIPC89.798.294.387.756.7100.75.698.690.765.098.496.399.799.799.799.799.799.799.799.799.799.7
DALL-EL66.471.795.098.395.299.855.997.399.524.199.295.898.298.298.298.298.298.298.298.298.298.2
DDPMF31.698.422.8100.9.823.150.577.792.481.776.625.293.879.676.625.293.879.679.679.679.679.6
ADMS67.667.670.680.381.152.037.488.294.153.149.553.569.463.159.563.169.463.169.463.171.071.0
I61.081.994.481.172.799.569.185.378.580.387.890.595.395.395.395.395.395.395.392.192.192.1
GLIDEC64.897.496.397.281.599.992.488.895.498.047.888.588.588.588.588.588.588.588.588.588.588.5
R32.295.056.686.550.642.992.272.863.387.723.289.451.165.165.165.165.165.165.165.165.165.1
L72.674.190.886.990.3100.60.295.399.868.754.584.284.284.284.284.284.284.284.284.284.284.2
DiTI58.683.188.0100.56.299.687.477.878.499.889.484.384.384.384.384.384.384.384.384.384.384.3
Stable D. 1.4C68.286.195.3100.54.799.993.397.976.599.848.474.854.674.854.654.654.654.654.654.671.471.4
R37.961.873.4100.50.037.688.087.743.096.999.499.498.798.797.097.097.097.097.097.097.297.2
Stable D. 2C56.578.694.2100.62.899.397.982.389.399.983.090.384.584.584.584.584.584.584.584.584.584.5
R50.238.734.8100.41.435.580.789.544.097.498.596.895.895.895.895.895.895.895.895.895.895.8
SDXLC83.860.889.3100.89.399.594.080.099.387.999.999.999.999.999.999.999.999.999.999.999.999.9
R54.368.431.1100.57.247.184.485.176.769.7100.100.100.100.100.100.99.199.299.299.299.299.2
Deep.-IFC78.062.772.299.968.898.996.992.991.681.991.782.388.488.488.488.488.488.488.488.479.479.4
DALL-E 2C88.552.498.988.278.699.980.697.190.059.3100.100.100.100.100.100.100.100.100.99.999.9
R64.841.970.469.458.644.770.995.239.532.8100.100.100.100.100.100.100.100.100.100.100.
DALL-E 3C65.047.399.5100.88.499.996.286.497.799.799.799.799.598.398.398.398.398.398.398.2
R10.952.70.260.837.947.692.436.448.748.379.166.778.078.178.178.178.178.178.178.1
MidjourneyR40.257.840.7100.56.351.078.166.277.099.099.799.398.598.598.598.598.598.598.598.5
Adobe FireflyR84.849.411.898.040.657.481.497.532.152.873.641.280.880.4
AVG68.373.377.089.468.274.672.988.480.171.283.386.488.888.890.0
", + "bbox": [ + 243, + 184, + 759, + 551 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "in the supplementary material. The proposed zero-shot approach goes above $80\\%$ with all decision statistics, reaching the top value of $90.0\\%$ when $|\\varDelta^{01}|$ is used. Obviously, this is a very good result, but what makes it especially valuable is the absence of any dependence on the generators' models. This point is further stressed by the fact that the AUC remains extremely stable across all test sets, with a minimum of $65.1\\%$ on GLIDE-R. On the contrary, the best competitor, Corvi2023, has a long list of top results but also some very poor ones, suggesting a certain instability, likely due to the presence/absence of specific artifacts in the test images and, ultimately, the risk of not adapting to newly conceived models. We also want to draw the reader's attention to the already mentioned case of GLIDE and to the fact that the proposed method exhibits wildly different results with different decision statistics. In particular, with $|D^{(0)}|$ the AUC is $89.4\\%$ as opposed to the already mentioned $65.1\\%$ with $|\\varDelta^{01}|$ . This suggests there may be better ways to exploit the basic $\\mathrm{NLL}^{(l)}$ and $H^{(l)}$ , possibly jointly at all levels, to synthesize a better and more stable decision statistic.", + "bbox": [ + 217, + 582, + 785, + 808 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Finally, in Fig.7, we report the accuracy as a function of the decision threshold for the best methods. A separate curve is shown for each real image dataset by", + "bbox": [ + 215, + 809, + 784, + 839 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/4e21d0031c615837cb8a5a64ab07a2ce4aa27497ada7559bbcb459410c5ad7c3.jpg", + "image_caption": [ + "Fig. 7: Balanced accuracy as a function of the detection threshold. For each dataset of real images, we average accuracy over all associated synthetic generators. The dotted vertical line indicates the global optimal threshold and the $\\times$ symbol the corresponding accuracy. Note that only for the proposed method all peaks are very close, indicating the presence of a single threshold. Charts for other methods are reported in the Suppl."
+ ], + "image_footnote": [], + "bbox": [ + 222, + 143, + 359, + 226 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/0e71efdcbd0b70bafef580e3faab45897e7b5dda058041bfe72d36967d5b3a51.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 364, + 143, + 500, + 226 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/4ee059c975ba3f9c60be3ad22d7d0186c303ed6ae9a73cf9cec89ad584c1662e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 143, + 640, + 226 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/c16b149cb21ba2a87c32a0e78158ea1361ad650288d9756676f79ea09e97e8e5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 643, + 143, + 782, + 226 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/8b60c82f1664d71b2d47eca0399985d53656884a7ba43874bcc073cab300070c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 303, + 229, + 697, + 244 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "averaging over the associated synthetic generators. Unlike AUC, the accuracy critically depends on the selection of a good threshold and some calibration data may be needed for this purpose. Note that only for the proposed method there is a single good threshold that ensures near-optimal accuracy for all datasets.", + "bbox": [ + 212, + 354, + 787, + 415 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "4.4 Limitations", + "text_level": 1, + "bbox": [ + 214, + 434, + 356, + 449 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Our work was developed to detect whether an image has been fully generated and not to detect local manipulations. However, it could be easily extended to accomplish this task since we already compute a map of local pixel-wise statistics. Furthermore, our approach relies on a model of the real class learned by the encoder. If real images do not satisfy this model, the approach may not perform correctly. For example, if images are highly compressed or resized (as is the case on the web), statistical analysis may not be reliable.", + "bbox": [ + 212, + 455, + 787, + 564 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 214, + 584, + 359, + 599 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We introduced a novel zero-shot forensic detector to distinguish AI-generated images from real ones. Unlike most current methods, our approach does not require fake images during training, which ensures generalization to yet unknown generative models. The idea is to exploit an implicit model of real images and classify off-model images as synthetic. To this end, we leverage an appropriate lossless encoder, trained only on real images, that can predict the probability distribution of each pixel given its context. Synthetic images are expected to not respect this distribution, thus revealing their artificial nature. Our experiments show that the proposed detector is consistently competitive with detectors trained in supervised modality, and outperforms them in terms of generalization ability. We believe that our approach is an important stepping stone towards effective forensic tools that can operate without relying on domain- or method-specific training data. 
Future work will focus on making the method robust to the most common forms of image impairment, so as to make it suitable for in the wild application.", + "bbox": [ + 212, + 613, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 377, + 127 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgments. We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding by the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093.", + "bbox": [ + 212, + 146, + 787, + 328 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 215, + 353, + 321, + 369 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Albright, M., McCloskey, S.: Source Generator Attribution via Inversion. In: CVPR Workshop. pp. 96-103 (2019)", + "2. Amoroso, R., Morelli, D., Cornia, M., Baraldi, L., Del Bimbo, A., Cucchiara, R.: Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images. ACM Trans. Multimedia Comput. Commun. Appl. (2024)", + "3. Bammey, Q.: Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing (2023)", + "4. Boháček, M., Farid, H.: A geometric and photometric exploration of GAN and Diffusion synthesized faces. In: CVPR Workshop. pp. 874--883 (2023)", + "5. Brock, A., Donahue, J., Simonyan, K.: Large Scale GAN Training for High Fidelity Natural Image Synthesis. In: ICLR (2018)", + "6. Cao, S., Wu, C.Y., Krahenbuhl, P.: Lossless Image Compression through SuperResolution. arXiv preprint arXiv:2004.02872v1 (2020)", + "7. Chai, L., Bau, D., Lim, S.N., Isola, P.: What Makes Fake Images Detectable? Understanding Properties that Generalize. In: ECCV. pp. 103-120 (2020)", + "8. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: CVPR. pp. 8789-8797 (2018)", + "9. Corvi, R., Cozzolino, D., Poggi, G., Nagano, K., Verdoliva, L.: Intriguing properties of synthetic images: from generative adversarial networks to diffusion models. In: CVPR Workshop. pp. 973-982 (2023)", + "0. Corvi, R., Cozzolino, D., Zingarini, G., Poggi, G., Nagano, K., Verdoliva, L.: On the detection of synthetic images generated by diffusion models. In: ICASSP. pp. 1-5 (2023)", + "1. Cozzolino, D., Poggi, G., Corvi, R., Nießner, M., Verdoliva, L.: Raising the Bar of AI-generated Image Detection with CLIP. In: CVPR Workshop. pp. 4356-4366 (2024)", + "12. 
Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., Verdoliva, L.: Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510 (2018)", + "13. Dang-Nguyen, D.T., Pasquini, C., Conotter, V., Boato, G.: RAISE: A Raw Images Dataset for Digital Image Forensics. In: ACM MMSys. p. 219-224 (2015)" + ], + "bbox": [ + 218, + 388, + 784, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "14. Dayma, B., Patil, S., Cuenca, P., Saifullah, K., Abraham, T., Lé Khac, P., Melas, L., Ghosh, R.: DALL-E Mini (2021). https://doi.org/10.5281/zenodo.5146400, https://github.com/borisdayma/dalle-mini", + "15. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)", + "16. Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. NeurIPS 34, 8780-8794 (2021)", + "17. Du, M., Pentyala, S., Li, Y., Hu, X.: Towards Generalizable Deepfake Detection with Locality-Aware AutoEncoder. In: CIKM. pp. 325--334 (2020)", + "18. Durall, R., Keuper, M., Keuper, J.: Watch Your Up-Convolution: CNN Based Generative Deep Neural Networks Are Failing to Reproduce Spectral Distributions. In: CVPR. pp. 7890-7899 (2020)", + "19. Epstein, D.C., Jain, I., Wang, O., Zhang, R.: Online Detection of AI-Generated Images. In: ICCV Workshop. pp. 382-392 (2023)", + "20. Epstein, Z., Hertzmann, A., Herman, L., Mahari, R., Frank, M.R., Groh, M., Schroeder, H., Akten, A.S.M., Fjeld, J., Farid, H., Leach, N., Pentland, A.S., Russakovsky, O.: Art and the science of generative AI: A deeper dive. arXiv preprint arXiv:2306.04141 (2023)", + "21. Farid, H.: Lighting (in) consistency of paint by text. arXiv preprint arXiv:2207.13744 (2022)", + "22. Farid, H.: Perspective (in) consistency of paint by text. arXiv preprint arXiv:2206.14617 (2022)", + "23. Firefly, A.: https://www.adobe.com/sensei/generative-ai/firefly.html (2023)", + "24. Frank, J., Eisenhofer, T., Schonherr, L., Fischer, A., Kolossa, D., Holz, T.: Leveraging Frequency Analysis for Deep Fake Image Recognition. In: ICML. pp. 3247-3258 (2020)", + "25. Gehrmann, S., Strobelt, H., Rush, A.M.: GLTR: Statistical detection and visualization of generated text. In: 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pp. 111-116 (2019)", + "26. Ghosal, S.S., Chakraborty, S., Geiping, J., Huang, F., Manocha, D., Bedi, A.S.: Towards possibilities & impossibilities of AI-generated text detection: A survey. arXiv preprint arXiv:2310.15264 (2023)", + "27. Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdolina, L.: Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In: ICME. pp. 1-6 (2021)", + "28. Grommelt, P., Weiss, L., Pfreundt, F.J., Keuper, J.: Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets. arXiv preprint arXiv:2403.17608 (2024)", + "29. Hans, A., Schwarzschild, A., Cherepanova, V., Kazemi, H., Saha, A., Goldblum, M., Geiping, J., Goldstein, T.: Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. In: ICML (2024)", + "30. 
He, Z., Chen, P.Y., Ho, T.Y.: RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. arXiv preprint arXiv:2405.20112 (2024)", + "31. Heikkilä, M.: This artist is dominating AI-generated art. and he's not happy about it. MIT Technology Review (2022)", + "32. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. NeurIPS 33, 6840-6851 (2020)", + "33. Jeon, H., Bang, Y.O., Kim, J., Woo, S.: T-GD: Transferable GAN-generated Images Detection Framework. In: ICML. vol. 119, pp. 4746-4761 (2020)" + ], + "bbox": [ + 215, + 146, + 784, + 840 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 375, + 127 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "34. Jeong, Y., Kim, D., Ro, Y., Kim, P., Choi, J.: Fingerprint Net: Synthesized Fingerprints for Generated Image Detection. In: ECCV. pp. 76-94 (2022)", + "35. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: CVPR. pp. 10124-10134 (2023)", + "36. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. In: ICLR (2018)", + "37. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR. pp. 4401-4410 (2019)", + "38. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: CVPR. pp. 8110-8119 (2020)", + "39. Konstantinov, M., Shonenkov, A., Bakshandaeva, D., Schuhmann, C., Ivanova, K., Klokova, N.: https://www deepfloyd.ai/deepfloyd-if (2023)", + "40. Krasin, I., Duerig, T., Alldrin, N., Ferrari, V., Abu-El-Haija, S., Kuznetsova, A., Rom, H., Uijlings, J., Popov, S., Veit, A., et al.: OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages (2017)", + "41. Lin, L., Gupta, N., Zhang, Y., Ren, H., Liu, C.H., Ding, F., Wang, X., Li, X., Verdoliva, L., Hu, S.: Detecting multimedia generated by large ai models: A survey. arXiv preprint arXiv:2204.06125 (2024)", + "42. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755 (2014)", + "43. Liu, B., Yang, F., Bi, X., Xiao, B., Li, W., Gao, X.: Detecting generated images by real images. In: ECCV. pp. 95-110 (2022)", + "44. Liu, H., Tan, Z., Tan, C., Wei, Y., Wang, J., Zhao, Y.: Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection. In: CVPR. pp. 10770-10780 (2024)", + "45. Mahajan, S., Roth, S.: PixelPyramids: Exact Inference Models from Lossless Image Pyramids. In: ICCV. pp. 6639-6648 (2021)", + "46. Mandelli, S., Bonettini, N., Bestagini, P., Tubaro, S.: Detecting GAN-generated Images by Orthogonal Training of Multiple CNNs. In: ICIP. pp. 3091-3095 (2022)", + "47. Marra, F., Saltori, C., Boato, G., Verdoliva, L.: Incremental learning for the detection and classification of GAN-generated images. In: WIFS. pp. 1-6 (2019)", + "48. Midjourney: https://www.midjourney.com/home (2023)", + "49. Mitchell, E., Lee, Y., Khazatsky, A., Manning, C.D., Finn, C.: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. In: ICML. pp. 
24950-24962 (2023)", + "50. Nichol, A.Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., Chen, M.: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diff. Models. In: ICML. pp. 16784-16804 (2022)", + "51. Ojha, U., Li, Y., Lee, Y.J.: Towards universal fake image detectors that generalize across generative models. In: CVPR. pp. 24480-24489 (2023)", + "52. OpenAI: https://openai.com/dall-e-3 (2023)", + "53. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: CVPR. pp. 2337-2346 (2019)", + "54. Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: ICCV. pp. 4195-4205 (2023)", + "55. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: SDXL: Improving latent diffusion models for high-resolution image synthesis. In: ICLR (2024)" + ], + "bbox": [ + 212, + 146, + 784, + 839 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "56. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML. pp. 8748-8763 (2021)", + "57. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022)", + "58. Reed, S.E., van den Oord, A., Kalchbrenner, N., Colmenarejo, S.G., Wang, Z., Chen, Y., Belov, D., de Freitas, N.: Parallel multiscale autoregressive density estimation. In: ICML. pp. 2912-2921 (2017)", + "59. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the detection of diffusion model deepfakes. In: VISAPP. pp. 446-457 (2024)", + "60. Ricker, J., Lukovnikov, D., Fischer, A.: AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error. In: CVPR. pp. 9130-9140 (2024)", + "61. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR. pp. 10684-10695 (2022)", + "62. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/CompVis/stable-diffusion (2022)", + "63. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/Stability-AI/stablediffusion (2022)", + "64. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M.: Faceforensics++: Learning to detect manipulated facial images. In: ICCV. pp. 1-11 (2019)", + "65. Sarkar, A., Mai, H., Mahapatra, A., Lazebnik, S., Forsyth, D.A., Bhattad, A.: Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry... for now. In: CVPR. pp. 28140-28149 (2024)", + "66. Schuhmann, C., Kaczmarczyk, R., Komatsuzaki, A., Katta, A., Vencu, R., Beaumont, R., Jitsev, J., Coombes, T., Mullis, C.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. In: NeurIPS (2021)", + "67. Sha, Z., Li, Z., Yu, N., Zhang, Y.: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. In: ACM SIGSAC. pp. 3418-3432 (2023)", + "68. 
Sinitsa, S., Fried, O.: Deep Image Fingerprint: Towards Low Budget Synthetic Image Detection and Model Lineage Analysis. In: WACV. pp. 4067-4076 (2024)", + "69. Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J.W., Kreps, S., et al.: Release Strategies and the Social Impacts of Language Models. arXiv preprint arXiv:1908.09203 (2019)", + "70. Su, J., Zhuo, T.Y., Wang, D., Nakov, P.: DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text. In: Conference on Empirical Methods in Natural Language Processing (2023)", + "71. Tan, C., Zhao, Y., Wei, S., Gu, G., Liu, P., Wei, Y.: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. In: CVPR. pp. 28130-28139 (2024)", + "72. Tan, C., Zhao, Y., Wei, S., Gu, G., Wei, Y.: Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection. In: CVPR. pp. 12105-12114 (2023)", + "73. Tao, M., Bao, B.K., Tang, H., Xu, C.: Galip: Generative adversarial clips for text-to-image synthesis. In: CVPR. pp. 14214-14223 (2023)", + "74. Wang, S.Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: CNN-generated images are surprisingly easy to spot... for now. In: CVPR. pp. 8692-8701 (2020)" + ], + "bbox": [ + 215, + 146, + 785, + 840 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Cozzolino et al.", + "bbox": [ + 271, + 114, + 375, + 127 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "75. Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. ICCV pp. 22445-22455 (2023)", + "76. Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. In: ICLR (2023)", + "77. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)", + "78. Zhang, X., Karaman, S., Chang, S.F.: Detecting and Simulating Artifacts in GAN Fake Images. In: WIFS. pp. 1-6 (2019)", + "79. Zhong, N., Xu, Y., Qian, Z., Zhang, X.: Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection. 
arXiv preprint arXiv:2311.12397v1 (2023)" + ], + "bbox": [ + 212, + 146, + 785, + 313 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "Zero-Shot Detection of AI-Generated Images", + "bbox": [ + 431, + 114, + 730, + 128 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 18 + } +] \ No newline at end of file diff --git a/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_model.json b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_model.json new file mode 100644 index 0000000000000000000000000000000000000000..26035aea80060637c5ac64c71a974a3b32f6f977 --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_model.json @@ -0,0 +1,2636 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.244, + 0.142, + 0.759, + 0.163 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "text", + "bbox": [ + 0.263, + 0.189, + 0.741, + 0.22 + ], + "angle": 0, + "content": "Davide Cozzolino\\(^{1}\\), Giovanni Poggi\\(^{1}\\), Matthias Nießner\\(^{2}\\), and Luisa Verdoliva\\(^{1,2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.232, + 0.684, + 0.248 + ], + "angle": 0, + "content": "1 University Federico II of Naples, 80125 Naples, Italy" + }, + { + "type": "text", + "bbox": [ + 0.272, + 0.248, + 0.732, + 0.275 + ], + "angle": 0, + "content": "2 Technical University of Munich, 85748 Garching, Germany {davide.cozzolino, poggi, verdoliv}@unina.it, niessner@tum.de" + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.304, + 0.744, + 0.61 + ], + "angle": 0, + "content": "Abstract. Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with more and more capabilities and unprecedented realism. New versions of many commercial tools, such as DALL-E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models it achieves an average improvement of more than \\(3\\%\\) over the SoTA in terms of accuracy. Code is available at https://grip-unina.github.io/ZED/." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.631, + 0.376, + 0.646 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.659, + 0.788, + 0.842 + ], + "angle": 0, + "content": "The quality of AI-generated images has improved tremendously in recent years, to the point where they are virtually indistinguishable from real images upon visual inspection. In addition, the latest generators are widely available online and allow easy creation and retouching of images based on simple textual prompts. All this opens the way to endless application opportunities in a variety of fields, from the creative arts to industries of all kinds. However, on the flip side, such tools can be also used for malicious purposes, thus posing serious threats to our society. For example, pre-trained generators can be easily optimized to generate fake works by a specific artist [31], or used to orchestrate effective, large-scale disinformation campaigns to influence public opinion in advanced democracies [20]. These immediate risks create an urgent need for reliable and automated detection of AI-generated images [41]." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.228, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.116, + 0.378, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "image", + "bbox": [ + 0.224, + 0.147, + 0.585, + 0.257 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.617, + 0.146, + 0.782, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.271, + 0.789, + 0.368 + ], + "angle": 0, + "content": "Fig. 1: ZED leverages the intrinsic model of real images learned by a state-of-the-art lossless image coder. For real images, the model is correct and the actual coding cost is close its expected value. Synthetic images have different statistics than real images, so they \"surprise\" the encoder, and the actual coding cost differs significantly from its expected value. This is evident from the graphic on the right that shows how the coding cost gap increases for synthetic images much more than for real ones when predicting high resolution details from low resolution data." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.385, + 0.789, + 0.672 + ], + "angle": 0, + "content": "Until very recently, supervised learning paradigms dominated the image forensics community, with deep models trained on large datasets of real and fake images [64]. These approaches, however, are tailored to specific domains and are difficult to generalize to unseen deepfake samples. In the seminal paper by Wang et al. [74], it is shown that a simple detector trained only on ProGAN images from 20 different categories generalizes well to other images created by different generative adversarial networks (GAN) thanks to suitable augmentation. However, performance still suffers on images generated by prompt-driven diffusion models (DM). Similarly, a detector suitably trained on Latent DM images performs well on all other DM images but fails to generalize properly on GAN images [10]. To reduce the dependence on training data, recent works [2, 11, 51, 67] rely on general-purpose features extracted by pre-trained visual-language models, such as CLIP (Contrastive Language-Image Pre-Training) [56]. Despite the good performance, these methods still depend on the choice of the training dataset. 
A recent trend to improve generalization is based on few-shot methods [12, 17, 33] which can partially solve the problem, but still require some prior knowledge of the target models, even if limited to a few images. With this work we make a step further and develop an approach that is not influenced at all by newer and previously unseen generative models." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.79, + 0.842 + ], + "angle": 0, + "content": "To this end, we propose a zero-shot detection method that only requires real images for learning their underlying distribution. Our key idea is to use lossless coding and a multi-resolution prediction strategy for computing conditional distributions of all image pixels at three different levels of resolution. Given such distributions, we compute statistics related to the actual and expected coding cost. If the image is coherent with the predicted distribution (no surprise), then there is no mismatch and the image under analysis is labelled as real. We expect synthetic images to be characterized by a higher coding cost under the distribution of real images (see Fig. 1). Based on this intuition, we design discriminative features that measure how well the image under test fits the model of real images embedded in the encoder. Even by using a single feature, we can obtain" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.146, + 0.784, + 0.176 + ], + "angle": 0, + "content": "significant performance above \\(95\\%\\) in terms of AUC for several recent models, such as DALL·E, Midjourney, and SDXL." + }, + { + "type": "text", + "bbox": [ + 0.239, + 0.178, + 0.725, + 0.193 + ], + "angle": 0, + "content": "In summary, the main contributions of this paper are the following:" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.206, + 0.784, + 0.249 + ], + "angle": 0, + "content": "- we propose a zero-shot detector of artificially generated images: no fake images are necessary for training which guarantees independence from any specific generation method;" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.252, + 0.784, + 0.28 + ], + "angle": 0, + "content": "- this is the first work that exploits an implicit model of real images, learnt for lossless encoding to address image forensics task;" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.283, + 0.784, + 0.326 + ], + "angle": 0, + "content": "- our experiments show on a wide variety of generative models that even using a single feature the proposed detector provides state-of-the-art results \\((+3.4\\%\\) in terms of accuracy)." + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.206, + 0.784, + 0.326 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.355, + 0.382, + 0.37 + ], + "angle": 0, + "content": "2 Related work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.379, + 0.787, + 0.59 + ], + "angle": 0, + "content": "Supervised learning. The problem of distinguishing synthetic images from real ones is commonly formulated as a binary classification task. State-of-the-art methods explicitly or implicitly exploit forensic artifacts by leveraging a large amount of real and generated images. 
Some of them rely on semantic flaws, such as face asymmetries [4] or incorrect perspective, lighting, shadows [21, 22, 65]. However, technology advances very quickly and such errors will very likely disappear in next-generation tools. Therefore, most methods focus on low-level and inconspicuous artifacts [9, 18]. Major efforts have been made to prevent conventional supervised detectors from overfitting the training data. Popular recipes include using datasets as varied as possible with intense augmentation [74], pre-training models on large general-purpose datasets [46], preserving fine-grain details of images [7, 27], exploiting high-frequency artifacts in the spatial [43, 68, 72] or Fourier domain [18, 24, 78], leveraging inter-pixel correlation discrepancies [71, 79], adopting inversion techniques [1, 75]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.592, + 0.787, + 0.727 + ], + "angle": 0, + "content": "With the advent of diffusion models that presents significant architectural differences with GANs, the importance to design methods that work equally well on known and unknown sources became even more evident [10]. An important finding was the increased generalization that could be achieved using pre-trained large vision-language models, such as CLIP-ViT [51]. In this case only a lightweight linear classifier is trained on top of these features to adapt to the forensic task. Very good performance is obtained on DMs even if the network was trained only on GANs. Other methods also show the potential of such approach [2, 11, 59], sometimes including multimodal features [44, 67]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.728, + 0.787, + 0.849 + ], + "angle": 0, + "content": "Some supervised methods assume to have only real images available and create the synthetic images needed for training by simulating the artifacts introduced by a generator, for example by passing real images through an autoencoder [24,34,78]. The more generative architectures are simulated, the more effective is the detector. Of course, the performance degrades on images generated by an architecture not considered in the simulation phase. Differently from all these methods our approach does not require collecting or generating synthetic images thus avoiding any type of dependence on this class." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.378, + 0.129 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.299 + ], + "angle": 0, + "content": "Few-shot/incremental learning. A significant step towards improved generalization is the use of few-shot or incremental learning strategies [12, 17, 33, 47]. Along this path, a recent work [19] proposes to regularly re-train a detector on new synthetic generators in the very same temporal order of their release, as in a real-world scenario. Results show a good generalization to unseen models, but only as long as the architecture of new generators is similar to that of old ones. Although few-shot methods represent an important progress in reducing the dependence on training data, the ultimate goal is to remove this dependence entirely to ensure maximum generalization. In pursuit of this goal, in this work we propose a truly zero-shot detector." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.311, + 0.789, + 0.614 + ], + "angle": 0, + "content": "Zero-shot learning. Only a few very recent papers avoid training on synthetic data altogether. A solution was proposed in [60] based on the observation that synthetic images are reconstructed more accurately than real images by a latent DM autoencoder. The main limitation is that the method only reliably detects images generated by latent diffusion models. The method in [30], instead, exploits the fact that small perturbations of [real/synthetic] images correspond to [small/large] variations in the embedding space of a pre-trained large model. Differently from these strategies our work takes inspiration from some interesting proposals that have recently appeared for synthetic text detection [25,29,49,69]. They exploit the fact that LLMs (Large Language Models) work by generating the probability distribution of the next token given the previous ones. In the generation phase, new tokens are sequentially added to a sentence based on these distributions. In the analysis phase, one can replicate the process for a given sentence under test and measure how well the actual tokens match the predicted ones. A good match suggests that the sentence was indeed generated by an LLM. Although inspired by these methods, our zero-shot synthetic image detector differs from them because it leverages a model of real images and does not depend in any way on synthetic data or generators. Moreover, to build the model we take advantage of the remarkable field-proved ability of lossless encoders to accurately describe pixels based on their context." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.637, + 0.331, + 0.654 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.67, + 0.363, + 0.687 + ], + "angle": 0, + "content": "3.1 Background" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.696, + 0.788, + 0.785 + ], + "angle": 0, + "content": "Here we provide some background on zero-shot methods that leverage large pre-trained language models for machine-generated text detection. They exploit the native functionality of these models to provide next-token predictions [29]. Before a string of characters \\( s \\) can be processed by a language model, it must be parsed into a sequence of tokens (mostly words). The tokenizer \\( T \\) outputs a list of indices" + }, + { + "type": "equation", + "bbox": [ + 0.413, + 0.787, + 0.786, + 0.804 + ], + "angle": 0, + "content": "\\[\nT: s \\rightarrow \\left\\{x _ {0}, x _ {1}, \\dots , x _ {L} \\right\\}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.842 + ], + "angle": 0, + "content": "where \\( x_{i} \\in \\{1, \\dots, n\\} \\) is the index of the \\( i \\)-th token of the sequence, addressing a size- \\( n \\) vocabulary of tokens. The language model operates by predicting the next" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.784, + 0.126 + ], + "angle": 0, + "content": "5" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.223 + ], + "angle": 0, + "content": "index-token given the list of previous ones, thereby allowing for the generation of a full sentence given just a short prompt. Actually, language models output more information than just the index of the most likely token. 
Given the list of previous indices \\( X_{i} = \\{x_{0},\\ldots ,x_{i - 1}\\} \\), they provide the probability of all possible values of the current one, that is, \\( P(x_{i} = k|X_{i}) \\), for \\( k = 1,\\dots ,n \\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.223, + 0.784, + 0.299 + ], + "angle": 0, + "content": "The idea is to exploit this functionality to measure the conformity of the string under analysis to the LLM intrinsic model of language. That is, these methods try to answer the question \"How likely is it that this sentence was generated by my LLM?\" Hence they compute (for free) the likelihood of the given list of indices under the probability distribution learned by the LLM" + }, + { + "type": "equation", + "bbox": [ + 0.217, + 0.31, + 0.784, + 0.365 + ], + "angle": 0, + "content": "\\[\nP \\left(x _ {0}, \\dots , x _ {L}\\right) = P \\left(x _ {0}\\right) \\cdot P \\left(x _ {1} \\mid x _ {0}\\right) \\cdot \\dots \\cdot P \\left(x _ {L} \\mid x _ {0}, \\dots , x _ {L - 1}\\right) = P \\left(x _ {0}\\right) \\prod_ {i = 1} ^ {L} P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.365, + 0.784, + 0.396 + ], + "angle": 0, + "content": "In practice, the negative log-likelihood (also called log-perplexity) is computed instead, that is (neglecting \\( x_0 \\))" + }, + { + "type": "equation", + "bbox": [ + 0.407, + 0.407, + 0.784, + 0.448 + ], + "angle": 0, + "content": "\\[\n\\mathrm {N L L} = - \\sum_ {i = 1} ^ {L} \\log P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.459, + 0.785, + 0.565 + ], + "angle": 0, + "content": "If the \\(i\\)-th observed index \\(x_{i}\\) was very likely to come after the previous ones, namely, it is not surprising, its contribution to the NLL is close to 0. On the contrary, if it was unlikely to appear, given the previous ones (an anomaly) it impacts significantly on the NLL. Overall, a sequence with low NLL is likely to have been generated by the LLM, and will be therefore detected as synthetic. Of course, this basic description is only meant to convey the general concepts, the reader is referred to the literature [26] for more details." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.588, + 0.436, + 0.604 + ], + "angle": 0, + "content": "3.2 From Text to Images" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.785, + 0.779 + ], + "angle": 0, + "content": "When we try to translate the above concepts into the realm of images, we run into a big problem: the most effective and popular image generation engines do not provide anything similar to the next token distribution observed in the case of LLMs. Indeed, there exist some autoregressive synthesis methods [45,58] that could be adapted to this task, but their generation approach is very different from those of the most popular GAN- and DM-based methods. Therefore in this work we change perspective or, better said, we now assume the correct one-class perspective, and look for a model of real images, rather than synthetic ones. Armed with such a model, we will be able to decide whether a given image is unsurprising, therefore real, or somewhat anomalous, therefore synthetic, regardless of the specific generation model used to create it." 
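For reference, the log-perplexity of Eq. (3) used by the machine-text detectors discussed above can be computed with any off-the-shelf causal language model. The sketch below assumes the Hugging Face transformers API and a generic GPT-2 checkpoint purely for illustration; it is not part of the detector proposed in this paper.

```python
# Minimal sketch of Eq. (3): negative log-likelihood of a sentence under a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sentence_nll(text: str, model, tokenizer) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids        # token indices x_0 ... x_L
    with torch.no_grad():
        logits = model(ids).logits                              # (1, num_tokens, vocab_size)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)       # log P( . | X_i) at each position
    targets = ids[:, 1:]                                        # the tokens that actually follow
    nll_per_token = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return nll_per_token.sum().item()                           # Eq. (3), neglecting x_0

# illustrative usage with a small public checkpoint
# tok = AutoTokenizer.from_pretrained("gpt2")
# lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
# print(sentence_nll("The quick brown fox jumps over the lazy dog.", lm, tok))
```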
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.785, + 0.841 + ], + "angle": 0, + "content": "Now, the concepts of prediction, surprise, perplexity, along with information measure and entropy, are pervasive in the literature on image coding, part of information theory. Lossless image encoders typically include a predictor that, given a suitable context, estimates the value of the target pixel, and an entropy" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.228, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.377, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.224 + ], + "angle": 0, + "content": "encoder that efficiently represents prediction errors. Indeed, by analyzing the recent literature in the field we managed to single out a tool that perfectly suits our needs, the Super-Resolution based lossless Compressor (SReC) proposed by Cao et al. [6], which provides a computationally lightweight tool for predicting the distribution of image pixels at multiple resolution." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.246, + 0.625, + 0.261 + ], + "angle": 0, + "content": "3.3 Super-resolution based Lossless Compressor" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.271, + 0.789, + 0.409 + ], + "angle": 0, + "content": "Here we present a high-level description of SReC, focusing only on the aspects more relevant for our purposes. The interested reader is referred to the original paper for details [6]. The general idea is to train a neural network to predict the current pixel, \\( x_{i,j} \\), given a set of previously coded pixels, and encode the difference between the true pixel value and its prediction. However, this purely autoregressive formulation is highly impractical, as it implies long encoding/decoding times. Therefore, SReC uses a multi-resolution prediction strategy. A low-resolution version \\( y^{(1)} \\) of the original image \\( x^{(0)} \\) is built through \\( 2\\times 2 \\) average pooling, that is" + }, + { + "type": "equation", + "bbox": [ + 0.335, + 0.418, + 0.786, + 0.455 + ], + "angle": 0, + "content": "\\[\ny _ {i, j} ^ {(1)} = \\frac {x _ {2 i , 2 j} ^ {(0)} + x _ {2 i + 1 , 2 j} ^ {(0)} + x _ {2 i , 2 j + 1} ^ {(0)} + x _ {2 i + 1 , 2 j + 1} ^ {(0)}}{4} \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.462, + 0.788, + 0.54 + ], + "angle": 0, + "content": "Then, each four-pixel group of the high-resolution image is predicted based only on the low-resolution image, independent of other groups at the same resolution level, allowing for parallel processing and high-speed encoding. 
Since the fourth pixel of a group is known, given the other three and the low resolution image, the conditional joint distribution of the group reads" + }, + { + "type": "equation", + "bbox": [ + 0.236, + 0.549, + 0.786, + 0.594 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} P \\left(x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, x _ {2 i, 2 j + 1} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) = P \\left(x _ {2 i, 2 j} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) \\cdot P \\left(x _ {2 i + 1, 2 j} ^ {(0)} \\mid x _ {2 i, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}\\right) \\tag {5} \\\\ \\cdot P (x _ {2 i, 2 j + 1} ^ {(0)} | x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}) \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.607, + 0.788, + 0.675 + ], + "angle": 0, + "content": "where \\( Y_{i,j}^{(1)} \\) is the relevant context in the lower resolution image, that is a receptive field centered on \\( y_{i,j}^{(1)} \\). Each term in this factorization is estimated by a dedicated convolutional neural network (CNN). In particular, a parametric distribution is assumed, given by the mixture of \\( K \\) discrete logistic distributions," + }, + { + "type": "equation", + "bbox": [ + 0.38, + 0.686, + 0.786, + 0.728 + ], + "angle": 0, + "content": "\\[\nP (x | X) = \\sum_ {k = 1} ^ {K} w _ {k} \\operatorname {l o g i s t i c} \\left(x \\mid \\mu_ {k}, s _ {k}\\right) \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.733, + 0.788, + 0.843 + ], + "angle": 0, + "content": "where \\(\\mathrm{logistic}(x|\\mu, s) = \\sigma\\left(\\frac{x - \\mu + 0.5}{s}\\right) - \\sigma\\left(\\frac{x + \\mu + 0.5}{s}\\right)\\) is the difference of two sigmoid functions, with position parameter \\(\\mu\\) and scale parameter \\(s\\), and \\(K = 10\\) is always assumed. The CNN takes the context \\(X\\) of the pixel of interest as input and outputs the weights of the mixture together with the position and scale parameters of all logistics. In turn, these parameters allow one to compute the desired distribution. This whole process is replicated on two more lower-resolution scales, for a total of four levels, the lowest resolution, an \\(8 \\times 8\\) subsampled \"prompt\"" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.126 + ], + "angle": 0, + "content": "7" + }, + { + "type": "image_caption", + "bbox": [ + 0.248, + 0.203, + 0.262, + 0.222 + ], + "angle": 0, + "content": "Reale" + }, + { + "type": "image", + "bbox": [ + 0.264, + 0.149, + 0.77, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.365, + 0.788, + 0.45 + ], + "angle": 0, + "content": "Fig. 2: NLL and Entropy. We compute the spatial distribution of NLL and Entropy at three resolutions. For real images (top) the paired maps are very similar at all scales: when the uncertainty on a pixel (entropy) grows, also the coding cost (NLL) does. Therefore, the NLL-Entropy difference maps are all very dark. For synthetic images (bottom) NLL and Entropy maps are not always similar, because the model is not correct, and hence the difference maps are brighter, especially the high-resolution map." 
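The mixture of Eq. (6) is straightforward to evaluate directly. The sketch below uses the standard discretized-logistic form, in which each component assigns to the integer value x the probability mass of the unit-width bin around it; the component parameters are toy values, not outputs of the SReC networks.

```python
# Hedged sketch of the discretized logistic mixture of Eq. (6); not the authors' code.
import torch

def logistic_mixture_pmf(x: torch.Tensor, w: torch.Tensor, mu: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """P(x) = sum_k w_k [ sigma((x - mu_k + 0.5)/s_k) - sigma((x - mu_k - 0.5)/s_k) ]:
    the mass each logistic component puts on the bin centred on the integer x."""
    x = x.unsqueeze(-1)                               # broadcast against the K components
    upper = torch.sigmoid((x - mu + 0.5) / s)
    lower = torch.sigmoid((x - mu - 0.5) / s)
    return ((upper - lower) * w).sum(dim=-1)

# toy parameters for K = 3 components (the paper assumes K = 10)
w  = torch.tensor([0.5, 0.3, 0.2])
mu = torch.tensor([100.0, 120.0, 200.0])
s  = torch.tensor([8.0, 5.0, 20.0])
pmf = logistic_mixture_pmf(torch.arange(256.0), w, mu, s)   # distribution over one pixel value
print(float(pmf.sum()))                                     # close to 1, up to boundary mass
```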
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.461, + 0.788, + 0.537 + ], + "angle": 0, + "content": "image, coded in clear, and three higher resolution images, each one predicted from its lower resolution version. All networks are trained to minimize the cross entropy between the predicted model probability \\( P_{\\theta}(x) \\) and the empirical data distribution \\( P(x) \\) given by the training image dataset. We mention in passing that this loss is closely related to the log-perplexity considered for text synthesis." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.538, + 0.788, + 0.643 + ], + "angle": 0, + "content": "To summarize, SReC provides us with a lightweight tool for computing conditional distributions of all image pixels at three different levels of resolution, and therefore to compute all kinds of statistics that can expose the mismatch between a test image and the learned model. Considering that SReC achieves state-of-the-art performance in lossless image compression, one can also argue that the learned model of real images is very accurate. Given this tool, we can now design a zero-shot detector of synthetic images." + }, + { + "type": "title", + "bbox": [ + 0.214, + 0.664, + 0.525, + 0.679 + ], + "angle": 0, + "content": "3.4 Features and Decision Statistics" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.689, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Let \\( x \\in \\{0, \\ldots, 255\\}^{N \\times M \\times 3} \\) be the image under test. In our multi-resolution framework, this will be the highest-resolution version, \\( x^{(0)} = x \\). Through \\( 2 \\times 2 \\) average pooling, we generate a lower resolution version \\( y^{(1)} = \\mathrm{avpool}(x^{(0)}) \\), and then, through rounding, its integer-valued version \\( x^{(1)} = \\mathrm{round}(y^{(1)}) \\). The process is repeated, and eventually we have four integer versions of the image \\( \\{x^{(0)}, x^{(1)}, x^{(2)}, x^{(3)}\\} \\), together with three non-integer versions \\( \\{y^{(1)}, y^{(2)}, y^{(3)}\\} \\). In the context of lossless coding, the lowest resolution version, \\( x^{(3)} \\), must be sent in clear together with the rounding bits at levels 3, 2, and 1, but we mention this only for completeness and for a more compelling interpretation of results. The CNNs trained on real images provide the predicted probability distribution" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.228, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.378, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.147, + 0.787, + 0.281 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.294, + 0.789, + 0.379 + ], + "angle": 0, + "content": "Fig. 3: Extracting decision statistics. The full resolution image \\( x^{(0)} \\) is downsampled three times. The lowest-resolution version, \\( x^{(3)} \\), feeds the level-2 CNN, which outputs the probability distributions of level-2 pixels. These distributions, together with the actual level-2 pixels, are used to compute the level-2 coding cost \\( \\mathrm{NLL}^{(2)} \\) and its expected value \\( H^{(2)} \\). All these steps are then repeated for levels 1 and 0. Eventually, NLLs and entropies are combined to compute the decision statistics." 
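A minimal sketch of the multi-resolution pyramid described above, assuming a PyTorch image tensor of shape (3, N, M); the helper name and shapes are illustrative and do not come from the released code.

```python
# Repeated 2x2 average pooling (Eq. (4)) with rounding gives the integer versions x^(l)
# and the real-valued versions y^(l) used by the multi-resolution predictor.
import torch
import torch.nn.functional as F

def build_pyramid(x0: torch.Tensor, levels: int = 3):
    """x0: (3, N, M) image with values in {0,...,255}; returns the integer versions
    [x^(0), ..., x^(3)] and the averaged versions [y^(1), ..., y^(3)]."""
    xs, ys = [x0.float()], []
    for _ in range(levels):
        y = F.avg_pool2d(xs[-1].unsqueeze(0), kernel_size=2).squeeze(0)  # 2x2 average pooling
        ys.append(y)
        xs.append(torch.round(y))                                        # rounded, integer-valued x^(l+1)
    return xs, ys

# illustrative call on a random 256x256 RGB image
xs, ys = build_pyramid(torch.randint(0, 256, (3, 256, 256)).float())
```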
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.388, + 0.461, + 0.404 + ], + "angle": 0, + "content": "for all pixels\\(^3\\) of levels 0, 1, and 2" + }, + { + "type": "equation", + "bbox": [ + 0.441, + 0.411, + 0.787, + 0.434 + ], + "angle": 0, + "content": "\\[\nP \\left(x _ {i, j} ^ {(l)} = k \\mid X _ {i, j} ^ {(l)}\\right) \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.441, + 0.788, + 0.508 + ], + "angle": 0, + "content": "where \\( k \\in \\{0, \\dots, 255\\} \\) and \\( X_{i,j}^{(l)} \\) is the context for pixel \\( x_{i,j}^{(l)} \\), including a portion of the lower-resolution image \\( y^{(l+1)} \\) and possibly some same-resolution neighbors of the current pixel. Given the above distribution, we compute the negative log likelihood and the entropy at each pixel" + }, + { + "type": "equation", + "bbox": [ + 0.36, + 0.515, + 0.785, + 0.571 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathrm {N L L} _ {i, j} ^ {(l)} = - \\log P (x _ {i, j} ^ {(l)} | X _ {i, j} ^ {(l)}) \\\\ H _ {i, j} ^ {(l)} = - \\sum_ {k} P (k | X _ {i, j} ^ {(l)}) \\log P (k | X _ {i, j} ^ {(l)}) \\tag {8} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.578, + 0.788, + 0.655 + ], + "angle": 0, + "content": "These quantities are shown in Fig.2 for two sample images, real and synthetic. Then, through spatial averaging, we obtain the corresponding quantities for the images at all resolution levels \\(\\mathrm{NLL}^{(l)} = \\langle \\mathrm{NLL}_{i,j}^{(l)}\\rangle\\) and \\(H^{(l)} = \\langle H_{i,j}^{(l)}\\rangle\\), for \\(l = 0,1,2\\). These are the features associated by the system to input image and our decision statistics will be suitable combinations of them." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.656, + 0.789, + 0.807 + ], + "angle": 0, + "content": "Before going on, it is convenient to give a physical interpretation of these quantities. Each NLL can be interpreted as the actual coding cost for the corresponding image. While each entropy can be interpreted as the expected value of the coding cost given the context, when the image is coherent with the predicted distribution. In the presence of a mismatch, \\(\\mathrm{NLL} - H > 0\\), on the average, with a gap that increases with increasing distribution mismatch. Our fundamental assumption is that the trained CNNs provide a good model of real images, and synthetic images tend not to follow the same model. Therefore, we expect that synthetic images are characterized by higher coding cost, hence higher NLL, under this distribution. This observation would lead us to use the NLLs as decision" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.811, + 0.788, + 0.842 + ], + "angle": 0, + "content": "3 More precisely, all color components of all pixels, but to simplify notations, in the following we will neglect color and treat the image as if grayscale." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.298 + ], + "angle": 0, + "content": "statistics. However, the coding cost does not depend only on the distribution mismatch but also (predominantly) on the intrinsic information content of the image, measured by the entropy. 
A complex image, say a photo of a crowd, is more difficult to encode/describe than a smooth image, say a blue sky, no matter what model we use. Therefore, to get rid of this bias, we consider the coding cost gap, defined as the difference \\( D^{(l)} = \\mathrm{NLL}^{(l)} - H^{(l)} \\), as decision statistic. Hence, for each image, we have three basic decision statistics, one for each resolution level. It is worth observing that some forms of normalization are adopted for machine generated text detection as well [29, 49, 70]. A block diagram of our method is shown in Fig.3." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.299, + 0.789, + 0.526 + ], + "angle": 0, + "content": "A sample graph of the coding cost gap is shown in Fig.1, on the right. For real images and three families of synthetic images we report the average gap (solid line) plus/minus its standard deviation (colored band) for the various resolutions levels. Two important observations can be made. First of all, the level-0 coding cost gap, concerning the full resolution image, seems to be much more discriminant than the others. Moreover, the gap grows much faster for synthetic images than for real images when going from level 1 to level 0. Therefore, as decision statistics we will consider both \\( D^{(0)} \\) (the level-0 coding cost gap) and \\( \\Delta^{01} = D^{(0)} - D^{(1)} \\) (its slope). In addition, in preliminary experiments we observed that synthetic images are sometimes characterized by a coding cost much lower rather than much higher than expected, that is the NLL is much lower than the entropy. This is also an anomaly, which signals the likely synthetic nature of the image. Therefore, besides the above statistics we also consider their absolute values \\( |D^{(0)}| \\) and \\( |\\Delta^{(01)}| \\). These observations are supported by the sample graphical analysis shown in Fig.5 in the ablation study." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.546, + 0.325, + 0.561 + ], + "angle": 0, + "content": "4 Results" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.576, + 0.44, + 0.59 + ], + "angle": 0, + "content": "4.1 Datasets and Metrics" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.599, + 0.789, + 0.844 + ], + "angle": 0, + "content": "We benchmarked our model on a large variety of synthetic generators both GANs and DMs: GauGAN [53], BigGAN [5], StarGAN [8], StyleGAN2 [38], DiffusionGAN [76], GigaGAN [35], GALIP [73], DDPM [32], ADM [16], GLIDE [50], Stable Diffusion [62, 63], DiT [54], DeepFloyd-IF [39], Stable Diffusion XL [55], DALL-E [14], DALL-E 2 [57], DALL-E 3 [52], Midjourney V5 [48], and Adobe Firefly [23]. We collected images from publicly available datasets [3,10,51,74] and generated additional images as needed when they were not publicly available. We ensured that all datasets included pristine and synthetic images with similar semantic content, both compressed and uncompressed, to avoid any kind of bias (see Fig.4). For some synthetic generators we have multiple datasets, built on the basis of different real image datasets LSUN [77], FFHQ [37], ImageNet [15], COCO [42], LAION [66] and RAISE [13]. This is a fortunate circumstance: we kept them carefully separate as this allows us to analyze how the performance of a detector depends on the class of real images used in the synthesis phase. Overall we used a total of \\(29\\mathrm{k}\\) synthetic images and \\(6\\mathrm{k}\\) real images. More details on the generated and actual images are provided in the supplementary material." 
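Putting the pieces of Section 3.4 together, the decision statistics can be obtained from the per-pixel distributions of Eq. (7) as in the sketch below. It assumes those distributions are already available as dense arrays, which is an illustrative simplification rather than the authors' implementation.

```python
# Hedged sketch: per-pixel NLL and entropy (Eq. (8)), their spatial averages, and the
# coding-cost gaps D^(l) and Delta^01 used as decision statistics.
import numpy as np

def level_stats(probs: np.ndarray, pixels: np.ndarray):
    """probs: (H, W, 256) conditional pmfs from Eq. (7); pixels: (H, W) observed integer values."""
    eps = 1e-12
    nll_map = -np.log(np.take_along_axis(probs, pixels[..., None], axis=-1)[..., 0] + eps)
    ent_map = -np.sum(probs * np.log(probs + eps), axis=-1)      # entropy of each predicted pmf
    return nll_map.mean(), ent_map.mean()                        # spatial averages NLL^(l), H^(l)

def decision_statistics(probs_per_level, pixels_per_level):
    """Coding-cost gaps D^(l) = NLL^(l) - H^(l) for l = 0, 1, 2 and the derived scores."""
    gaps = []
    for probs, pixels in zip(probs_per_level, pixels_per_level):
        nll, h = level_stats(probs, pixels)
        gaps.append(nll - h)
    delta01 = gaps[0] - gaps[1]
    return {"D0": gaps[0], "absD0": abs(gaps[0]),
            "Delta01": delta01, "absDelta01": abs(delta01)}
```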
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.378, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "image_caption", + "bbox": [ + 0.315, + 0.148, + 0.342, + 0.156 + ], + "angle": 0, + "content": "LSUN" + }, + { + "type": "image", + "bbox": [ + 0.274, + 0.157, + 0.383, + 0.243 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.431, + 0.148, + 0.458, + 0.156 + ], + "angle": 0, + "content": "FFHQ" + }, + { + "type": "image", + "bbox": [ + 0.389, + 0.157, + 0.501, + 0.243 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.536, + 0.148, + 0.584, + 0.157 + ], + "angle": 0, + "content": "ImageNet" + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.157, + 0.615, + 0.243 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.66, + 0.148, + 0.689, + 0.156 + ], + "angle": 0, + "content": "COCO" + }, + { + "type": "image", + "bbox": [ + 0.619, + 0.157, + 0.729, + 0.243 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.274, + 0.245, + 0.383, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.295, + 0.331, + 0.362, + 0.34 + ], + "angle": 0, + "content": "Diffusion-GAN" + }, + { + "type": "image", + "bbox": [ + 0.389, + 0.245, + 0.5, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.419, + 0.331, + 0.47, + 0.34 + ], + "angle": 0, + "content": "StyleGAN2" + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.245, + 0.615, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.551, + 0.331, + 0.569, + 0.34 + ], + "angle": 0, + "content": "DiT" + }, + { + "type": "image", + "bbox": [ + 0.619, + 0.245, + 0.729, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.662, + 0.331, + 0.686, + 0.34 + ], + "angle": 0, + "content": "SDXL" + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.354, + 0.785, + 0.396 + ], + "angle": 0, + "content": "Fig. 4: Examples of real and AI-generated images of different categories used in our experiments. Top: real images from LSUN, FFHQ, ImageNET and COCO. Bottom: generated images from DiffusionGAN, StyleGAN2, DiT and SDXL." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.427, + 0.785, + 0.473 + ], + "angle": 0, + "content": "Following other papers [11, 43, 51] we measure performance using the area under the ROC curve (AUC) and the balanced accuracy. We also show the influence of the threshold selection on the performance." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.496, + 0.388, + 0.511 + ], + "angle": 0, + "content": "4.2 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.523, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Features analysis. First, we want to provide a better insight into the role and importance of the features described in Section 3.4: \\( D^{(0)} \\) (the 0-level coding cost gap), its slope \\( \\varDelta^{01} = D^{(0)} - D^{(1)} \\) and their absolute values. To this end, we consider the set of real and synthetic (DALL-E 2, GLIDE, Midjourney, SDXL) images of the Synthbuster dataset [3]. 
We note, in passing, that this dataset includes only uncompressed images, which dispels any possible doubt that our method exploits some JPEG compression bias between real and fake images [28]. Some selected scatter plots and graphs are shown in Fig.5. The rightmost box shows that encoding cost (NLL) and entropy (\\( H \\)) alone are not very discriminating, even if computed at the more informative level 0 (high resolution). In contrast, their difference, the 0-level coding cost gap \\( D^{(0)} \\), seems to separate the different classes quite well (central box), in particular the real class (violet) from the others. Note that the level-1 gap (not shown) is not equally discriminating, and the level-2 gap, plotted on the \\( y \\) axis, turns out to be essentially useless. In the third box we plot the empirical distributions of \\( D^{(0)} \\) for the various classes. This representation makes the good separability of the classes further clear but also highlights an unexpected phenomenon: GLIDE images group mostly to the left of the real class, that is, they have a lower-than-expected coding cost. Although not in line with our initial hypotheses, this fact nevertheless represents an anomaly, which can be detected by thresholding the absolute value of the statistic rather than the statistic itself." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "image", + "bbox": [ + 0.235, + 0.145, + 0.768, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.307, + 0.785, + 0.378 + ], + "angle": 0, + "content": "Fig. 5: Decision statistics. NLL and entropy by themselves are not discriminant (left). Their difference (center) is much more useful for detection, but only at high resolution, \\( D^{(0)} \\), while \\( D^{(1)} \\) is less discriminant and \\( D^{(2)} \\) basically useless. Right box shows histograms of \\( D^{(0)} \\) for real and synthetic images. Note that for GLIDE, \\( D^{(0)} \\) is negative, on the average. Good discrimination is still possible based on the absolute value." + }, + { + "type": "image", + "bbox": [ + 0.286, + 0.399, + 0.744, + 0.606 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.616, + 0.788, + 0.677 + ], + "angle": 0, + "content": "Fig. 6: AUC of proposed method as a function of decision statistic (see Section 3.4) and dataset of real images used to train the lossless encoder: Open Images, LAION, COCO, and their augmented versions \\((^{*})\\). Synthetic test images are selected to match the corresponding real test images: ImageNet (top), and LAION (bottom)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.704, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Influence of the real class. To better understand the role of the real dataset used to train the lossless encoder, we perform an experiment in which we vary it. Along with the original encoder pre-trained on the Open Images dataset [40] (about 338k high-resolution images), we consider two other versions, trained from scratch on the LAION dataset [66] (\\(\\simeq 117\\mathrm{k}\\)), and the COCO dataset [42] (\\(\\simeq 106\\mathrm{k}\\)), respectively, using the same hyperparameters as [6]. 
Additionally, we consider versions (marked with *) trained on the same datasets, augmented with JPEG compressed images with quality between 80 and 100. We compute the performance in terms of AUC on two different datasets of synthetic and" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.378, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.189 + ], + "angle": 0, + "content": "Table 1: Reference methods. For each one we indicate the key idea, the datasets of real and synthetic images used for training with their sizes, whether or not augmentation is used, the test strategy." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.199, + 0.785, + 0.365 + ], + "angle": 0, + "content": "
Acronym [ref]Idea/ApproachTraining Real/FakeSize(K)Augment.Test Strategy
Wang2020 [74]High diversityLSUN/ProGAN360/360global pooling
PatchFor. [7]Patch-basedCelebA,FF/various84/272resizing
Liu2022 [43]Noise-basedLSUN/ProGAN360/360global pooling
Corvi2023 [10]No-downsamplingCOCO,LSUN/Latent180/180global pooling
LGrad [72]Gradient-basedLSUN/ProGAN72/72resizing
DIRE [75]InversionLSUN-Bed/ADM40/40resizing
DE-FAKE [67]Prompt-basedLSUN/Stable Diff.20/20resizing
Ojha2023 [51]CLIPLSUN/ProGAN360/360cropping
NPR [71]ResidualLSUN/ProGAN72/72resizing
AEROBLADE [60]AE rec. error- / -- / -global distance
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.395, + 0.789, + 0.489 + ], + "angle": 0, + "content": "real images, where this latter class comes from ImageNet [15] (Fig.6, top) or LAION [66] (Fig.6, bottom). We can observe that the best and more uniform results across the four decision statistics are obtained using \\(\\mathrm{COCO}^*\\), while training on Open Images guarantees good performance if the real class is LAION, but bad performance if it is ImageNet. Additional results are included in the supplementary material." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.51, + 0.416, + 0.526 + ], + "angle": 0, + "content": "4.3 SoTA Comparison" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.538, + 0.789, + 0.765 + ], + "angle": 0, + "content": "In our analysis we include only methods with code and/or pre-trained models publicly available on-line. Eventually, we included 7 CNN-based methods [7,10, 43, 71, 72, 74, 75], 2 CLIP-based methods [51, 67] and a training-free method [60]. A brief summary of these techniques is provided in Tab.1, while a more detailed description is given in the supplementary material. For a fair comparison we avoid testing on ProGAN [36] and Latent Diffusion [61], because a good number of these supervised methods were trained on datasets that include images from these generators. Even so, we have a total of 30 datasets for testing. Results are reported in Tab.2 in terms of AUC, with the best figure for each dataset highlighted in bold. Note that each row is characterized by the name of the generator (e.g., GauGAN) and by a single letter that recalls the set of real images used to train it: S for LSUN, F for FFHQ, I for ImageNet, C for COCO, L for LAION, R for RAISE. This detail allows us to study how the performance depends on the real dataset (but with synthetic images from the same generator and with semantic content aligned with real images)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.789, + 0.842 + ], + "angle": 0, + "content": "First of all, we observe that for most reference methods the average AUC does not exceed \\(80\\%\\). Notable exceptions are the CLIP-based Ojha2023 (88.4%) and the CNN-based Corvi2023 (89.4%). Interestingly, some methods show very different performance when the real class changes. This may be due to JPEG bias as already suggested in [28, 60]. A deeper analysis on this point is presented" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.433, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.785, + 0.174 + ], + "angle": 0, + "content": "Table 2: AUC for reference and proposed methods. Best score in bold with a \\(0.5\\%\\) margin. S = LSUN, F = FFHQ, I = ImageNet, C = COCO, L = LAION, R = RAISE." + }, + { + "type": "table", + "bbox": [ + 0.245, + 0.185, + 0.761, + 0.552 + ], + "angle": 0, + "content": "
Real data | Wang2020 | PatchFor. | Liu2022 | Corvi2023 | LGrad | DIRE | DE-FAKE | Ojha2023 | NPR | AEROBLADE | Ours \( {D}^{(0)} \) | Ours \( |{D}^{(0)}| \) | Ours \( {\Delta }^{01} \) | Ours \( |{\Delta }^{01}| \)
C98.980.899.783.881.699.943.8100.89.155.199.899.899.999.999.799.799.799.799.799.799.799.799.7
GauGANC92.785.594.783.477.299.859.059.099.686.851.992.388.695.992.388.695.992.692.692.699.799.799.7
BigGANI94.7100.99.995.973.940.445.999.781.584.0100.100.100.100.100.100.100.100.100.100.100.100.100.
StarGANF98.183.899.789.199.858.339.196.7100.30.096.696.196.796.796.796.796.796.796.596.596.596.5
StyleGAN2S94.985.199.958.482.755.547.691.071.360.143.187.741.188.787.741.188.787.787.787.787.787.7
F
GigaGANI73.761.097.350.576.499.964.394.682.447.572.468.172.468.172.468.172.468.168.168.168.168.1
C79.584.099.690.976.799.987.997.695.580.696.594.094.096.797.396.797.396.797.396.797.396.7
Diff.GANS89.892.699.596.699.549.844.897.4100.43.999.499.499.499.499.499.499.499.599.599.599.599.5
GALIPC89.798.294.387.756.7100.75.698.690.765.098.496.399.799.799.799.799.799.799.799.799.799.7
DALL-EL66.471.795.098.395.299.855.997.399.524.199.295.898.298.298.298.298.298.298.298.298.298.2
DDPMF31.698.422.8100.9.823.150.577.792.481.776.625.293.879.676.625.293.879.679.679.679.679.6
ADMS67.667.670.680.381.152.037.488.294.153.149.553.569.463.159.563.169.463.169.463.171.071.0
I61.081.994.481.172.799.569.185.378.580.387.890.595.395.395.395.395.395.395.392.192.192.1
GLIDEC64.897.496.397.281.599.992.488.895.498.047.888.588.588.588.588.588.588.588.588.588.588.5
R32.295.056.686.550.642.992.272.863.387.723.289.451.165.165.165.165.165.165.165.165.165.1
L72.674.190.886.990.3100.60.295.399.868.754.584.284.284.284.284.284.284.284.284.284.284.2
DiTI58.683.188.0100.56.299.687.477.878.499.889.484.384.384.384.384.384.384.384.384.384.384.3
Stable D. 1.4C68.286.195.3100.54.799.993.397.976.599.848.474.854.674.854.654.654.654.654.654.671.471.4
R37.961.873.4100.50.037.688.087.743.096.999.499.498.798.797.097.097.097.097.097.097.297.2
Stable D. 2C56.578.694.2100.62.899.397.982.389.399.983.090.384.584.584.584.584.584.584.584.584.584.5
R50.238.734.8100.41.435.580.789.544.097.498.596.895.895.895.895.895.895.895.895.895.895.8
SDXLC83.860.889.3100.89.399.594.080.099.387.999.999.999.999.999.999.999.999.999.999.999.999.9
R54.368.431.1100.57.247.184.485.176.769.7100.100.100.100.100.100.99.199.299.299.299.299.2
Deep.-IFC78.062.772.299.968.898.996.992.991.681.991.782.388.488.488.488.488.488.488.488.479.479.4
DALL-E 2C88.552.498.988.278.699.980.697.190.059.3100.100.100.100.100.100.100.100.100.99.999.9
R64.841.970.469.458.644.770.995.239.532.8100.100.100.100.100.100.100.100.100.100.100.
DALL-E 3C65.047.399.5100.88.499.996.286.497.799.799.799.799.598.398.398.398.398.398.398.2
R10.952.70.260.837.947.692.436.448.748.379.166.778.078.178.178.178.178.178.178.1
MidjourneyR40.257.840.7100.56.351.078.166.277.099.099.799.398.598.598.598.598.598.598.598.5
Adobe FireflyR84.849.411.898.040.657.481.497.532.152.873.641.280.880.4
AVG | 68.3 | 73.3 | 77.0 | 89.4 | 68.2 | 74.6 | 72.9 | 88.4 | 80.1 | 71.2 | 83.3 | 86.4 | 88.8 | 90.0
" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.583, + 0.787, + 0.809 + ], + "angle": 0, + "content": "in the supplementary material. The proposed zero-shot approach goes above \\(80\\%\\) with all decision statistics, reaching the top value of \\(90.0\\%\\) when \\(|\\varDelta^{01}|\\) is used. Obviously, this is a very good result, but what makes it especially valuable is the absence of any dependence on the generators' models. This point is further stressed by the fact that the AUC remains extremely stable across all test sets, with a minimum of \\(65.1\\%\\) on GLIDE-R. On the contrary, the best competitor, Corvi2023, has a long score of top results but also some very poor ones. suggesting a certain instability, likely due to the presence/absence of specific artifacts in the test images, and eventually the risk of not adapting to models of new conception. We want also to draw the reader's attention on the already mentioned case of GLIDE and on the fact that the proposed method exhibits wildly different results with different decision statistics. In particular, with \\(|D^{(0)}|\\) the AUC is \\(89.4\\%\\) as opposed to the already mentioned \\(65.1\\%\\) with \\(|\\varDelta^{01}|\\). This suggests there may be better ways to exploit the basic \\(\\mathrm{NLL}^{(l)}\\) and \\(H^{(l)}\\), possibly jointly at all levels, to synthesize a better and more stable decision statistics." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.81, + 0.785, + 0.84 + ], + "angle": 0, + "content": "Finally, in Fig.7, we report the accuracy as a function of the decision threshold for the best methods. A separate curve is shown for each real image dataset by" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.378, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.144, + 0.361, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.365, + 0.144, + 0.501, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.144, + 0.642, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.645, + 0.145, + 0.784, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.304, + 0.23, + 0.699, + 0.246 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.257, + 0.788, + 0.328 + ], + "angle": 0, + "content": "Fig. 7: Balanced accuracy as a function of the detection threshold. For each dataset of real images, we average accuracy over all associated synthetic generators. The dotted vertical line indicates the global optimal threshold and the \\(\\times\\) symbol the corresponding accuracy. Note that only for the proposed method all peaks are very close, indicating the presence of a single threshold. Charts for other methods are reported in the Suppl." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.355, + 0.788, + 0.416 + ], + "angle": 0, + "content": "averaging over the associated synthetic generators. Unlike AUC, the accuracy critically depends on the selection of a good threshold and some calibration data may be needed for this purpose. Note that only for the proposed method there is a single good threshold that ensures near-optimal accuracy for all datasets." 
+ }, + { + "type": "title", + "bbox": [ + 0.215, + 0.435, + 0.357, + 0.45 + ], + "angle": 0, + "content": "4.4 Limitations" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.457, + 0.788, + 0.565 + ], + "angle": 0, + "content": "Our work was developed to detect whether an image has been fully generated and not to detect local manipulations. However, it could be easily extended to accomplish this task since we already compute a map of local pixel-wise statistics. Furthermore, our approach relies on a model of the real class learned by the encoder. If real images do not satisfy this model, the approach may not perform correctly. For example, if images are highly compressed or resized (as is the case on the web), statistical analysis may not be reliable." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.585, + 0.36, + 0.601 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.789, + 0.841 + ], + "angle": 0, + "content": "We introduced a novel zero-shot forensic detector to distinguish AI-generated images from real ones. Unlike most current methods, our approach does not require fake images during training, which ensures generalization to yet unknown generative models. The idea is to exploit an implicit model of real images and classify off-model images as synthetic. To this end, we leverage an appropriate lossless encoder, trained only on real images, that can predict the probability distribution of each pixel given its context. Synthetic images are expected to not respect this distribution, thus revealing their artificial nature. Our experiments show that the proposed detector is consistently competitive with detectors trained in supervised modality, and outperforms them in terms of generalization ability. We believe that our approach is an important stepping stone towards effective forensic tools that can operate without relying on domain- or method-specific training data. Future work will focus on making the method robust to the most common forms of image impairment, so as to make it suitable for in the wild application." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.433, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.329 + ], + "angle": 0, + "content": "Acknowledgments. We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding by the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.354, + 0.323, + 0.37 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.389, + 0.785, + 0.417 + ], + "angle": 0, + "content": "1. Albright, M., McCloskey, S.: Source Generator Attribution via Inversion. In: CVPR Workshop. pp. 96-103 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.418, + 0.785, + 0.46 + ], + "angle": 0, + "content": "2. Amoroso, R., Morelli, D., Cornia, M., Baraldi, L., Del Bimbo, A., Cucchiara, R.: Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images. ACM Trans. Multimedia Comput. Commun. Appl. (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.461, + 0.785, + 0.488 + ], + "angle": 0, + "content": "3. Bammey, Q.: Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.489, + 0.785, + 0.516 + ], + "angle": 0, + "content": "4. Boháček, M., Farid, H.: A geometric and photometric exploration of GAN and Diffusion synthesized faces. In: CVPR Workshop. pp. 874--883 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.517, + 0.785, + 0.544 + ], + "angle": 0, + "content": "5. Brock, A., Donahue, J., Simonyan, K.: Large Scale GAN Training for High Fidelity Natural Image Synthesis. In: ICLR (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.546, + 0.785, + 0.573 + ], + "angle": 0, + "content": "6. Cao, S., Wu, C.Y., Krahenbuhl, P.: Lossless Image Compression through SuperResolution. arXiv preprint arXiv:2004.02872v1 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.574, + 0.785, + 0.601 + ], + "angle": 0, + "content": "7. Chai, L., Bau, D., Lim, S.N., Isola, P.: What Makes Fake Images Detectable? Understanding Properties that Generalize. In: ECCV. pp. 103-120 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.602, + 0.785, + 0.643 + ], + "angle": 0, + "content": "8. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: CVPR. pp. 8789-8797 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.644, + 0.785, + 0.685 + ], + "angle": 0, + "content": "9. Corvi, R., Cozzolino, D., Poggi, G., Nagano, K., Verdoliva, L.: Intriguing properties of synthetic images: from generative adversarial networks to diffusion models. In: CVPR Workshop. pp. 973-982 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.687, + 0.785, + 0.728 + ], + "angle": 0, + "content": "0. Corvi, R., Cozzolino, D., Zingarini, G., Poggi, G., Nagano, K., Verdoliva, L.: On the detection of synthetic images generated by diffusion models. In: ICASSP. pp. 1-5 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.729, + 0.785, + 0.77 + ], + "angle": 0, + "content": "1. Cozzolino, D., Poggi, G., Corvi, R., Nießner, M., Verdoliva, L.: Raising the Bar of AI-generated Image Detection with CLIP. In: CVPR Workshop. pp. 4356-4366 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.22, + 0.771, + 0.785, + 0.812 + ], + "angle": 0, + "content": "12. Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., Verdoliva, L.: Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.22, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "13. 
Dang-Nguyen, D.T., Pasquini, C., Conotter, V., Boato, G.: RAISE: A Raw Images Dataset for Digital Image Forensics. In: ACM MMSys. p. 219-224 (2015)" + }, + { + "type": "list", + "bbox": [ + 0.22, + 0.389, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.377, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.189 + ], + "angle": 0, + "content": "14. Dayma, B., Patil, S., Cuenca, P., Saifullah, K., Abraham, T., Lé Khac, P., Melas, L., Ghosh, R.: DALL-E Mini (2021). https://doi.org/10.5281/zenodo.5146400, https://github.com/borisdayma/dalle-mini" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.189, + 0.785, + 0.217 + ], + "angle": 0, + "content": "15. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.217, + 0.785, + 0.242 + ], + "angle": 0, + "content": "16. Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. NeurIPS 34, 8780-8794 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.243, + 0.785, + 0.27 + ], + "angle": 0, + "content": "17. Du, M., Pentyala, S., Li, Y., Hu, X.: Towards Generalizable Deepfake Detection with Locality-Aware AutoEncoder. In: CIKM. pp. 325--334 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.27, + 0.785, + 0.311 + ], + "angle": 0, + "content": "18. Durall, R., Keuper, M., Keuper, J.: Watch Your Up-Convolution: CNN Based Generative Deep Neural Networks Are Failing to Reproduce Spectral Distributions. In: CVPR. pp. 7890-7899 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.311, + 0.785, + 0.338 + ], + "angle": 0, + "content": "19. Epstein, D.C., Jain, I., Wang, O., Zhang, R.: Online Detection of AI-Generated Images. In: ICCV Workshop. pp. 382-392 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.338, + 0.785, + 0.392 + ], + "angle": 0, + "content": "20. Epstein, Z., Hertzmann, A., Herman, L., Mahari, R., Frank, M.R., Groh, M., Schroeder, H., Akten, A.S.M., Fjeld, J., Farid, H., Leach, N., Pentland, A.S., Russakovsky, O.: Art and the science of generative AI: A deeper dive. arXiv preprint arXiv:2306.04141 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.392, + 0.785, + 0.419 + ], + "angle": 0, + "content": "21. Farid, H.: Lighting (in) consistency of paint by text. arXiv preprint arXiv:2207.13744 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.42, + 0.785, + 0.446 + ], + "angle": 0, + "content": "22. Farid, H.: Perspective (in) consistency of paint by text. arXiv preprint arXiv:2206.14617 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.446, + 0.785, + 0.473 + ], + "angle": 0, + "content": "23. Firefly, A.: https://www.adobe.com/sensei/generative-ai/firefly.html (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.473, + 0.785, + 0.514 + ], + "angle": 0, + "content": "24. Frank, J., Eisenhofer, T., Schonherr, L., Fischer, A., Kolossa, D., Holz, T.: Leveraging Frequency Analysis for Deep Fake Image Recognition. In: ICML. pp. 3247-3258 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.514, + 0.785, + 0.556 + ], + "angle": 0, + "content": "25. 
Gehrmann, S., Strobelt, H., Rush, A.M.: GLTR: Statistical detection and visualization of generated text. In: 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pp. 111-116 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.556, + 0.785, + 0.596 + ], + "angle": 0, + "content": "26. Ghosal, S.S., Chakraborty, S., Geiping, J., Huang, F., Manocha, D., Bedi, A.S.: Towards possibilities & impossibilities of AI-generated text detection: A survey. arXiv preprint arXiv:2310.15264 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.596, + 0.785, + 0.637 + ], + "angle": 0, + "content": "27. Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdolina, L.: Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In: ICME. pp. 1-6 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.637, + 0.785, + 0.677 + ], + "angle": 0, + "content": "28. Grommelt, P., Weiss, L., Pfreundt, F.J., Keuper, J.: Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets. arXiv preprint arXiv:2403.17608 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.678, + 0.785, + 0.718 + ], + "angle": 0, + "content": "29. Hans, A., Schwarzschild, A., Cherepanova, V., Kazemi, H., Saha, A., Goldblum, M., Geiping, J., Goldstein, T.: Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. In: ICML (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.719, + 0.785, + 0.759 + ], + "angle": 0, + "content": "30. He, Z., Chen, P.Y., Ho, T.Y.: RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. arXiv preprint arXiv:2405.20112 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.759, + 0.785, + 0.786 + ], + "angle": 0, + "content": "31. Heikkilä, M.: This artist is dominating AI-generated art. and he's not happy about it. MIT Technology Review (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.786, + 0.785, + 0.813 + ], + "angle": 0, + "content": "32. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. NeurIPS 33, 6840-6851 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.813, + 0.785, + 0.841 + ], + "angle": 0, + "content": "33. Jeon, H., Bang, Y.O., Kim, J., Woo, S.: T-GD: Transferable GAN-generated Images Detection Framework. In: ICML. vol. 119, pp. 4746-4761 (2020)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.841 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "34. Jeong, Y., Kim, D., Ro, Y., Kim, P., Choi, J.: Fingerprint Net: Synthesized Fingerprints for Generated Image Detection. In: ECCV. pp. 76-94 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.175, + 0.785, + 0.203 + ], + "angle": 0, + "content": "35. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: CVPR. pp. 10124-10134 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.203, + 0.785, + 0.231 + ], + "angle": 0, + "content": "36. 
Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. In: ICLR (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.231, + 0.785, + 0.259 + ], + "angle": 0, + "content": "37. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR. pp. 4401-4410 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.259, + 0.785, + 0.286 + ], + "angle": 0, + "content": "38. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: CVPR. pp. 8110-8119 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.286, + 0.785, + 0.314 + ], + "angle": 0, + "content": "39. Konstantinov, M., Shonenkov, A., Bakshandaeva, D., Schuhmann, C., Ivanova, K., Klokova, N.: https://www deepfloyd.ai/deepfloyd-if (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.314, + 0.785, + 0.369 + ], + "angle": 0, + "content": "40. Krasin, I., Duerig, T., Alldrin, N., Ferrari, V., Abu-El-Haija, S., Kuznetsova, A., Rom, H., Uijlings, J., Popov, S., Veit, A., et al.: OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.37, + 0.785, + 0.411 + ], + "angle": 0, + "content": "41. Lin, L., Gupta, N., Zhang, Y., Ren, H., Liu, C.H., Ding, F., Wang, X., Li, X., Verdoliva, L., Hu, S.: Detecting multimedia generated by large ai models: A survey. arXiv preprint arXiv:2204.06125 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.411, + 0.785, + 0.452 + ], + "angle": 0, + "content": "42. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755 (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.452, + 0.785, + 0.479 + ], + "angle": 0, + "content": "43. Liu, B., Yang, F., Bi, X., Xiao, B., Li, W., Gao, X.: Detecting generated images by real images. In: ECCV. pp. 95-110 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.48, + 0.785, + 0.521 + ], + "angle": 0, + "content": "44. Liu, H., Tan, Z., Tan, C., Wei, Y., Wang, J., Zhao, Y.: Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection. In: CVPR. pp. 10770-10780 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.522, + 0.785, + 0.549 + ], + "angle": 0, + "content": "45. Mahajan, S., Roth, S.: PixelPyramids: Exact Inference Models from Lossless Image Pyramids. In: ICCV. pp. 6639-6648 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.549, + 0.785, + 0.577 + ], + "angle": 0, + "content": "46. Mandelli, S., Bonettini, N., Bestagini, P., Tubaro, S.: Detecting GAN-generated Images by Orthogonal Training of Multiple CNNs. In: ICIP. pp. 3091-3095 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.577, + 0.785, + 0.605 + ], + "angle": 0, + "content": "47. Marra, F., Saltori, C., Boato, G., Verdoliva, L.: Incremental learning for the detection and classification of GAN-generated images. In: WIFS. pp. 1-6 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.605, + 0.616, + 0.619 + ], + "angle": 0, + "content": "48. Midjourney: https://www.midjourney.com/home (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.619, + 0.785, + 0.66 + ], + "angle": 0, + "content": "49. 
Mitchell, E., Lee, Y., Khazatsky, A., Manning, C.D., Finn, C.: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. In: ICML. pp. 24950-24962 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.66, + 0.785, + 0.702 + ], + "angle": 0, + "content": "50. Nichol, A.Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., Chen, M.: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diff. Models. In: ICML. pp. 16784-16804 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.702, + 0.785, + 0.729 + ], + "angle": 0, + "content": "51. Ojha, U., Li, Y., Lee, Y.J.: Towards universal fake image detectors that generalize across generative models. In: CVPR. pp. 24480-24489 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.729, + 0.565, + 0.744 + ], + "angle": 0, + "content": "52. OpenAI: https://openai.com/dall-e-3 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.744, + 0.785, + 0.771 + ], + "angle": 0, + "content": "53. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: CVPR. pp. 2337-2346 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.771, + 0.785, + 0.798 + ], + "angle": 0, + "content": "54. Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: ICCV. pp. 4195-4205 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.799, + 0.785, + 0.84 + ], + "angle": 0, + "content": "55. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: SDXL: Improving latent diffusion models for high-resolution image synthesis. In: ICLR (2024)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "18" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.377, + 0.128 + ], + "angle": 0, + "content": "Cozzolino et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.786, + 0.189 + ], + "angle": 0, + "content": "56. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML. pp. 8748-8763 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.786, + 0.231 + ], + "angle": 0, + "content": "57. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.231, + 0.786, + 0.272 + ], + "angle": 0, + "content": "58. Reed, S.E., van den Oord, A., Kalchbrenner, N., Colmenarejo, S.G., Wang, Z., Chen, Y., Belov, D., de Freitas, N.: Parallel multiscale autoregressive density estimation. In: ICML. pp. 2912-2921 (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.273, + 0.786, + 0.3 + ], + "angle": 0, + "content": "59. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the detection of diffusion model deepfakes. In: VISAPP. pp. 446-457 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.3, + 0.786, + 0.342 + ], + "angle": 0, + "content": "60. Ricker, J., Lukovnikov, D., Fischer, A.: AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error. In: CVPR. pp. 
9130-9140 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.342, + 0.786, + 0.37 + ], + "angle": 0, + "content": "61. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR. pp. 10684-10695 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.371, + 0.786, + 0.397 + ], + "angle": 0, + "content": "62. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/CompVis/stable-diffusion (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.398, + 0.786, + 0.424 + ], + "angle": 0, + "content": "63. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/Stability-AI/stablediffusion (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.425, + 0.786, + 0.466 + ], + "angle": 0, + "content": "64. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M.: Faceforensics++: Learning to detect manipulated facial images. In: ICCV. pp. 1-11 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.467, + 0.786, + 0.508 + ], + "angle": 0, + "content": "65. Sarkar, A., Mai, H., Mahapatra, A., Lazebnik, S., Forsyth, D.A., Bhattad, A.: Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry... for now. In: CVPR. pp. 28140-28149 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.508, + 0.786, + 0.55 + ], + "angle": 0, + "content": "66. Schuhmann, C., Kaczmarczyk, R., Komatsuzaki, A., Katta, A., Vencu, R., Beaumont, R., Jitsev, J., Coombes, T., Mullis, C.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. In: NeurIPS (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.55, + 0.786, + 0.591 + ], + "angle": 0, + "content": "67. Sha, Z., Li, Z., Yu, N., Zhang, Y.: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. In: ACM SIGSAC. pp. 3418-3432 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.591, + 0.786, + 0.619 + ], + "angle": 0, + "content": "68. Sinitsa, S., Fried, O.: Deep Image Fingerprint: Towards Low Budget Synthetic Image Detection and Model Lineage Analysis. In: WACV. pp. 4067-4076 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.619, + 0.786, + 0.66 + ], + "angle": 0, + "content": "69. Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J.W., Kreps, S., et al.: Release Strategies and the Social Impacts of Language Models. arXiv preprint arXiv:1908.09203 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.66, + 0.786, + 0.702 + ], + "angle": 0, + "content": "70. Su, J., Zhuo, T.Y., Wang, D., Nakov, P.: DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text. In: Conference on Empirical Methods in Natural Language Processing (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.702, + 0.786, + 0.743 + ], + "angle": 0, + "content": "71. Tan, C., Zhao, Y., Wei, S., Gu, G., Liu, P., Wei, Y.: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. In: CVPR. pp. 28130-28139 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.743, + 0.786, + 0.785 + ], + "angle": 0, + "content": "72. Tan, C., Zhao, Y., Wei, S., Gu, G., Wei, Y.: Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection. In: CVPR. pp. 
12105-12114 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.785, + 0.786, + 0.813 + ], + "angle": 0, + "content": "73. Tao, M., Bao, B.K., Tang, H., Xu, C.: Galip: Generative adversarial clips for text-to-image synthesis. In: CVPR. pp. 14214-14223 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.813, + 0.786, + 0.841 + ], + "angle": 0, + "content": "74. Wang, S.Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: CNN-generated images are surprisingly easy to spot... for now. In: CVPR. pp. 8692-8701 (2020)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.786, + 0.841 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.432, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Detection of AI-Generated Images" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "19" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "75. Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. ICCV pp. 22445-22455 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.175, + 0.787, + 0.203 + ], + "angle": 0, + "content": "76. Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. In: ICLR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.203, + 0.785, + 0.245 + ], + "angle": 0, + "content": "77. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.245, + 0.785, + 0.273 + ], + "angle": 0, + "content": "78. Zhang, X., Karaman, S., Chang, S.F.: Detecting and Simulating Artifacts in GAN Fake Images. In: WIFS. pp. 1-6 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.273, + 0.785, + 0.314 + ], + "angle": 0, + "content": "79. Zhong, N., Xu, Y., Qian, Z., Zhang, X.: Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection. 
arXiv preprint arXiv:2311.12397v1 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.787, + 0.314 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_origin.pdf b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2e2be73a28df2fd2b4cd03ac52d981c0af2bd812 --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/6a7701df-63a3-43ae-9803-224606ec44ab_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67587d1571a18e92c0960f2f3868e9d2fd511cf733f6baa7920956827c99f12b +size 2421394 diff --git a/2024/Zero-Shot Detection of AI-Generated Images/full.md b/2024/Zero-Shot Detection of AI-Generated Images/full.md new file mode 100644 index 0000000000000000000000000000000000000000..69625e178c3217dd990dcfc37db5c74b35161e3f --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/full.md @@ -0,0 +1,309 @@ +# Zero-Shot Detection of AI-Generated Images + +Davide Cozzolino $^{1}$ , Giovanni Poggi $^{1}$ , Matthias Nießner $^{2}$ , and Luisa Verdoliva $^{1,2}$ + +1 University Federico II of Naples, 80125 Naples, Italy + +2 Technical University of Munich, 85748 Garching, Germany {davide.cozzolino, poggi, verdoliv}@unina.it, niessner@tum.de + +Abstract. Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with more and more capabilities and unprecedented realism. New versions of many commercial tools, such as DALL-E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models it achieves an average improvement of more than $3\%$ over the SoTA in terms of accuracy. Code is available at https://grip-unina.github.io/ZED/. + +# 1 Introduction + +The quality of AI-generated images has improved tremendously in recent years, to the point where they are virtually indistinguishable from real images upon visual inspection. In addition, the latest generators are widely available online and allow easy creation and retouching of images based on simple textual prompts. All this opens the way to endless application opportunities in a variety of fields, from the creative arts to industries of all kinds. 
However, on the flip side, such tools can be also used for malicious purposes, thus posing serious threats to our society. For example, pre-trained generators can be easily optimized to generate fake works by a specific artist [31], or used to orchestrate effective, large-scale disinformation campaigns to influence public opinion in advanced democracies [20]. These immediate risks create an urgent need for reliable and automated detection of AI-generated images [41]. + +![](images/1861bae1c211181a4ebb9c70feb93a8a2ecf71a22074b8febb69e2f5c4f61f21.jpg) +Fig. 1: ZED leverages the intrinsic model of real images learned by a state-of-the-art lossless image coder. For real images, the model is correct and the actual coding cost is close its expected value. Synthetic images have different statistics than real images, so they "surprise" the encoder, and the actual coding cost differs significantly from its expected value. This is evident from the graphic on the right that shows how the coding cost gap increases for synthetic images much more than for real ones when predicting high resolution details from low resolution data. + +![](images/7df157d8f47b6ef8c4e992f84e6981c61fe476db9c268abc7921986a937978cf.jpg) + +Until very recently, supervised learning paradigms dominated the image forensics community, with deep models trained on large datasets of real and fake images [64]. These approaches, however, are tailored to specific domains and are difficult to generalize to unseen deepfake samples. In the seminal paper by Wang et al. [74], it is shown that a simple detector trained only on ProGAN images from 20 different categories generalizes well to other images created by different generative adversarial networks (GAN) thanks to suitable augmentation. However, performance still suffers on images generated by prompt-driven diffusion models (DM). Similarly, a detector suitably trained on Latent DM images performs well on all other DM images but fails to generalize properly on GAN images [10]. To reduce the dependence on training data, recent works [2, 11, 51, 67] rely on general-purpose features extracted by pre-trained visual-language models, such as CLIP (Contrastive Language-Image Pre-Training) [56]. Despite the good performance, these methods still depend on the choice of the training dataset. A recent trend to improve generalization is based on few-shot methods [12, 17, 33] which can partially solve the problem, but still require some prior knowledge of the target models, even if limited to a few images. With this work we make a step further and develop an approach that is not influenced at all by newer and previously unseen generative models. + +To this end, we propose a zero-shot detection method that only requires real images for learning their underlying distribution. Our key idea is to use lossless coding and a multi-resolution prediction strategy for computing conditional distributions of all image pixels at three different levels of resolution. Given such distributions, we compute statistics related to the actual and expected coding cost. If the image is coherent with the predicted distribution (no surprise), then there is no mismatch and the image under analysis is labelled as real. We expect synthetic images to be characterized by a higher coding cost under the distribution of real images (see Fig. 1). Based on this intuition, we design discriminative features that measure how well the image under test fits the model of real images embedded in the encoder. 
Even by using a single feature, we can obtain + +significant performance above $95\%$ in terms of AUC for several recent models, such as DALL·E, Midjourney, and SDXL. + +In summary, the main contributions of this paper are the following: + +- we propose a zero-shot detector of artificially generated images: no fake images are necessary for training which guarantees independence from any specific generation method; +- this is the first work that exploits an implicit model of real images, learnt for lossless encoding to address image forensics task; +- our experiments show on a wide variety of generative models that even using a single feature the proposed detector provides state-of-the-art results $(+3.4\%$ in terms of accuracy). + +# 2 Related work + +Supervised learning. The problem of distinguishing synthetic images from real ones is commonly formulated as a binary classification task. State-of-the-art methods explicitly or implicitly exploit forensic artifacts by leveraging a large amount of real and generated images. Some of them rely on semantic flaws, such as face asymmetries [4] or incorrect perspective, lighting, shadows [21, 22, 65]. However, technology advances very quickly and such errors will very likely disappear in next-generation tools. Therefore, most methods focus on low-level and inconspicuous artifacts [9, 18]. Major efforts have been made to prevent conventional supervised detectors from overfitting the training data. Popular recipes include using datasets as varied as possible with intense augmentation [74], pre-training models on large general-purpose datasets [46], preserving fine-grain details of images [7, 27], exploiting high-frequency artifacts in the spatial [43, 68, 72] or Fourier domain [18, 24, 78], leveraging inter-pixel correlation discrepancies [71, 79], adopting inversion techniques [1, 75]. + +With the advent of diffusion models that presents significant architectural differences with GANs, the importance to design methods that work equally well on known and unknown sources became even more evident [10]. An important finding was the increased generalization that could be achieved using pre-trained large vision-language models, such as CLIP-ViT [51]. In this case only a lightweight linear classifier is trained on top of these features to adapt to the forensic task. Very good performance is obtained on DMs even if the network was trained only on GANs. Other methods also show the potential of such approach [2, 11, 59], sometimes including multimodal features [44, 67]. + +Some supervised methods assume to have only real images available and create the synthetic images needed for training by simulating the artifacts introduced by a generator, for example by passing real images through an autoencoder [24,34,78]. The more generative architectures are simulated, the more effective is the detector. Of course, the performance degrades on images generated by an architecture not considered in the simulation phase. Differently from all these methods our approach does not require collecting or generating synthetic images thus avoiding any type of dependence on this class. + +Few-shot/incremental learning. A significant step towards improved generalization is the use of few-shot or incremental learning strategies [12, 17, 33, 47]. Along this path, a recent work [19] proposes to regularly re-train a detector on new synthetic generators in the very same temporal order of their release, as in a real-world scenario. 
Results show a good generalization to unseen models, but only as long as the architecture of new generators is similar to that of old ones. Although few-shot methods represent an important progress in reducing the dependence on training data, the ultimate goal is to remove this dependence entirely to ensure maximum generalization. In pursuit of this goal, in this work we propose a truly zero-shot detector. + +Zero-shot learning. Only a few very recent papers avoid training on synthetic data altogether. A solution was proposed in [60] based on the observation that synthetic images are reconstructed more accurately than real images by a latent DM autoencoder. The main limitation is that the method only reliably detects images generated by latent diffusion models. The method in [30], instead, exploits the fact that small perturbations of [real/synthetic] images correspond to [small/large] variations in the embedding space of a pre-trained large model. Differently from these strategies our work takes inspiration from some interesting proposals that have recently appeared for synthetic text detection [25,29,49,69]. They exploit the fact that LLMs (Large Language Models) work by generating the probability distribution of the next token given the previous ones. In the generation phase, new tokens are sequentially added to a sentence based on these distributions. In the analysis phase, one can replicate the process for a given sentence under test and measure how well the actual tokens match the predicted ones. A good match suggests that the sentence was indeed generated by an LLM. Although inspired by these methods, our zero-shot synthetic image detector differs from them because it leverages a model of real images and does not depend in any way on synthetic data or generators. Moreover, to build the model we take advantage of the remarkable field-proved ability of lossless encoders to accurately describe pixels based on their context. + +# 3 Method + +# 3.1 Background + +Here we provide some background on zero-shot methods that leverage large pre-trained language models for machine-generated text detection. They exploit the native functionality of these models to provide next-token predictions [29]. Before a string of characters $s$ can be processed by a language model, it must be parsed into a sequence of tokens (mostly words). The tokenizer $T$ outputs a list of indices + +$$ +T: s \rightarrow \left\{x _ {0}, x _ {1}, \dots , x _ {L} \right\}, \tag {1} +$$ + +where $x_{i} \in \{1, \dots, n\}$ is the index of the $i$ -th token of the sequence, addressing a size- $n$ vocabulary of tokens. The language model operates by predicting the next + +index-token given the list of previous ones, thereby allowing for the generation of a full sentence given just a short prompt. Actually, language models output more information than just the index of the most likely token. Given the list of previous indices $X_{i} = \{x_{0},\ldots ,x_{i - 1}\}$ , they provide the probability of all possible values of the current one, that is, $P(x_{i} = k|X_{i})$ , for $k = 1,\dots ,n$ . + +The idea is to exploit this functionality to measure the conformity of the string under analysis to the LLM intrinsic model of language. That is, these methods try to answer the question "How likely is it that this sentence was generated by my LLM?" 
Hence they compute (for free) the likelihood of the given list of indices under the probability distribution learned by the LLM + +$$ +P \left(x _ {0}, \dots , x _ {L}\right) = P \left(x _ {0}\right) \cdot P \left(x _ {1} \mid x _ {0}\right) \cdot \dots \cdot P \left(x _ {L} \mid x _ {0}, \dots , x _ {L - 1}\right) = P \left(x _ {0}\right) \prod_ {i = 1} ^ {L} P \left(x _ {i} \mid X _ {i}\right) \tag {2} +$$ + +In practice, the negative log-likelihood (also called log-perplexity) is computed instead, that is (neglecting $x_0$ ) + +$$ +\mathrm {N L L} = - \sum_ {i = 1} ^ {L} \log P \left(x _ {i} \mid X _ {i}\right) \tag {3} +$$ + +If the $i$ -th observed index $x_{i}$ was very likely to come after the previous ones, namely, it is not surprising, its contribution to the NLL is close to 0. On the contrary, if it was unlikely to appear, given the previous ones (an anomaly) it impacts significantly on the NLL. Overall, a sequence with low NLL is likely to have been generated by the LLM, and will be therefore detected as synthetic. Of course, this basic description is only meant to convey the general concepts, the reader is referred to the literature [26] for more details. + +# 3.2 From Text to Images + +When we try to translate the above concepts into the realm of images, we run into a big problem: the most effective and popular image generation engines do not provide anything similar to the next token distribution observed in the case of LLMs. Indeed, there exist some autoregressive synthesis methods [45,58] that could be adapted to this task, but their generation approach is very different from those of the most popular GAN- and DM-based methods. Therefore in this work we change perspective or, better said, we now assume the correct one-class perspective, and look for a model of real images, rather than synthetic ones. Armed with such a model, we will be able to decide whether a given image is unsurprising, therefore real, or somewhat anomalous, therefore synthetic, regardless of the specific generation model used to create it. + +Now, the concepts of prediction, surprise, perplexity, along with information measure and entropy, are pervasive in the literature on image coding, part of information theory. Lossless image encoders typically include a predictor that, given a suitable context, estimates the value of the target pixel, and an entropy + +encoder that efficiently represents prediction errors. Indeed, by analyzing the recent literature in the field we managed to single out a tool that perfectly suits our needs, the Super-Resolution based lossless Compressor (SReC) proposed by Cao et al. [6], which provides a computationally lightweight tool for predicting the distribution of image pixels at multiple resolution. + +# 3.3 Super-resolution based Lossless Compressor + +Here we present a high-level description of SReC, focusing only on the aspects more relevant for our purposes. The interested reader is referred to the original paper for details [6]. The general idea is to train a neural network to predict the current pixel, $x_{i,j}$ , given a set of previously coded pixels, and encode the difference between the true pixel value and its prediction. However, this purely autoregressive formulation is highly impractical, as it implies long encoding/decoding times. Therefore, SReC uses a multi-resolution prediction strategy. 
A low-resolution version $y^{(1)}$ of the original image $x^{(0)}$ is built through $2\times 2$ average pooling, that is

$$
y_{i,j}^{(1)} = \frac{x_{2i,2j}^{(0)} + x_{2i+1,2j}^{(0)} + x_{2i,2j+1}^{(0)} + x_{2i+1,2j+1}^{(0)}}{4} \tag{4}
$$

Then, each four-pixel group of the high-resolution image is predicted based only on the low-resolution image, independently of other groups at the same resolution level, allowing for parallel processing and high-speed encoding. Since the fourth pixel of a group is known, given the other three and the low-resolution image, the conditional joint distribution of the group reads

$$
\begin{aligned}
P\left(x_{2i,2j}^{(0)}, x_{2i+1,2j}^{(0)}, x_{2i,2j+1}^{(0)} \mid Y_{i,j}^{(1)}\right)
&= P\left(x_{2i,2j}^{(0)} \mid Y_{i,j}^{(1)}\right) \cdot P\left(x_{2i+1,2j}^{(0)} \mid x_{2i,2j}^{(0)}, Y_{i,j}^{(1)}\right) \\
&\quad \cdot P\left(x_{2i,2j+1}^{(0)} \mid x_{2i,2j}^{(0)}, x_{2i+1,2j}^{(0)}, Y_{i,j}^{(1)}\right)
\end{aligned} \tag{5}
$$

where $Y_{i,j}^{(1)}$ is the relevant context in the lower-resolution image, that is, a receptive field centered on $y_{i,j}^{(1)}$. Each term in this factorization is estimated by a dedicated convolutional neural network (CNN). In particular, a parametric distribution is assumed, given by the mixture of $K$ discrete logistic distributions,

$$
P(x \mid X) = \sum_{k=1}^{K} w_{k} \operatorname{logistic}\left(x \mid \mu_{k}, s_{k}\right) \tag{6}
$$

where $\mathrm{logistic}(x|\mu, s) = \sigma\left(\frac{x - \mu + 0.5}{s}\right) - \sigma\left(\frac{x - \mu - 0.5}{s}\right)$ is the difference of two sigmoid functions, with position parameter $\mu$ and scale parameter $s$, and $K = 10$ is always assumed. The CNN takes the context $X$ of the pixel of interest as input and outputs the weights of the mixture together with the position and scale parameters of all logistics. In turn, these parameters allow one to compute the desired distribution. This whole process is replicated on two more lower-resolution scales, for a total of four levels: the lowest resolution, an $8 \times 8$ subsampled "prompt" image, coded in clear, and three higher-resolution images, each one predicted from its lower-resolution version. All networks are trained to minimize the cross entropy between the predicted model probability $P_{\theta}(x)$ and the empirical data distribution $P(x)$ given by the training image dataset. We mention in passing that this loss is closely related to the log-perplexity considered for text synthesis.

![](images/efc16c4e1e602383bd83a3a98e2d204a0bb468d420e4cb55038d4ab3ccbcebd8.jpg)
Fig. 2: NLL and Entropy. We compute the spatial distribution of NLL and Entropy at three resolutions. For real images (top) the paired maps are very similar at all scales: when the uncertainty on a pixel (entropy) grows, so does the coding cost (NLL). Therefore, the NLL-Entropy difference maps are all very dark. For synthetic images (bottom) NLL and Entropy maps are not always similar, because the model is not correct, and hence the difference maps are brighter, especially the high-resolution map.

To summarize, SReC provides us with a lightweight tool for computing conditional distributions of all image pixels at three different levels of resolution, and therefore for computing all kinds of statistics that can expose the mismatch between a test image and the learned model.
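To make Eq. (6) concrete, the snippet below is a minimal NumPy sketch of a $K$-component mixture of discrete logistics evaluated at an integer pixel value; the function and variable names are ours, and it omits the boundary handling at 0 and 255 that an actual codec such as SReC would include.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discrete_logistic_mixture_pmf(x, weights, mu, scale):
    """Probability of an integer pixel value x under Eq. (6).
    weights, mu, scale: shape (K,) mixture parameters, weights summing to 1.
    Boundary corrections at x=0 and x=255 are omitted in this sketch."""
    upper = sigmoid((x - mu + 0.5) / scale)  # CDF at the upper bin edge
    lower = sigmoid((x - mu - 0.5) / scale)  # CDF at the lower bin edge
    return float(np.sum(weights * (upper - lower)))

# Toy usage with K = 10 components, as assumed in the text.
rng = np.random.default_rng(0)
K = 10
w = rng.dirichlet(np.ones(K))                # mixture weights
mu = 128.0 + 10.0 * rng.standard_normal(K)   # positions
s = np.full(K, 3.0)                          # scales
p = discrete_logistic_mixture_pmf(128, w, mu, s)
print(f"P(x=128) = {p:.4f}, per-pixel NLL = {-np.log(p):.3f} nats")
```

In SReC these parameters are not fixed but are produced by the level-specific CNN from the pixel's context, so the probability assigned to the observed value, and hence its coding cost, changes from pixel to pixel.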
Considering that SReC achieves state-of-the-art performance in lossless image compression, one can also argue that the learned model of real images is very accurate. Given this tool, we can now design a zero-shot detector of synthetic images.

# 3.4 Features and Decision Statistics

Let $x \in \{0, \ldots, 255\}^{N \times M \times 3}$ be the image under test. In our multi-resolution framework, this will be the highest-resolution version, $x^{(0)} = x$. Through $2 \times 2$ average pooling, we generate a lower resolution version $y^{(1)} = \mathrm{avpool}(x^{(0)})$, and then, through rounding, its integer-valued version $x^{(1)} = \mathrm{round}(y^{(1)})$. The process is repeated, and eventually we have four integer versions of the image $\{x^{(0)}, x^{(1)}, x^{(2)}, x^{(3)}\}$, together with three non-integer versions $\{y^{(1)}, y^{(2)}, y^{(3)}\}$. In the context of lossless coding, the lowest resolution version, $x^{(3)}$, must be sent in clear together with the rounding bits at levels 3, 2, and 1, but we mention this only for completeness and for a more compelling interpretation of results. The CNNs trained on real images provide the predicted probability distribution for all pixels of levels 0, 1, and 2

$$
P\left(x_{i,j}^{(l)} = k \mid X_{i,j}^{(l)}\right) \tag{7}
$$

where $k \in \{0, \dots, 255\}$ and $X_{i,j}^{(l)}$ is the context for pixel $x_{i,j}^{(l)}$, including a portion of the lower-resolution image $y^{(l+1)}$ and possibly some same-resolution neighbors of the current pixel.

![](images/21fdcf61e015902664f93189c579d3c6e08e3d04b3288e0f29544ae1cf64a3df.jpg)
Fig. 3: Extracting decision statistics. The full resolution image $x^{(0)}$ is downsampled three times. The lowest-resolution version, $x^{(3)}$, feeds the level-2 CNN, which outputs the probability distributions of level-2 pixels. These distributions, together with the actual level-2 pixels, are used to compute the level-2 coding cost $\mathrm{NLL}^{(2)}$ and its expected value $H^{(2)}$. All these steps are then repeated for levels 1 and 0. Eventually, NLLs and entropies are combined to compute the decision statistics.

Given the above distribution, we compute the negative log likelihood and the entropy at each pixel

$$
\begin{aligned}
\mathrm{NLL}_{i,j}^{(l)} &= -\log P\left(x_{i,j}^{(l)} \mid X_{i,j}^{(l)}\right) \\
H_{i,j}^{(l)} &= -\sum_{k} P\left(k \mid X_{i,j}^{(l)}\right) \log P\left(k \mid X_{i,j}^{(l)}\right)
\end{aligned} \tag{8}
$$

These quantities are shown in Fig.2 for two sample images, real and synthetic. Then, through spatial averaging, we obtain the corresponding quantities for the images at all resolution levels, $\mathrm{NLL}^{(l)} = \langle \mathrm{NLL}_{i,j}^{(l)}\rangle$ and $H^{(l)} = \langle H_{i,j}^{(l)}\rangle$, for $l = 0,1,2$. These are the features the system associates with the input image, and our decision statistics will be suitable combinations of them.

Before going on, it is convenient to give a physical interpretation of these quantities. Each NLL can be interpreted as the actual coding cost for the corresponding image, while each entropy can be interpreted as the expected value of the coding cost given the context, that is, the cost incurred when the image is coherent with the predicted distribution. In the presence of a mismatch, $\mathrm{NLL} - H > 0$, on the average, with a gap that increases with increasing distribution mismatch.
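In code, once the per-pixel distributions of Eq. (7) are available, the whole feature extraction reduces to a few array operations. The sketch below is a minimal NumPy illustration of the pipeline of Fig. 3 for a single-channel image; `predict_distributions` is a hypothetical stand-in for the level-wise SReC networks, and the coding-cost gaps it returns correspond to the statistics $D^{(l)}$ and $\Delta^{01}$ discussed in the next paragraphs, not to the authors' exact implementation.

```python
import numpy as np

def averaged_nll_and_entropy(probs, pixels):
    """probs:  (H, W, 256) array with P(k | context) for every pixel (Eq. 7).
    pixels: (H, W) integer image at the same resolution level.
    Returns the spatially averaged NLL^(l) and H^(l) of Eq. (8)."""
    eps = 1e-12
    h, w = pixels.shape
    p_obs = probs[np.arange(h)[:, None], np.arange(w)[None, :], pixels]
    nll_map = -np.log(p_obs + eps)                           # actual coding cost
    ent_map = -np.sum(probs * np.log(probs + eps), axis=-1)  # expected coding cost
    return nll_map.mean(), ent_map.mean()

def zed_statistics(image, predict_distributions):
    """image: (H, W) uint8 array with H, W divisible by 8 (sketch only).
    predict_distributions(x_l, y_lp1): hypothetical stand-in for the level-l
    SReC network, returning an (H_l, W_l, 256) array of probabilities."""
    # Multi-resolution pyramid: y^(l+1) = avgpool(x^(l)), x^(l+1) = round(y^(l+1)).
    x = [image.astype(np.int64)]
    y = [None]
    for _ in range(3):
        a = x[-1].astype(np.float64)
        pooled = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
        y.append(pooled)
        x.append(np.rint(pooled).astype(np.int64))

    gaps = []
    for level in (0, 1, 2):  # level l is predicted from level l+1
        probs = predict_distributions(x[level], y[level + 1])
        nll, ent = averaged_nll_and_entropy(probs, x[level])
        gaps.append(nll - ent)  # coding cost gap D^(l)

    d0, d1 = gaps[0], gaps[1]
    delta01 = d0 - d1
    return {"D0": d0, "absD0": abs(d0), "Delta01": delta01, "absDelta01": abs(delta01)}
```

A real image should yield gaps close to zero at all levels, whereas an off-model (synthetic) image should show a large $D^{(0)}$ or a large jump $\Delta^{01}$, which is exactly what the decision statistics introduced below capture.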
Our fundamental assumption is that the trained CNNs provide a good model of real images, and synthetic images tend not to follow the same model. Therefore, we expect that synthetic images are characterized by higher coding cost, hence higher NLL, under this distribution. This observation would lead us to use the NLLs as decision + +statistics. However, the coding cost does not depend only on the distribution mismatch but also (predominantly) on the intrinsic information content of the image, measured by the entropy. A complex image, say a photo of a crowd, is more difficult to encode/describe than a smooth image, say a blue sky, no matter what model we use. Therefore, to get rid of this bias, we consider the coding cost gap, defined as the difference $D^{(l)} = \mathrm{NLL}^{(l)} - H^{(l)}$ , as decision statistic. Hence, for each image, we have three basic decision statistics, one for each resolution level. It is worth observing that some forms of normalization are adopted for machine generated text detection as well [29, 49, 70]. A block diagram of our method is shown in Fig.3. + +A sample graph of the coding cost gap is shown in Fig.1, on the right. For real images and three families of synthetic images we report the average gap (solid line) plus/minus its standard deviation (colored band) for the various resolutions levels. Two important observations can be made. First of all, the level-0 coding cost gap, concerning the full resolution image, seems to be much more discriminant than the others. Moreover, the gap grows much faster for synthetic images than for real images when going from level 1 to level 0. Therefore, as decision statistics we will consider both $D^{(0)}$ (the level-0 coding cost gap) and $\Delta^{01} = D^{(0)} - D^{(1)}$ (its slope). In addition, in preliminary experiments we observed that synthetic images are sometimes characterized by a coding cost much lower rather than much higher than expected, that is the NLL is much lower than the entropy. This is also an anomaly, which signals the likely synthetic nature of the image. Therefore, besides the above statistics we also consider their absolute values $|D^{(0)}|$ and $|\Delta^{(01)}|$ . These observations are supported by the sample graphical analysis shown in Fig.5 in the ablation study. + +# 4 Results + +# 4.1 Datasets and Metrics + +We benchmarked our model on a large variety of synthetic generators both GANs and DMs: GauGAN [53], BigGAN [5], StarGAN [8], StyleGAN2 [38], DiffusionGAN [76], GigaGAN [35], GALIP [73], DDPM [32], ADM [16], GLIDE [50], Stable Diffusion [62, 63], DiT [54], DeepFloyd-IF [39], Stable Diffusion XL [55], DALL-E [14], DALL-E 2 [57], DALL-E 3 [52], Midjourney V5 [48], and Adobe Firefly [23]. We collected images from publicly available datasets [3,10,51,74] and generated additional images as needed when they were not publicly available. We ensured that all datasets included pristine and synthetic images with similar semantic content, both compressed and uncompressed, to avoid any kind of bias (see Fig.4). For some synthetic generators we have multiple datasets, built on the basis of different real image datasets LSUN [77], FFHQ [37], ImageNet [15], COCO [42], LAION [66] and RAISE [13]. This is a fortunate circumstance: we kept them carefully separate as this allows us to analyze how the performance of a detector depends on the class of real images used in the synthesis phase. Overall we used a total of $29\mathrm{k}$ synthetic images and $6\mathrm{k}$ real images. 
More details on the generated and actual images are provided in the supplementary material. + +![](images/231219b8aa647713db5823eb166fc61ec2b1b695db14bba71ce64e96e2058439.jpg) +LSUN + +![](images/bb239cbf94a53ccb942aec78d1e3ee4b36954e1eb7f2c502e43523006d518b25.jpg) +FFHQ + +![](images/4a115c6e880666decef07509d79bc9ba88b0390e613c14fe5dc880a36a060486.jpg) +ImageNet + +![](images/b7bfe6362a66142dbd2a2f70ca76f862faa3a7c5bee546a67beb53a1ed0ef7d0.jpg) +COCO + +![](images/9e37f264c6fba4e67044aa0bce9e3ee7cc4a3416751b1ccf134f277112e9f7a6.jpg) +Diffusion-GAN + +![](images/943fc5ca86f34081a895e86e759e04053730a83344509087c558dbc13ff9aad0.jpg) +StyleGAN2 + +![](images/83d9e8f5b5175e4b859f3d3a120ac90a5d575b32bbc9550e813c73b4bd92c395.jpg) +DiT +Fig. 4: Examples of real and AI-generated images of different categories used in our experiments. Top: real images from LSUN, FFHQ, ImageNET and COCO. Bottom: generated images from DiffusionGAN, StyleGAN2, DiT and SDXL. + +![](images/11001dc1a4c83fd4e760e098c0c48c0af1508c6ba61af9b46c97fd33aba88f7f.jpg) +SDXL + +Following other papers [11, 43, 51] we measure performance using the area under the ROC curve (AUC) and the balanced accuracy. We also show the influence of the threshold selection on the performance. + +# 4.2 Ablation Study + +Features analysis. First, we want to provide a better insight into the role and importance of the features described in Section 3.4: $D^{(0)}$ (the 0-level coding cost gap), its slope $\varDelta^{01} = D^{(0)} - D^{(1)}$ and their absolute values. To this end, we consider the set of real and synthetic (DALL-E 2, GLIDE, Midjourney, SDXL) images of the Synthbuster dataset [3]. We note, in passing, that this dataset includes only uncompressed images, which dispels any possible doubt that our method exploits some JPEG compression bias between real and fake images [28]. Some selected scatter plots and graphs are shown in Fig.5. The rightmost box shows that encoding cost (NLL) and entropy ( $H$ ) alone are not very discriminating, even if computed at the more informative level 0 (high resolution). In contrast, their difference, the 0-level coding cost gap $D^{(0)}$ , seems to separate the different classes quite well (central box), in particular the real class (violet) from the others. Note that the level-1 gap (not shown) is not equally discriminating, and the level-2 gap, plotted on the $y$ axis, turns out to be essentially useless. In the third box we plot the empirical distributions of $D^{(0)}$ for the various classes. This representation makes the good separability of the classes further clear but also highlights an unexpected phenomenon: GLIDE images group mostly to the left of the real class, that is, they have a lower-than-expected coding cost. Although not in line with our initial hypotheses, this fact nevertheless represents an anomaly, which can be detected by thresholding the absolute value of the statistic rather than the statistic itself. + +![](images/e0d1f90588a1d2d1fe7366bc64d08cf8c2465ccdafa765b49781168c5e54eaaf.jpg) +Fig. 5: Decision statistics. NLL and entropy by themselves are not discriminant (left). Their difference (center) is much more useful for detection, but only at high resolution, $D^{(0)}$ , while $D^{(1)}$ is less discriminant and $D^{(2)}$ basically useless. Right box shows histograms of $D^{(0)}$ for real and synthetic images. Note that for GLIDE, $D^{(0)}$ is negative, on the average. Good discrimination is still possible based on the absolute value. 
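Since every variant of the detector reduces an image to a single scalar score, the evaluation in this section relies on the threshold-free AUC plus the balanced accuracy at a chosen threshold. A minimal sketch, assuming scikit-learn and randomly generated placeholder scores rather than the paper's data, of how these metrics and a global threshold can be obtained:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, roc_auc_score, roc_curve

# Placeholder per-image scores, e.g. |Delta^01| values (not the paper's data).
rng = np.random.default_rng(1)
scores_real = rng.normal(0.02, 0.01, 500)   # real images: small coding-cost gap
scores_fake = rng.normal(0.10, 0.05, 500)   # synthetic images: larger gap
scores = np.concatenate([scores_real, scores_fake])
labels = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])  # 1 = synthetic

auc = roc_auc_score(labels, scores)          # threshold-free metric

# Balanced accuracy needs a threshold; one simple calibration rule is to
# pick the ROC point maximizing TPR - FPR on held-out data.
fpr, tpr, thr = roc_curve(labels, scores)
best_thr = thr[np.argmax(tpr - fpr)]
bal_acc = balanced_accuracy_score(labels, (scores >= best_thr).astype(int))

print(f"AUC = {auc:.3f}, threshold = {best_thr:.3f}, balanced accuracy = {bal_acc:.3f}")
```

The dependence of the balanced accuracy on this threshold is exactly what Fig. 7 examines for the strongest detectors.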
+ +![](images/9decf8c6e1180ab5e73dda0f803d59989cc177f1486b60612b4544c92cec3c53.jpg) +Fig. 6: AUC of proposed method as a function of decision statistic (see Section 3.4) and dataset of real images used to train the lossless encoder: Open Images, LAION, COCO, and their augmented versions $(^{*})$ . Synthetic test images are selected to match the corresponding real test images: ImageNet (top), and LAION (bottom). + +Influence of the real class. To better understand the role of the real dataset used to train the lossless encoder, we perform an experiment in which we vary it. Along with the original encoder pre-trained on the Open Images dataset [40] (about 338k high-resolution images), we consider two other versions, trained from scratch on the LAION dataset [66] ( $\simeq 117\mathrm{k}$ ), and the COCO dataset [42] ( $\simeq 106\mathrm{k}$ ), respectively, using the same hyperparameters as [6]. Additionally, we consider versions (marked with *) trained on the same datasets, augmented with JPEG compressed images with quality between 80 and 100. We compute the performance in terms of AUC on two different datasets of synthetic and + +Table 1: Reference methods. For each one we indicate the key idea, the datasets of real and synthetic images used for training with their sizes, whether or not augmentation is used, the test strategy. + +
| Acronym [ref] | Idea/Approach | Training Real/Fake | Size (K) | Augment. | Test Strategy |
|---|---|---|---|---|---|
| Wang2020 [74] | High diversity | LSUN/ProGAN | 360/360 | | global pooling |
| PatchFor. [7] | Patch-based | CelebA,FF/various | 84/272 | | resizing |
| Liu2022 [43] | Noise-based | LSUN/ProGAN | 360/360 | | global pooling |
| Corvi2023 [10] | No-downsampling | COCO,LSUN/Latent | 180/180 | | global pooling |
| LGrad [72] | Gradient-based | LSUN/ProGAN | 72/72 | | resizing |
| DIRE [75] | Inversion | LSUN-Bed/ADM | 40/40 | | resizing |
| DE-FAKE [67] | Prompt-based | LSUN/Stable Diff. | 20/20 | | resizing |
| Ojha2023 [51] | CLIP | LSUN/ProGAN | 360/360 | | cropping |
| NPR [71] | Residual | LSUN/ProGAN | 72/72 | | resizing |
| AEROBLADE [60] | AE rec. error | – / – | – / – | | global distance |
+ +real images, where this latter class comes from ImageNet [15] (Fig.6, top) or LAION [66] (Fig.6, bottom). We can observe that the best and more uniform results across the four decision statistics are obtained using $\mathrm{COCO}^*$ , while training on Open Images guarantees good performance if the real class is LAION, but bad performance if it is ImageNet. Additional results are included in the supplementary material. + +# 4.3 SoTA Comparison + +In our analysis we include only methods with code and/or pre-trained models publicly available on-line. Eventually, we included 7 CNN-based methods [7,10, 43, 71, 72, 74, 75], 2 CLIP-based methods [51, 67] and a training-free method [60]. A brief summary of these techniques is provided in Tab.1, while a more detailed description is given in the supplementary material. For a fair comparison we avoid testing on ProGAN [36] and Latent Diffusion [61], because a good number of these supervised methods were trained on datasets that include images from these generators. Even so, we have a total of 30 datasets for testing. Results are reported in Tab.2 in terms of AUC, with the best figure for each dataset highlighted in bold. Note that each row is characterized by the name of the generator (e.g., GauGAN) and by a single letter that recalls the set of real images used to train it: S for LSUN, F for FFHQ, I for ImageNet, C for COCO, L for LAION, R for RAISE. This detail allows us to study how the performance depends on the real dataset (but with synthetic images from the same generator and with semantic content aligned with real images). + +First of all, we observe that for most reference methods the average AUC does not exceed $80\%$ . Notable exceptions are the CLIP-based Ojha2023 (88.4%) and the CNN-based Corvi2023 (89.4%). Interestingly, some methods show very different performance when the real class changes. This may be due to JPEG bias as already suggested in [28, 60]. A deeper analysis on this point is presented + +Table 2: AUC for reference and proposed methods. Best score in bold with a $0.5\%$ margin. S = LSUN, F = FFHQ, I = ImageNet, C = COCO, L = LAION, R = RAISE. + +
Generator | Real data | Wang2020 | PatchFor. | Liu2022 | Corvi2023 | LGrad | DIRE | DE-FAKE | Ojha2023 | NPR | AEROBLADE | Ours $D^{(0)}$ | Ours $|D^{(0)}|$ | Ours $\Delta^{01}$ | Ours $|\Delta^{01}|$
C98.980.899.783.881.699.943.8100.89.155.199.899.899.999.999.799.799.799.799.799.799.799.799.7
GauGANC92.785.594.783.477.299.859.059.099.686.851.992.388.695.992.388.695.992.692.692.699.799.799.7
BigGANI94.7100.99.995.973.940.445.999.781.584.0100.100.100.100.100.100.100.100.100.100.100.100.100.
StarGANF98.183.899.789.199.858.339.196.7100.30.096.696.196.796.796.796.796.796.796.596.596.596.5
StyleGAN2S94.985.199.958.482.755.547.691.071.360.143.187.741.188.787.741.188.787.787.787.787.787.7
F
GigaGANI73.761.097.350.576.499.964.394.682.447.572.468.172.468.172.468.172.468.168.168.168.168.1
C79.584.099.690.976.799.987.997.695.580.696.594.094.096.797.396.797.396.797.396.797.396.7
Diff.GANS89.892.699.596.699.549.844.897.4100.43.999.499.499.499.499.499.499.499.599.599.599.599.5
GALIPC89.798.294.387.756.7100.75.698.690.765.098.496.399.799.799.799.799.799.799.799.799.799.7
DALL-EL66.471.795.098.395.299.855.997.399.524.199.295.898.298.298.298.298.298.298.298.298.298.2
DDPMF31.698.422.8100.9.823.150.577.792.481.776.625.293.879.676.625.293.879.679.679.679.679.6
ADMS67.667.670.680.381.152.037.488.294.153.149.553.569.463.159.563.169.463.169.463.171.071.0
I61.081.994.481.172.799.569.185.378.580.387.890.595.395.395.395.395.395.395.392.192.192.1
GLIDEC64.897.496.397.281.599.992.488.895.498.047.888.588.588.588.588.588.588.588.588.588.588.5
R32.295.056.686.550.642.992.272.863.387.723.289.451.165.165.165.165.165.165.165.165.165.1
L72.674.190.886.990.3100.60.295.399.868.754.584.284.284.284.284.284.284.284.284.284.284.2
DiTI58.683.188.0100.56.299.687.477.878.499.889.484.384.384.384.384.384.384.384.384.384.384.3
Stable D. 1.4C68.286.195.3100.54.799.993.397.976.599.848.474.854.674.854.654.654.654.654.654.671.471.4
R37.961.873.4100.50.037.688.087.743.096.999.499.498.798.797.097.097.097.097.097.097.297.2
Stable D. 2C56.578.694.2100.62.899.397.982.389.399.983.090.384.584.584.584.584.584.584.584.584.584.5
R50.238.734.8100.41.435.580.789.544.097.498.596.895.895.895.895.895.895.895.895.895.895.8
SDXLC83.860.889.3100.89.399.594.080.099.387.999.999.999.999.999.999.999.999.999.999.999.999.9
R54.368.431.1100.57.247.184.485.176.769.7100.100.100.100.100.100.99.199.299.299.299.299.2
Deep.-IFC78.062.772.299.968.898.996.992.991.681.991.782.388.488.488.488.488.488.488.488.479.479.4
DALL-E 2C88.552.498.988.278.699.980.697.190.059.3100.100.100.100.100.100.100.100.100.99.999.9
R64.841.970.469.458.644.770.995.239.532.8100.100.100.100.100.100.100.100.100.100.100.
DALL-E 3C65.047.399.5100.88.499.996.286.497.799.799.799.799.598.398.398.398.398.398.398.2
R10.952.70.260.837.947.692.436.448.748.379.166.778.078.178.178.178.178.178.178.1
MidjourneyR40.257.840.7100.56.351.078.166.277.099.099.799.398.598.598.598.598.598.598.598.5
Adobe FireflyR84.849.411.898.040.657.481.497.532.152.873.641.280.880.4
AVG68.373.377.089.468.274.672.988.480.171.283.386.488.888.890.0
in the supplementary material. The proposed zero-shot approach goes above $80\%$ with all decision statistics, reaching the top value of $90.0\%$ when $|\varDelta^{01}|$ is used. Obviously, this is a very good result, but what makes it especially valuable is the absence of any dependence on the generators' models. This point is further stressed by the fact that the AUC remains extremely stable across all test sets, with a minimum of $65.1\%$ on GLIDE-R. On the contrary, the best competitor, Corvi2023, has a long list of top results but also some very poor ones, suggesting a certain instability, likely due to the presence/absence of specific artifacts in the test images, and eventually the risk of not adapting to newly conceived models. We also want to draw the reader's attention to the already mentioned case of GLIDE and to the fact that the proposed method exhibits wildly different results with different decision statistics. In particular, with $|D^{(0)}|$ the AUC is $89.4\%$ as opposed to the already mentioned $65.1\%$ with $|\varDelta^{01}|$. This suggests there may be better ways to exploit the basic $\mathrm{NLL}^{(l)}$ and $H^{(l)}$, possibly jointly at all levels, to synthesize a better and more stable decision statistic.

Finally, in Fig.7, we report the accuracy as a function of the decision threshold for the best methods. A separate curve is shown for each real image dataset by averaging over the associated synthetic generators. Unlike AUC, the accuracy critically depends on the selection of a good threshold and some calibration data may be needed for this purpose. Note that only for the proposed method there is a single good threshold that ensures near-optimal accuracy for all datasets.

![](images/4e21d0031c615837cb8a5a64ab07a2ce4aa27497ada7559bbcb459410c5ad7c3.jpg)
Fig. 7: Balanced accuracy as a function of the detection threshold. For each dataset of real images, we average accuracy over all associated synthetic generators. The dotted vertical line indicates the global optimal threshold and the $\times$ symbol the corresponding accuracy. Note that only for the proposed method all peaks are very close, indicating the presence of a single threshold. Charts for other methods are reported in the Suppl.

![](images/0e71efdcbd0b70bafef580e3faab45897e7b5dda058041bfe72d36967d5b3a51.jpg)

![](images/4ee059c975ba3f9c60be3ad22d7d0186c303ed6ae9a73cf9cec89ad584c1662e.jpg)

![](images/c16b149cb21ba2a87c32a0e78158ea1361ad650288d9756676f79ea09e97e8e5.jpg)

![](images/8b60c82f1664d71b2d47eca0399985d53656884a7ba43874bcc073cab300070c.jpg)

# 4.4 Limitations

Our work was developed to detect whether an image has been fully generated and not to detect local manipulations. However, it could be easily extended to accomplish this task since we already compute a map of local pixel-wise statistics. Furthermore, our approach relies on a model of the real class learned by the encoder. If real images do not satisfy this model, the approach may not perform correctly. For example, if images are highly compressed or resized (as is the case on the web), statistical analysis may not be reliable.

# 5 Conclusion

We introduced a novel zero-shot forensic detector to distinguish AI-generated images from real ones. Unlike most current methods, our approach does not require fake images during training, which ensures generalization to yet unknown generative models. The idea is to exploit an implicit model of real images and classify off-model images as synthetic. 
+ +# 4.4 Limitations + +Our work was developed to detect whether an image has been fully generated and not to detect local manipulations. However, it could be easily extended to accomplish this task since we already compute a map of local pixel-wise statistics. Furthermore, our approach relies on a model of the real class learned by the encoder. If real images do not satisfy this model, the approach may not perform correctly. For example, if images are highly compressed or resized (as is the case on the web), statistical analysis may not be reliable. + +# 5 Conclusion + +We introduced a novel zero-shot forensic detector to distinguish AI-generated images from real ones. Unlike most current methods, our approach does not require fake images during training, which ensures generalization to yet-unknown generative models. The idea is to exploit an implicit model of real images and classify off-model images as synthetic. To this end, we leverage an appropriate lossless encoder, trained only on real images, that can predict the probability distribution of each pixel given its context. Synthetic images are expected not to respect this distribution, thus revealing their artificial nature. Our experiments show that the proposed detector is consistently competitive with detectors trained in a supervised fashion, and outperforms them in terms of generalization ability. We believe that our approach is an important stepping stone towards effective forensic tools that can operate without relying on domain- or method-specific training data. Future work will focus on making the method robust to the most common forms of image impairment, so as to make it suitable for in-the-wild application. + +Acknowledgments. We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding from the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093. + +# References + +1. Albright, M., McCloskey, S.: Source Generator Attribution via Inversion. In: CVPR Workshop. pp. 96-103 (2019) +2. Amoroso, R., Morelli, D., Cornia, M., Baraldi, L., Del Bimbo, A., Cucchiara, R.: Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images. ACM Trans. Multimedia Comput. Commun. Appl. (2024) +3. Bammey, Q.: Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing (2023) +4. Boháček, M., Farid, H.: A geometric and photometric exploration of GAN and Diffusion synthesized faces. In: CVPR Workshop. pp. 874-883 (2023) +5. Brock, A., Donahue, J., Simonyan, K.: Large Scale GAN Training for High Fidelity Natural Image Synthesis. In: ICLR (2018) +6. Cao, S., Wu, C.Y., Krahenbuhl, P.: Lossless Image Compression through Super-Resolution. arXiv preprint arXiv:2004.02872v1 (2020) +7. Chai, L., Bau, D., Lim, S.N., Isola, P.: What Makes Fake Images Detectable? Understanding Properties that Generalize. In: ECCV. pp. 103-120 (2020) +8. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: CVPR. pp. 8789-8797 (2018) +9. Corvi, R., Cozzolino, D., Poggi, G., Nagano, K., Verdoliva, L.: Intriguing properties of synthetic images: from generative adversarial networks to diffusion models. In: CVPR Workshop. pp. 973-982 (2023) +10. Corvi, R., Cozzolino, D., Zingarini, G., Poggi, G., Nagano, K., Verdoliva, L.: On the detection of synthetic images generated by diffusion models. In: ICASSP. pp. 1-5 (2023) +11. Cozzolino, D., Poggi, G., Corvi, R., Nießner, M., Verdoliva, L.: Raising the Bar of AI-generated Image Detection with CLIP. In: CVPR Workshop. pp. 4356-4366 (2024) +12.
Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., Verdoliva, L.: Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510 (2018) +13. Dang-Nguyen, D.T., Pasquini, C., Conotter, V., Boato, G.: RAISE: A Raw Images Dataset for Digital Image Forensics. In: ACM MMSys. pp. 219-224 (2015) + +14. Dayma, B., Patil, S., Cuenca, P., Saifullah, K., Abraham, T., Lé Khac, P., Melas, L., Ghosh, R.: DALL-E Mini (2021). https://doi.org/10.5281/zenodo.5146400, https://github.com/borisdayma/dalle-mini +15. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009) +16. Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. NeurIPS 34, 8780-8794 (2021) +17. Du, M., Pentyala, S., Li, Y., Hu, X.: Towards Generalizable Deepfake Detection with Locality-Aware AutoEncoder. In: CIKM. pp. 325-334 (2020) +18. Durall, R., Keuper, M., Keuper, J.: Watch Your Up-Convolution: CNN Based Generative Deep Neural Networks Are Failing to Reproduce Spectral Distributions. In: CVPR. pp. 7890-7899 (2020) +19. Epstein, D.C., Jain, I., Wang, O., Zhang, R.: Online Detection of AI-Generated Images. In: ICCV Workshop. pp. 382-392 (2023) +20. Epstein, Z., Hertzmann, A., Herman, L., Mahari, R., Frank, M.R., Groh, M., Schroeder, H., Akten, A.S.M., Fjeld, J., Farid, H., Leach, N., Pentland, A.S., Russakovsky, O.: Art and the science of generative AI: A deeper dive. arXiv preprint arXiv:2306.04141 (2023) +21. Farid, H.: Lighting (in) consistency of paint by text. arXiv preprint arXiv:2207.13744 (2022) +22. Farid, H.: Perspective (in) consistency of paint by text. arXiv preprint arXiv:2206.14617 (2022) +23. Firefly, A.: https://www.adobe.com/sensei/generative-ai/firefly.html (2023) +24. Frank, J., Eisenhofer, T., Schonherr, L., Fischer, A., Kolossa, D., Holz, T.: Leveraging Frequency Analysis for Deep Fake Image Recognition. In: ICML. pp. 3247-3258 (2020) +25. Gehrmann, S., Strobelt, H., Rush, A.M.: GLTR: Statistical detection and visualization of generated text. In: 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pp. 111-116 (2019) +26. Ghosal, S.S., Chakraborty, S., Geiping, J., Huang, F., Manocha, D., Bedi, A.S.: Towards possibilities & impossibilities of AI-generated text detection: A survey. arXiv preprint arXiv:2310.15264 (2023) +27. Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In: ICME. pp. 1-6 (2021) +28. Grommelt, P., Weiss, L., Pfreundt, F.J., Keuper, J.: Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets. arXiv preprint arXiv:2403.17608 (2024) +29. Hans, A., Schwarzschild, A., Cherepanova, V., Kazemi, H., Saha, A., Goldblum, M., Geiping, J., Goldstein, T.: Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. In: ICML (2024) +30. He, Z., Chen, P.Y., Ho, T.Y.: RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. arXiv preprint arXiv:2405.20112 (2024) +31. Heikkilä, M.: This artist is dominating AI-generated art, and he's not happy about it. MIT Technology Review (2022) +32. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. NeurIPS 33, 6840-6851 (2020) +33. Jeon, H., Bang, Y.O., Kim, J., Woo, S.: T-GD: Transferable GAN-generated Images Detection Framework. In: ICML. vol. 119, pp. 4746-4761 (2020) + +34.
Jeong, Y., Kim, D., Ro, Y., Kim, P., Choi, J.: FingerprintNet: Synthesized Fingerprints for Generated Image Detection. In: ECCV. pp. 76-94 (2022) +35. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up GANs for text-to-image synthesis. In: CVPR. pp. 10124-10134 (2023) +36. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. In: ICLR (2018) +37. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR. pp. 4401-4410 (2019) +38. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: CVPR. pp. 8110-8119 (2020) +39. Konstantinov, M., Shonenkov, A., Bakshandaeva, D., Schuhmann, C., Ivanova, K., Klokova, N.: https://www.deepfloyd.ai/deepfloyd-if (2023) +40. Krasin, I., Duerig, T., Alldrin, N., Ferrari, V., Abu-El-Haija, S., Kuznetsova, A., Rom, H., Uijlings, J., Popov, S., Veit, A., et al.: OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages (2017) +41. Lin, L., Gupta, N., Zhang, Y., Ren, H., Liu, C.H., Ding, F., Wang, X., Li, X., Verdoliva, L., Hu, S.: Detecting multimedia generated by large AI models: A survey. arXiv preprint arXiv:2204.06125 (2024) +42. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755 (2014) +43. Liu, B., Yang, F., Bi, X., Xiao, B., Li, W., Gao, X.: Detecting generated images by real images. In: ECCV. pp. 95-110 (2022) +44. Liu, H., Tan, Z., Tan, C., Wei, Y., Wang, J., Zhao, Y.: Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection. In: CVPR. pp. 10770-10780 (2024) +45. Mahajan, S., Roth, S.: PixelPyramids: Exact Inference Models from Lossless Image Pyramids. In: ICCV. pp. 6639-6648 (2021) +46. Mandelli, S., Bonettini, N., Bestagini, P., Tubaro, S.: Detecting GAN-generated Images by Orthogonal Training of Multiple CNNs. In: ICIP. pp. 3091-3095 (2022) +47. Marra, F., Saltori, C., Boato, G., Verdoliva, L.: Incremental learning for the detection and classification of GAN-generated images. In: WIFS. pp. 1-6 (2019) +48. Midjourney: https://www.midjourney.com/home (2023) +49. Mitchell, E., Lee, Y., Khazatsky, A., Manning, C.D., Finn, C.: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. In: ICML. pp. 24950-24962 (2023) +50. Nichol, A.Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., Chen, M.: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. In: ICML. pp. 16784-16804 (2022) +51. Ojha, U., Li, Y., Lee, Y.J.: Towards universal fake image detectors that generalize across generative models. In: CVPR. pp. 24480-24489 (2023) +52. OpenAI: https://openai.com/dall-e-3 (2023) +53. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: CVPR. pp. 2337-2346 (2019) +54. Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: ICCV. pp. 4195-4205 (2023) +55. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: SDXL: Improving latent diffusion models for high-resolution image synthesis. In: ICLR (2024) + +56.
Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML. pp. 8748-8763 (2021) +57. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022) +58. Reed, S.E., van den Oord, A., Kalchbrenner, N., Colmenarejo, S.G., Wang, Z., Chen, Y., Belov, D., de Freitas, N.: Parallel multiscale autoregressive density estimation. In: ICML. pp. 2912-2921 (2017) +59. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the detection of diffusion model deepfakes. In: VISAPP. pp. 446-457 (2024) +60. Ricker, J., Lukovnikov, D., Fischer, A.: AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error. In: CVPR. pp. 9130-9140 (2024) +61. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR. pp. 10684-10695 (2022) +62. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/CompVis/stable-diffusion (2022) +63. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/Stability-AI/stablediffusion (2022) +64. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M.: Faceforensics++: Learning to detect manipulated facial images. In: ICCV. pp. 1-11 (2019) +65. Sarkar, A., Mai, H., Mahapatra, A., Lazebnik, S., Forsyth, D.A., Bhattad, A.: Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry... for now. In: CVPR. pp. 28140-28149 (2024) +66. Schuhmann, C., Kaczmarczyk, R., Komatsuzaki, A., Katta, A., Vencu, R., Beaumont, R., Jitsev, J., Coombes, T., Mullis, C.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. In: NeurIPS (2021) +67. Sha, Z., Li, Z., Yu, N., Zhang, Y.: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. In: ACM SIGSAC. pp. 3418-3432 (2023) +68. Sinitsa, S., Fried, O.: Deep Image Fingerprint: Towards Low Budget Synthetic Image Detection and Model Lineage Analysis. In: WACV. pp. 4067-4076 (2024) +69. Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J.W., Kreps, S., et al.: Release Strategies and the Social Impacts of Language Models. arXiv preprint arXiv:1908.09203 (2019) +70. Su, J., Zhuo, T.Y., Wang, D., Nakov, P.: DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text. In: Conference on Empirical Methods in Natural Language Processing (2023) +71. Tan, C., Zhao, Y., Wei, S., Gu, G., Liu, P., Wei, Y.: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. In: CVPR. pp. 28130-28139 (2024) +72. Tan, C., Zhao, Y., Wei, S., Gu, G., Wei, Y.: Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection. In: CVPR. pp. 12105-12114 (2023) +73. Tao, M., Bao, B.K., Tang, H., Xu, C.: Galip: Generative adversarial clips for text-to-image synthesis. In: CVPR. pp. 14214-14223 (2023) +74. Wang, S.Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: CNN-generated images are surprisingly easy to spot... for now. In: CVPR. pp. 8692-8701 (2020) + +75. Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. ICCV pp. 22445-22455 (2023) +76. 
Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. In: ICLR (2023) +77. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015) +78. Zhang, X., Karaman, S., Chang, S.F.: Detecting and Simulating Artifacts in GAN Fake Images. In: WIFS. pp. 1-6 (2019) +79. Zhong, N., Xu, Y., Qian, Z., Zhang, X.: Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection. arXiv preprint arXiv:2311.12397v1 (2023) \ No newline at end of file diff --git a/2024/Zero-Shot Detection of AI-Generated Images/images.zip b/2024/Zero-Shot Detection of AI-Generated Images/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..01552636c7920e297c8c490c500aa07267a88460 --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b6258ec026e81b040ea2cee9405b062b568ed5c215de198a1480fa38a4264b1 +size 610939 diff --git a/2024/Zero-Shot Detection of AI-Generated Images/layout.json b/2024/Zero-Shot Detection of AI-Generated Images/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e057782f0583805b19a894273a4e54cc333a6f30 --- /dev/null +++ b/2024/Zero-Shot Detection of AI-Generated Images/layout.json @@ -0,0 +1,9920 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 149, + 112, + 464, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 112, + 464, + 129 + ], + "spans": [ + { + "bbox": [ + 149, + 112, + 464, + 129 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "spans": [ + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "text", + "content": "Davide Cozzolino" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "text", + "content": ", Giovanni Poggi" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "text", + "content": ", Matthias Nießner" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "text", + "content": ", and Luisa Verdoliva" + }, + { + "bbox": [ + 160, + 149, + 453, + 174 + ], + "type": "inline_equation", + "content": "^{1,2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 195, + 183, + 418, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 183, + 418, + 196 + ], + "spans": [ + { + "bbox": [ + 195, + 183, + 418, + 196 + ], + "type": "text", + "content": "1 University Federico II of Naples, 80125 Naples, Italy" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 166, + 196, + 447, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 196, + 447, + 217 + ], + "spans": [ + { + "bbox": [ + 166, + 196, + 447, + 217 + ], + "type": "text", + "content": "2 Technical University of Munich, 85748 Garching, Germany {davide.cozzolino, poggi, verdoliv}@unina.it, niessner@tum.de" + } + ] + } + ], + "index": 3 + }, + { + 
"bbox": [ + 160, + 240, + 455, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 240, + 455, + 483 + ], + "spans": [ + { + "bbox": [ + 160, + 240, + 455, + 483 + ], + "type": "text", + "content": "Abstract. Detecting AI-generated images has become an extraordinarily difficult challenge as new generative architectures emerge on a daily basis with more and more capabilities and unprecedented realism. New versions of many commercial tools, such as DALL-E, Midjourney, and Stable Diffusion, have been released recently, and it is impractical to continually update and retrain supervised forensic detectors to handle such a large variety of models. To address this challenge, we propose a zero-shot entropy-based detector (ZED) that neither needs AI-generated training data nor relies on knowledge of generative architectures to artificially synthesize their artifacts. Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images. To this end, we rely on a lossless image encoder that estimates the probability distribution of each pixel given its context. To ensure computational efficiency, the encoder has a multi-resolution architecture and contexts comprise mostly pixels of the lower-resolution version of the image. Since only real images are needed to learn the model, the detector is independent of generator architectures and synthetic training data. Using a single discriminative feature, the proposed detector achieves state-of-the-art performance. On a wide variety of generative models it achieves an average improvement of more than " + }, + { + "bbox": [ + 160, + 240, + 455, + 483 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 160, + 240, + 455, + 483 + ], + "type": "text", + "content": " over the SoTA in terms of accuracy. Code is available at https://grip-unina.github.io/ZED/." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 499, + 230, + 511 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 499, + 230, + 511 + ], + "spans": [ + { + "bbox": [ + 132, + 499, + 230, + 511 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 521, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 521, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 521, + 482, + 666 + ], + "type": "text", + "content": "The quality of AI-generated images has improved tremendously in recent years, to the point where they are virtually indistinguishable from real images upon visual inspection. In addition, the latest generators are widely available online and allow easy creation and retouching of images based on simple textual prompts. All this opens the way to endless application opportunities in a variety of fields, from the creative arts to industries of all kinds. However, on the flip side, such tools can be also used for malicious purposes, thus posing serious threats to our society. For example, pre-trained generators can be easily optimized to generate fake works by a specific artist [31], or used to orchestrate effective, large-scale disinformation campaigns to influence public opinion in advanced democracies [20]. These immediate risks create an urgent need for reliable and automated detection of AI-generated images [41]." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 137, + 116, + 358, + 203 + ], + "blocks": [ + { + "bbox": [ + 137, + 116, + 358, + 203 + ], + "lines": [ + { + "bbox": [ + 137, + 116, + 358, + 203 + ], + "spans": [ + { + "bbox": [ + 137, + 116, + 358, + 203 + ], + "type": "image", + "image_path": "1861bae1c211181a4ebb9c70feb93a8a2ecf71a22074b8febb69e2f5c4f61f21.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 214, + 482, + 291 + ], + "lines": [ + { + "bbox": [ + 130, + 214, + 482, + 291 + ], + "spans": [ + { + "bbox": [ + 130, + 214, + 482, + 291 + ], + "type": "text", + "content": "Fig. 1: ZED leverages the intrinsic model of real images learned by a state-of-the-art lossless image coder. For real images, the model is correct and the actual coding cost is close its expected value. Synthetic images have different statistics than real images, so they \"surprise\" the encoder, and the actual coding cost differs significantly from its expected value. This is evident from the graphic on the right that shows how the coding cost gap increases for synthetic images much more than for real ones when predicting high resolution details from low resolution data." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 377, + 115, + 478, + 205 + ], + "blocks": [ + { + "bbox": [ + 377, + 115, + 478, + 205 + ], + "lines": [ + { + "bbox": [ + 377, + 115, + 478, + 205 + ], + "spans": [ + { + "bbox": [ + 377, + 115, + 478, + 205 + ], + "type": "image", + "image_path": "7df157d8f47b6ef8c4e992f84e6981c61fe476db9c268abc7921986a937978cf.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 304, + 482, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 304, + 482, + 532 + ], + "spans": [ + { + "bbox": [ + 130, + 304, + 482, + 532 + ], + "type": "text", + "content": "Until very recently, supervised learning paradigms dominated the image forensics community, with deep models trained on large datasets of real and fake images [64]. These approaches, however, are tailored to specific domains and are difficult to generalize to unseen deepfake samples. In the seminal paper by Wang et al. [74], it is shown that a simple detector trained only on ProGAN images from 20 different categories generalizes well to other images created by different generative adversarial networks (GAN) thanks to suitable augmentation. However, performance still suffers on images generated by prompt-driven diffusion models (DM). Similarly, a detector suitably trained on Latent DM images performs well on all other DM images but fails to generalize properly on GAN images [10]. To reduce the dependence on training data, recent works [2, 11, 51, 67] rely on general-purpose features extracted by pre-trained visual-language models, such as CLIP (Contrastive Language-Image Pre-Training) [56]. Despite the good performance, these methods still depend on the choice of the training dataset. A recent trend to improve generalization is based on few-shot methods [12, 17, 33] which can partially solve the problem, but still require some prior knowledge of the target models, even if limited to a few images. 
With this work we make a step further and develop an approach that is not influenced at all by newer and previously unseen generative models." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 533, + 483, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 483, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 483, + 666 + ], + "type": "text", + "content": "To this end, we propose a zero-shot detection method that only requires real images for learning their underlying distribution. Our key idea is to use lossless coding and a multi-resolution prediction strategy for computing conditional distributions of all image pixels at three different levels of resolution. Given such distributions, we compute statistics related to the actual and expected coding cost. If the image is coherent with the predicted distribution (no surprise), then there is no mismatch and the image under analysis is labelled as real. We expect synthetic images to be characterized by a higher coding cost under the distribution of real images (see Fig. 1). Based on this intuition, we design discriminative features that measure how well the image under test fits the model of real images embedded in the encoder. Even by using a single feature, we can obtain" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "spans": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "text", + "content": "significant performance above " + }, + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "text", + "content": " in terms of AUC for several recent models, such as DALL·E, Midjourney, and SDXL." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 146, + 140, + 443, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 140, + 443, + 152 + ], + "spans": [ + { + "bbox": [ + 146, + 140, + 443, + 152 + ], + "type": "text", + "content": "In summary, the main contributions of this paper are the following:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 163, + 479, + 258 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 138, + 163, + 479, + 197 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 163, + 479, + 197 + ], + "spans": [ + { + "bbox": [ + 138, + 163, + 479, + 197 + ], + "type": "text", + "content": "- we propose a zero-shot detector of artificially generated images: no fake images are necessary for training which guarantees independence from any specific generation method;" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 199, + 479, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 199, + 479, + 221 + ], + "spans": [ + { + "bbox": [ + 138, + 199, + 479, + 221 + ], + "type": "text", + "content": "- this is the first work that exploits an implicit model of real images, learnt for lossless encoding to address image forensics task;" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 224, + 479, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 224, + 479, + 258 + ], + "spans": [ + { + "bbox": [ + 138, + 224, + 479, + 258 + ], + "type": "text", + "content": "- our experiments show on a wide variety of generative models that even using a single feature the proposed detector provides state-of-the-art results " + }, + { + "bbox": [ + 138, + 224, + 479, + 258 + ], + "type": "inline_equation", + "content": "(+3.4\\%" + }, + { + "bbox": [ + 138, + 224, + 479, + 258 + ], + "type": "text", + "content": " in terms of accuracy)." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 131, + 281, + 233, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 281, + 233, + 293 + ], + "spans": [ + { + "bbox": [ + 131, + 281, + 233, + 293 + ], + "type": "text", + "content": "2 Related work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 300, + 481, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 300, + 481, + 467 + ], + "spans": [ + { + "bbox": [ + 130, + 300, + 481, + 467 + ], + "type": "text", + "content": "Supervised learning. The problem of distinguishing synthetic images from real ones is commonly formulated as a binary classification task. State-of-the-art methods explicitly or implicitly exploit forensic artifacts by leveraging a large amount of real and generated images. Some of them rely on semantic flaws, such as face asymmetries [4] or incorrect perspective, lighting, shadows [21, 22, 65]. However, technology advances very quickly and such errors will very likely disappear in next-generation tools. Therefore, most methods focus on low-level and inconspicuous artifacts [9, 18]. Major efforts have been made to prevent conventional supervised detectors from overfitting the training data. 
Popular recipes include using datasets as varied as possible with intense augmentation [74], pre-training models on large general-purpose datasets [46], preserving fine-grain details of images [7, 27], exploiting high-frequency artifacts in the spatial [43, 68, 72] or Fourier domain [18, 24, 78], leveraging inter-pixel correlation discrepancies [71, 79], adopting inversion techniques [1, 75]." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 468, + 481, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 468, + 481, + 575 + ], + "spans": [ + { + "bbox": [ + 130, + 468, + 481, + 575 + ], + "type": "text", + "content": "With the advent of diffusion models that presents significant architectural differences with GANs, the importance to design methods that work equally well on known and unknown sources became even more evident [10]. An important finding was the increased generalization that could be achieved using pre-trained large vision-language models, such as CLIP-ViT [51]. In this case only a lightweight linear classifier is trained on top of these features to adapt to the forensic task. Very good performance is obtained on DMs even if the network was trained only on GANs. Other methods also show the potential of such approach [2, 11, 59], sometimes including multimodal features [44, 67]." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 576, + 481, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 576, + 481, + 672 + ], + "spans": [ + { + "bbox": [ + 130, + 576, + 481, + 672 + ], + "type": "text", + "content": "Some supervised methods assume to have only real images available and create the synthetic images needed for training by simulating the artifacts introduced by a generator, for example by passing real images through an autoencoder [24,34,78]. The more generative architectures are simulated, the more effective is the detector. Of course, the performance degrades on images generated by an architecture not considered in the simulation phase. Differently from all these methods our approach does not require collecting or generating synthetic images thus avoiding any type of dependence on this class." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": "Few-shot/incremental learning. A significant step towards improved generalization is the use of few-shot or incremental learning strategies [12, 17, 33, 47]. Along this path, a recent work [19] proposes to regularly re-train a detector on new synthetic generators in the very same temporal order of their release, as in a real-world scenario. 
Results show a good generalization to unseen models, but only as long as the architecture of new generators is similar to that of old ones. Although few-shot methods represent an important progress in reducing the dependence on training data, the ultimate goal is to remove this dependence entirely to ensure maximum generalization. In pursuit of this goal, in this work we propose a truly zero-shot detector." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 246, + 482, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 246, + 482, + 486 + ], + "spans": [ + { + "bbox": [ + 130, + 246, + 482, + 486 + ], + "type": "text", + "content": "Zero-shot learning. Only a few very recent papers avoid training on synthetic data altogether. A solution was proposed in [60] based on the observation that synthetic images are reconstructed more accurately than real images by a latent DM autoencoder. The main limitation is that the method only reliably detects images generated by latent diffusion models. The method in [30], instead, exploits the fact that small perturbations of [real/synthetic] images correspond to [small/large] variations in the embedding space of a pre-trained large model. Differently from these strategies our work takes inspiration from some interesting proposals that have recently appeared for synthetic text detection [25,29,49,69]. They exploit the fact that LLMs (Large Language Models) work by generating the probability distribution of the next token given the previous ones. In the generation phase, new tokens are sequentially added to a sentence based on these distributions. In the analysis phase, one can replicate the process for a given sentence under test and measure how well the actual tokens match the predicted ones. A good match suggests that the sentence was indeed generated by an LLM. Although inspired by these methods, our zero-shot synthetic image detector differs from them because it leverages a model of real images and does not depend in any way on synthetic data or generators. Moreover, to build the model we take advantage of the remarkable field-proved ability of lossless encoders to accurately describe pixels based on their context." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 504, + 202, + 517 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 504, + 202, + 517 + ], + "spans": [ + { + "bbox": [ + 132, + 504, + 202, + 517 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 530, + 222, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 530, + 222, + 544 + ], + "spans": [ + { + "bbox": [ + 132, + 530, + 222, + 544 + ], + "type": "text", + "content": "3.1 Background" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "spans": [ + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "text", + "content": "Here we provide some background on zero-shot methods that leverage large pre-trained language models for machine-generated text detection. They exploit the native functionality of these models to provide next-token predictions [29]. 
Before a string of characters " + }, + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "text", + "content": " can be processed by a language model, it must be parsed into a sequence of tokens (mostly words). The tokenizer " + }, + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 130, + 551, + 482, + 621 + ], + "type": "text", + "content": " outputs a list of indices" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 252, + 623, + 481, + 636 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 623, + 481, + 636 + ], + "spans": [ + { + "bbox": [ + 252, + 623, + 481, + 636 + ], + "type": "interline_equation", + "content": "T: s \\rightarrow \\left\\{x _ {0}, x _ {1}, \\dots , x _ {L} \\right\\}, \\tag {1}", + "image_path": "fd114bb685f23b20ec3c62ca7faf325728d04cff2b5d6d006e76c94893a5b244.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "inline_equation", + "content": "x_{i} \\in \\{1, \\dots, n\\}" + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": " is the index of the " + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": "-th token of the sequence, addressing a size- " + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": " vocabulary of tokens. The language model operates by predicting the next" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 102 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": "index-token given the list of previous ones, thereby allowing for the generation of a full sentence given just a short prompt. Actually, language models output more information than just the index of the most likely token. 
Given the list of previous indices " + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "inline_equation", + "content": "X_{i} = \\{x_{0},\\ldots ,x_{i - 1}\\}" + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": ", they provide the probability of all possible values of the current one, that is, " + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "inline_equation", + "content": "P(x_{i} = k|X_{i})" + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": ", for " + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "inline_equation", + "content": "k = 1,\\dots ,n" + }, + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 176, + 479, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 176, + 479, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 176, + 479, + 236 + ], + "type": "text", + "content": "The idea is to exploit this functionality to measure the conformity of the string under analysis to the LLM intrinsic model of language. That is, these methods try to answer the question \"How likely is it that this sentence was generated by my LLM?\" Hence they compute (for free) the likelihood of the given list of indices under the probability distribution learned by the LLM" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 245, + 479, + 289 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 245, + 479, + 289 + ], + "spans": [ + { + "bbox": [ + 132, + 245, + 479, + 289 + ], + "type": "interline_equation", + "content": "P \\left(x _ {0}, \\dots , x _ {L}\\right) = P \\left(x _ {0}\\right) \\cdot P \\left(x _ {1} \\mid x _ {0}\\right) \\cdot \\dots \\cdot P \\left(x _ {L} \\mid x _ {0}, \\dots , x _ {L - 1}\\right) = P \\left(x _ {0}\\right) \\prod_ {i = 1} ^ {L} P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {2}", + "image_path": "426f966d50316e58bd8b94b27c75f1bfea194e4d9d5bf576afdef11ca7b6946b.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 289, + 479, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 479, + 313 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 479, + 313 + ], + "type": "text", + "content": "In practice, the negative log-likelihood (also called log-perplexity) is computed instead, that is (neglecting " + }, + { + "bbox": [ + 130, + 289, + 479, + 313 + ], + "type": "inline_equation", + "content": "x_0" + }, + { + "bbox": [ + 130, + 289, + 479, + 313 + ], + "type": "text", + "content": ")" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 249, + 322, + 479, + 354 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 249, + 322, + 479, + 354 + ], + "spans": [ + { + "bbox": [ + 249, + 322, + 479, + 354 + ], + "type": "interline_equation", + "content": "\\mathrm {N L L} = - \\sum_ {i = 1} ^ {L} \\log P \\left(x _ {i} \\mid X _ {i}\\right) \\tag {3}", + "image_path": "0ad3460a0683002454f13b78c7a1bf16c9facdee31290b4e8525028b8c67bd50.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "spans": [ + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "type": "text", + "content": "If the " + }, + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 363, + 
480, + 447 + ], + "type": "text", + "content": "-th observed index " + }, + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "type": "inline_equation", + "content": "x_{i}" + }, + { + "bbox": [ + 130, + 363, + 480, + 447 + ], + "type": "text", + "content": " was very likely to come after the previous ones, namely, it is not surprising, its contribution to the NLL is close to 0. On the contrary, if it was unlikely to appear, given the previous ones (an anomaly) it impacts significantly on the NLL. Overall, a sequence with low NLL is likely to have been generated by the LLM, and will be therefore detected as synthetic. Of course, this basic description is only meant to convey the general concepts, the reader is referred to the literature [26] for more details." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 465, + 266, + 478 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 465, + 266, + 478 + ], + "spans": [ + { + "bbox": [ + 131, + 465, + 266, + 478 + ], + "type": "text", + "content": "3.2 From Text to Images" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 486, + 480, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 480, + 616 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 480, + 616 + ], + "type": "text", + "content": "When we try to translate the above concepts into the realm of images, we run into a big problem: the most effective and popular image generation engines do not provide anything similar to the next token distribution observed in the case of LLMs. Indeed, there exist some autoregressive synthesis methods [45,58] that could be adapted to this task, but their generation approach is very different from those of the most popular GAN- and DM-based methods. Therefore in this work we change perspective or, better said, we now assume the correct one-class perspective, and look for a model of real images, rather than synthetic ones. Armed with such a model, we will be able to decide whether a given image is unsurprising, therefore real, or somewhat anomalous, therefore synthetic, regardless of the specific generation model used to create it." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 617, + 480, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 480, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 480, + 666 + ], + "type": "text", + "content": "Now, the concepts of prediction, surprise, perplexity, along with information measure and entropy, are pervasive in the literature on image coding, part of information theory. 
Lossless image encoders typically include a predictor that, given a suitable context, estimates the value of the target pixel, and an entropy" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 479, + 99 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 479, + 99 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 479, + 99 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 177 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 177 + ], + "type": "text", + "content": "encoder that efficiently represents prediction errors. Indeed, by analyzing the recent literature in the field we managed to single out a tool that perfectly suits our needs, the Super-Resolution based lossless Compressor (SReC) proposed by Cao et al. [6], which provides a computationally lightweight tool for predicting the distribution of image pixels at multiple resolution." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 131, + 194, + 382, + 206 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 194, + 382, + 206 + ], + "spans": [ + { + "bbox": [ + 131, + 194, + 382, + 206 + ], + "type": "text", + "content": "3.3 Super-resolution based Lossless Compressor" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "spans": [ + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "content": "Here we present a high-level description of SReC, focusing only on the aspects more relevant for our purposes. The interested reader is referred to the original paper for details [6]. The general idea is to train a neural network to predict the current pixel, " + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "inline_equation", + "content": "x_{i,j}" + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "content": ", given a set of previously coded pixels, and encode the difference between the true pixel value and its prediction. However, this purely autoregressive formulation is highly impractical, as it implies long encoding/decoding times. Therefore, SReC uses a multi-resolution prediction strategy. 
A low-resolution version " + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "inline_equation", + "content": "y^{(1)}" + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "content": " of the original image " + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "inline_equation", + "content": "x^{(0)}" + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "content": " is built through " + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "inline_equation", + "content": "2\\times 2" + }, + { + "bbox": [ + 130, + 214, + 482, + 323 + ], + "type": "text", + "content": " average pooling, that is" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 205, + 331, + 481, + 360 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 331, + 481, + 360 + ], + "spans": [ + { + "bbox": [ + 205, + 331, + 481, + 360 + ], + "type": "interline_equation", + "content": "y _ {i, j} ^ {(1)} = \\frac {x _ {2 i , 2 j} ^ {(0)} + x _ {2 i + 1 , 2 j} ^ {(0)} + x _ {2 i , 2 j + 1} ^ {(0)} + x _ {2 i + 1 , 2 j + 1} ^ {(0)}}{4} \\tag {4}", + "image_path": "f7765849118eb0742d1b3b6a0d5d31cba2337774bc1f0df1b6429e1da5e5e258.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 365, + 482, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 365, + 482, + 427 + ], + "spans": [ + { + "bbox": [ + 130, + 365, + 482, + 427 + ], + "type": "text", + "content": "Then, each four-pixel group of the high-resolution image is predicted based only on the low-resolution image, independent of other groups at the same resolution level, allowing for parallel processing and high-speed encoding. Since the fourth pixel of a group is known, given the other three and the low resolution image, the conditional joint distribution of the group reads" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 144, + 434, + 481, + 470 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 144, + 434, + 481, + 470 + ], + "spans": [ + { + "bbox": [ + 144, + 434, + 481, + 470 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} P \\left(x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, x _ {2 i, 2 j + 1} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) = P \\left(x _ {2 i, 2 j} ^ {(0)} \\mid Y _ {i, j} ^ {(1)}\\right) \\cdot P \\left(x _ {2 i + 1, 2 j} ^ {(0)} \\mid x _ {2 i, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}\\right) \\tag {5} \\\\ \\cdot P (x _ {2 i, 2 j + 1} ^ {(0)} | x _ {2 i, 2 j} ^ {(0)}, x _ {2 i + 1, 2 j} ^ {(0)}, Y _ {i, j} ^ {(1)}) \\\\ \\end{array}", + "image_path": "118d30bfbe1e1ce0c0ad61f923b712c268b4ad6484595b87293d1863382cb56d.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "spans": [ + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "inline_equation", + "content": "Y_{i,j}^{(1)}" + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "text", + "content": " is the relevant context in the lower resolution image, that is a receptive field centered on " + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "inline_equation", + "content": "y_{i,j}^{(1)}" + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "text", + "content": ". 
Each term in this factorization is estimated by a dedicated convolutional neural network (CNN). In particular, a parametric distribution is assumed, given by the mixture of " + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 130, + 480, + 482, + 534 + ], + "type": "text", + "content": " discrete logistic distributions," + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 232, + 543, + 481, + 576 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 232, + 543, + 481, + 576 + ], + "spans": [ + { + "bbox": [ + 232, + 543, + 481, + 576 + ], + "type": "interline_equation", + "content": "P (x | X) = \\sum_ {k = 1} ^ {K} w _ {k} \\operatorname {l o g i s t i c} \\left(x \\mid \\mu_ {k}, s _ {k}\\right) \\tag {6}", + "image_path": "be17b4ad6f3488b705dfbf30547ad9e3e4946ec448c397442e5d9a61d2d665a1.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "spans": [ + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "\\mathrm{logistic}(x|\\mu, s) = \\sigma\\left(\\frac{x - \\mu + 0.5}{s}\\right) - \\sigma\\left(\\frac{x + \\mu + 0.5}{s}\\right)" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": " is the difference of two sigmoid functions, with position parameter " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": " and scale parameter " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "K = 10" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": " is always assumed. The CNN takes the context " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": " of the pixel of interest as input and outputs the weights of the mixture together with the position and scale parameters of all logistics. In turn, these parameters allow one to compute the desired distribution. This whole process is replicated on two more lower-resolution scales, for a total of four levels, the lowest resolution, an " + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "inline_equation", + "content": "8 \\times 8" + }, + { + "bbox": [ + 130, + 580, + 482, + 667 + ], + "type": "text", + "content": " subsampled \"prompt\"" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "text", + "content": "Cozzolino et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 161, + 118, + 471, + 281 + ], + "blocks": [ + { + "bbox": [ + 151, + 160, + 160, + 175 + ], + "lines": [ + { + "bbox": [ + 151, + 160, + 160, + 175 + ], + "spans": [ + { + "bbox": [ + 151, + 160, + 160, + 175 + ], + "type": "text", + "content": "Reale" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 161, + 118, + 471, + 281 + ], + "lines": [ + { + "bbox": [ + 161, + 118, + 471, + 281 + ], + "spans": [ + { + "bbox": [ + 161, + 118, + 471, + 281 + ], + "type": "image", + "image_path": "efc16c4e1e602383bd83a3a98e2d204a0bb468d420e4cb55038d4ab3ccbcebd8.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 289, + 482, + 356 + ], + "lines": [ + { + "bbox": [ + 130, + 289, + 482, + 356 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 482, + 356 + ], + "type": "text", + "content": "Fig. 2: NLL and Entropy. We compute the spatial distribution of NLL and Entropy at three resolutions. For real images (top) the paired maps are very similar at all scales: when the uncertainty on a pixel (entropy) grows, also the coding cost (NLL) does. Therefore, the NLL-Entropy difference maps are all very dark. For synthetic images (bottom) NLL and Entropy maps are not always similar, because the model is not correct, and hence the difference maps are brighter, especially the high-resolution map." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "spans": [ + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "text", + "content": "image, coded in clear, and three higher resolution images, each one predicted from its lower resolution version. All networks are trained to minimize the cross entropy between the predicted model probability " + }, + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "inline_equation", + "content": "P_{\\theta}(x)" + }, + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "text", + "content": " and the empirical data distribution " + }, + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "inline_equation", + "content": "P(x)" + }, + { + "bbox": [ + 130, + 365, + 482, + 425 + ], + "type": "text", + "content": " given by the training image dataset. We mention in passing that this loss is closely related to the log-perplexity considered for text synthesis." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 426, + 482, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 426, + 482, + 509 + ], + "spans": [ + { + "bbox": [ + 130, + 426, + 482, + 509 + ], + "type": "text", + "content": "To summarize, SReC provides us with a lightweight tool for computing conditional distributions of all image pixels at three different levels of resolution, and therefore to compute all kinds of statistics that can expose the mismatch between a test image and the learned model. Considering that SReC achieves state-of-the-art performance in lossless image compression, one can also argue that the learned model of real images is very accurate. Given this tool, we can now design a zero-shot detector of synthetic images." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 525, + 321, + 537 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 525, + 321, + 537 + ], + "spans": [ + { + "bbox": [ + 130, + 525, + 321, + 537 + ], + "type": "text", + "content": "3.4 Features and Decision Statistics" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "x \\in \\{0, \\ldots, 255\\}^{N \\times M \\times 3}" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": " be the image under test. In our multi-resolution framework, this will be the highest-resolution version, " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "x^{(0)} = x" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ". Through " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "2 \\times 2" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": " average pooling, we generate a lower resolution version " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "y^{(1)} = \\mathrm{avpool}(x^{(0)})" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ", and then, through rounding, its integer-valued version " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "x^{(1)} = \\mathrm{round}(y^{(1)})" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ". The process is repeated, and eventually we have four integer versions of the image " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\{x^{(0)}, x^{(1)}, x^{(2)}, x^{(3)}\\}" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ", together with three non-integer versions " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\{y^{(1)}, y^{(2)}, y^{(3)}\\}" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ". In the context of lossless coding, the lowest resolution version, " + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "inline_equation", + "content": "x^{(3)}" + }, + { + "bbox": [ + 130, + 545, + 482, + 666 + ], + "type": "text", + "content": ", must be sent in clear together with the rounding bits at levels 3, 2, and 1, but we mention this only for completeness and for a more compelling interpretation of results. 
The CNNs trained on real images provide the predicted probability distribution" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 116, + 481, + 222 + ], + "blocks": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "lines": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "spans": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "image", + "image_path": "21fdcf61e015902664f93189c579d3c6e08e3d04b3288e0f29544ae1cf64a3df.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "lines": [ + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "spans": [ + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "text", + "content": "Fig. 3: Extracting decision statistics. The full resolution image " + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "inline_equation", + "content": "x^{(0)}" + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "text", + "content": " is downsampled three times. The lowest-resolution version, " + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "inline_equation", + "content": "x^{(3)}" + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "text", + "content": ", feeds the level-2 CNN, which outputs the probability distributions of level-2 pixels. These distributions, together with the actual level-2 pixels, are used to compute the level-2 coding cost " + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "inline_equation", + "content": "\\mathrm{NLL}^{(2)}" + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "text", + "content": " and its expected value " + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "inline_equation", + "content": "H^{(2)}" + }, + { + "bbox": [ + 130, + 232, + 482, + 300 + ], + "type": "text", + "content": ". All these steps are then repeated for levels 1 and 0. Eventually, NLLs and entropies are combined to compute the decision statistics." 
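A minimal sketch of the multi-resolution construction described in Sec. 3.4, assuming the image height and width are divisible by 8 so that 2×2 average pooling can be applied three times; function and variable names are illustrative only.

```python
import numpy as np

def avpool2x2(img):
    """2x2 average pooling over the spatial dimensions of an (H, W, C) array."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def build_pyramid(x0):
    """Return integer levels x^(0)..x^(3) and non-integer levels y^(1)..y^(3)."""
    x_levels = [np.asarray(x0, dtype=np.float64)]
    y_levels = []
    for _ in range(3):
        y_next = avpool2x2(x_levels[-1])      # y^(l+1) = avpool(x^(l))
        y_levels.append(y_next)
        x_levels.append(np.round(y_next))     # x^(l+1) = round(y^(l+1))
    return [xl.astype(np.uint8) for xl in x_levels], y_levels

# e.g. a random 64x64 RGB image yields levels of size 64, 32, 16 and 8
xs, ys = build_pyramid(np.random.randint(0, 256, size=(64, 64, 3)))
print([xl.shape for xl in xs])
```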
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 307, + 282, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 307, + 282, + 319 + ], + "spans": [ + { + "bbox": [ + 130, + 307, + 282, + 319 + ], + "type": "text", + "content": "for all pixels" + }, + { + "bbox": [ + 130, + 307, + 282, + 319 + ], + "type": "inline_equation", + "content": "^3" + }, + { + "bbox": [ + 130, + 307, + 282, + 319 + ], + "type": "text", + "content": " of levels 0, 1, and 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 269, + 325, + 481, + 343 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 269, + 325, + 481, + 343 + ], + "spans": [ + { + "bbox": [ + 269, + 325, + 481, + 343 + ], + "type": "interline_equation", + "content": "P \\left(x _ {i, j} ^ {(l)} = k \\mid X _ {i, j} ^ {(l)}\\right) \\tag {7}", + "image_path": "e6272608514e4adde00060ee3ba1d3cf3a9597ce8697a75ded8e0a00edd5ee4b.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "spans": [ + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "inline_equation", + "content": "k \\in \\{0, \\dots, 255\\}" + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "inline_equation", + "content": "X_{i,j}^{(l)}" + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "content": " is the context for pixel " + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "inline_equation", + "content": "x_{i,j}^{(l)}" + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "content": ", including a portion of the lower-resolution image " + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "inline_equation", + "content": "y^{(l+1)}" + }, + { + "bbox": [ + 130, + 349, + 482, + 402 + ], + "type": "text", + "content": " and possibly some same-resolution neighbors of the current pixel. Given the above distribution, we compute the negative log likelihood and the entropy at each pixel" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 220, + 407, + 480, + 452 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 407, + 480, + 452 + ], + "spans": [ + { + "bbox": [ + 220, + 407, + 480, + 452 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathrm {N L L} _ {i, j} ^ {(l)} = - \\log P (x _ {i, j} ^ {(l)} | X _ {i, j} ^ {(l)}) \\\\ H _ {i, j} ^ {(l)} = - \\sum_ {k} P (k | X _ {i, j} ^ {(l)}) \\log P (k | X _ {i, j} ^ {(l)}) \\tag {8} \\\\ \\end{array}", + "image_path": "7384389b6ff01d42e76498c328976b03e095a8c664f66df5597a4efe10843c1a.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "spans": [ + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "text", + "content": "These quantities are shown in Fig.2 for two sample images, real and synthetic. 
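To make Eqs. (7)–(8) and the subsequent spatial averaging concrete, here is a small sketch that assumes the level-l network has already produced a 256-way distribution for every pixel; array names are illustrative.

```python
import numpy as np

def nll_and_entropy(pred_pmf, x):
    """Per-pixel NLL and entropy maps plus their spatial averages for one level.

    pred_pmf: (H, W, 256) predicted P(k | context) for every pixel.
    x:        (H, W) observed integer pixel values at the same level.
    Returns (NLL map, entropy map, NLL^(l), H^(l)), all in bits.
    """
    p = np.clip(pred_pmf, 1e-12, 1.0)
    rows, cols = np.indices(x.shape)
    nll_map = -np.log2(p[rows, cols, x])          # Eq. (8), first line
    ent_map = -(p * np.log2(p)).sum(axis=-1)      # Eq. (8), second line
    return nll_map, ent_map, float(nll_map.mean()), float(ent_map.mean())
```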
Then, through spatial averaging, we obtain the corresponding quantities for the images at all resolution levels " + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "inline_equation", + "content": "\\mathrm{NLL}^{(l)} = \\langle \\mathrm{NLL}_{i,j}^{(l)}\\rangle" + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "inline_equation", + "content": "H^{(l)} = \\langle H_{i,j}^{(l)}\\rangle" + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "text", + "content": ", for " + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "inline_equation", + "content": "l = 0,1,2" + }, + { + "bbox": [ + 130, + 457, + 482, + 518 + ], + "type": "text", + "content": ". These are the features associated by the system to input image and our decision statistics will be suitable combinations of them." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 519, + 482, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 519, + 482, + 639 + ], + "spans": [ + { + "bbox": [ + 130, + 519, + 482, + 639 + ], + "type": "text", + "content": "Before going on, it is convenient to give a physical interpretation of these quantities. Each NLL can be interpreted as the actual coding cost for the corresponding image. While each entropy can be interpreted as the expected value of the coding cost given the context, when the image is coherent with the predicted distribution. In the presence of a mismatch, " + }, + { + "bbox": [ + 130, + 519, + 482, + 639 + ], + "type": "inline_equation", + "content": "\\mathrm{NLL} - H > 0" + }, + { + "bbox": [ + 130, + 519, + 482, + 639 + ], + "type": "text", + "content": ", on the average, with a gap that increases with increasing distribution mismatch. Our fundamental assumption is that the trained CNNs provide a good model of real images, and synthetic images tend not to follow the same model. Therefore, we expect that synthetic images are characterized by higher coding cost, hence higher NLL, under this distribution. This observation would lead us to use the NLLs as decision" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 139, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 642, + 482, + 666 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 642, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 133, + 642, + 482, + 666 + ], + "type": "text", + "content": "3 More precisely, all color components of all pixels, but to simplify notations, in the following we will neglect color and treat the image as if grayscale." 
+ } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": "statistics. However, the coding cost does not depend only on the distribution mismatch but also (predominantly) on the intrinsic information content of the image, measured by the entropy. A complex image, say a photo of a crowd, is more difficult to encode/describe than a smooth image, say a blue sky, no matter what model we use. Therefore, to get rid of this bias, we consider the coding cost gap, defined as the difference " + }, + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "inline_equation", + "content": "D^{(l)} = \\mathrm{NLL}^{(l)} - H^{(l)}" + }, + { + "bbox": [ + 130, + 116, + 482, + 236 + ], + "type": "text", + "content": ", as decision statistic. Hence, for each image, we have three basic decision statistics, one for each resolution level. It is worth observing that some forms of normalization are adopted for machine generated text detection as well [29, 49, 70]. A block diagram of our method is shown in Fig.3." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "spans": [ + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "content": "A sample graph of the coding cost gap is shown in Fig.1, on the right. For real images and three families of synthetic images we report the average gap (solid line) plus/minus its standard deviation (colored band) for the various resolutions levels. Two important observations can be made. First of all, the level-0 coding cost gap, concerning the full resolution image, seems to be much more discriminant than the others. Moreover, the gap grows much faster for synthetic images than for real images when going from level 1 to level 0. Therefore, as decision statistics we will consider both " + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "content": " (the level-0 coding cost gap) and " + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "inline_equation", + "content": "\\Delta^{01} = D^{(0)} - D^{(1)}" + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "content": " (its slope). In addition, in preliminary experiments we observed that synthetic images are sometimes characterized by a coding cost much lower rather than much higher than expected, that is the NLL is much lower than the entropy. This is also an anomaly, which signals the likely synthetic nature of the image. Therefore, besides the above statistics we also consider their absolute values " + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "inline_equation", + "content": "|D^{(0)}|" + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "inline_equation", + "content": "|\\Delta^{(01)}|" + }, + { + "bbox": [ + 130, + 236, + 482, + 416 + ], + "type": "text", + "content": ". These observations are supported by the sample graphical analysis shown in Fig.5 in the ablation study." 
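Collecting the per-level averages, the decision statistics described above take only a few lines. A hedged sketch, in which the threshold is a free calibration parameter rather than a value from the paper:

```python
def decision_statistics(nll, ent):
    """nll, ent: dicts mapping level l in {0, 1, 2} to NLL^(l) and H^(l)."""
    D = {l: nll[l] - ent[l] for l in (0, 1, 2)}       # coding cost gaps D^(l)
    delta01 = D[0] - D[1]                             # slope between levels 1 and 0
    return {"D0": D[0], "absD0": abs(D[0]),
            "Delta01": delta01, "absDelta01": abs(delta01)}

def flag_synthetic(stats, threshold, key="absDelta01"):
    """Declare the image synthetic when the chosen statistic exceeds the threshold."""
    return stats[key] > threshold
```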
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 432, + 198, + 444 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 432, + 198, + 444 + ], + "spans": [ + { + "bbox": [ + 132, + 432, + 198, + 444 + ], + "type": "text", + "content": "4 Results" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 456, + 269, + 467 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 456, + 269, + 467 + ], + "spans": [ + { + "bbox": [ + 132, + 456, + 269, + 467 + ], + "type": "text", + "content": "4.1 Datasets and Metrics" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "spans": [ + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "content": "We benchmarked our model on a large variety of synthetic generators both GANs and DMs: GauGAN [53], BigGAN [5], StarGAN [8], StyleGAN2 [38], DiffusionGAN [76], GigaGAN [35], GALIP [73], DDPM [32], ADM [16], GLIDE [50], Stable Diffusion [62, 63], DiT [54], DeepFloyd-IF [39], Stable Diffusion XL [55], DALL-E [14], DALL-E 2 [57], DALL-E 3 [52], Midjourney V5 [48], and Adobe Firefly [23]. We collected images from publicly available datasets [3,10,51,74] and generated additional images as needed when they were not publicly available. We ensured that all datasets included pristine and synthetic images with similar semantic content, both compressed and uncompressed, to avoid any kind of bias (see Fig.4). For some synthetic generators we have multiple datasets, built on the basis of different real image datasets LSUN [77], FFHQ [37], ImageNet [15], COCO [42], LAION [66] and RAISE [13]. This is a fortunate circumstance: we kept them carefully separate as this allows us to analyze how the performance of a detector depends on the class of real images used in the synthesis phase. Overall we used a total of " + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "inline_equation", + "content": "29\\mathrm{k}" + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "content": " synthetic images and " + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "inline_equation", + "content": "6\\mathrm{k}" + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "content": " real images. More details on the generated and actual images are provided in the supplementary material." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 167, + 124, + 234, + 192 + ], + "blocks": [ + { + "bbox": [ + 192, + 117, + 209, + 123 + ], + "lines": [ + { + "bbox": [ + 192, + 117, + 209, + 123 + ], + "spans": [ + { + "bbox": [ + 192, + 117, + 209, + 123 + ], + "type": "text", + "content": "LSUN" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 167, + 124, + 234, + 192 + ], + "lines": [ + { + "bbox": [ + 167, + 124, + 234, + 192 + ], + "spans": [ + { + "bbox": [ + 167, + 124, + 234, + 192 + ], + "type": "image", + "image_path": "231219b8aa647713db5823eb166fc61ec2b1b695db14bba71ce64e96e2058439.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 238, + 124, + 306, + 192 + ], + "blocks": [ + { + "bbox": [ + 263, + 117, + 280, + 123 + ], + "lines": [ + { + "bbox": [ + 263, + 117, + 280, + 123 + ], + "spans": [ + { + "bbox": [ + 263, + 117, + 280, + 123 + ], + "type": "text", + "content": "FFHQ" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 238, + 124, + 306, + 192 + ], + "lines": [ + { + "bbox": [ + 238, + 124, + 306, + 192 + ], + "spans": [ + { + "bbox": [ + 238, + 124, + 306, + 192 + ], + "type": "image", + "image_path": "bb239cbf94a53ccb942aec78d1e3ee4b36954e1eb7f2c502e43523006d518b25.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 309, + 124, + 376, + 192 + ], + "blocks": [ + { + "bbox": [ + 328, + 117, + 357, + 124 + ], + "lines": [ + { + "bbox": [ + 328, + 117, + 357, + 124 + ], + "spans": [ + { + "bbox": [ + 328, + 117, + 357, + 124 + ], + "type": "text", + "content": "ImageNet" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 309, + 124, + 376, + 192 + ], + "lines": [ + { + "bbox": [ + 309, + 124, + 376, + 192 + ], + "spans": [ + { + "bbox": [ + 309, + 124, + 376, + 192 + ], + "type": "image", + "image_path": "4a115c6e880666decef07509d79bc9ba88b0390e613c14fe5dc880a36a060486.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 378, + 124, + 446, + 192 + ], + "blocks": [ + { + "bbox": [ + 403, + 117, + 421, + 123 + ], + "lines": [ + { + "bbox": [ + 403, + 117, + 421, + 123 + ], + "spans": [ + { + "bbox": [ + 403, + 117, + 421, + 123 + ], + "type": "text", + "content": "COCO" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 378, + 124, + 446, + 192 + ], + "lines": [ + { + "bbox": [ + 378, + 124, + 446, + 192 + ], + "spans": [ + { + "bbox": [ + 378, + 124, + 446, + 192 + ], + "type": "image", + "image_path": 
"b7bfe6362a66142dbd2a2f70ca76f862faa3a7c5bee546a67beb53a1ed0ef7d0.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 167, + 194, + 234, + 261 + ], + "blocks": [ + { + "bbox": [ + 167, + 194, + 234, + 261 + ], + "lines": [ + { + "bbox": [ + 167, + 194, + 234, + 261 + ], + "spans": [ + { + "bbox": [ + 167, + 194, + 234, + 261 + ], + "type": "image", + "image_path": "9e37f264c6fba4e67044aa0bce9e3ee7cc4a3416751b1ccf134f277112e9f7a6.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 180, + 262, + 221, + 269 + ], + "lines": [ + { + "bbox": [ + 180, + 262, + 221, + 269 + ], + "spans": [ + { + "bbox": [ + 180, + 262, + 221, + 269 + ], + "type": "text", + "content": "Diffusion-GAN" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 238, + 194, + 306, + 261 + ], + "blocks": [ + { + "bbox": [ + 238, + 194, + 306, + 261 + ], + "lines": [ + { + "bbox": [ + 238, + 194, + 306, + 261 + ], + "spans": [ + { + "bbox": [ + 238, + 194, + 306, + 261 + ], + "type": "image", + "image_path": "943fc5ca86f34081a895e86e759e04053730a83344509087c558dbc13ff9aad0.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 256, + 262, + 287, + 269 + ], + "lines": [ + { + "bbox": [ + 256, + 262, + 287, + 269 + ], + "spans": [ + { + "bbox": [ + 256, + 262, + 287, + 269 + ], + "type": "text", + "content": "StyleGAN2" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 308, + 194, + 376, + 261 + ], + "blocks": [ + { + "bbox": [ + 308, + 194, + 376, + 261 + ], + "lines": [ + { + "bbox": [ + 308, + 194, + 376, + 261 + ], + "spans": [ + { + "bbox": [ + 308, + 194, + 376, + 261 + ], + "type": "image", + "image_path": "83d9e8f5b5175e4b859f3d3a120ac90a5d575b32bbc9550e813c73b4bd92c395.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 337, + 262, + 348, + 269 + ], + "lines": [ + { + "bbox": [ + 337, + 262, + 348, + 269 + ], + "spans": [ + { + "bbox": [ + 337, + 262, + 348, + 269 + ], + "type": "text", + "content": "DiT" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 130, + 280, + 480, + 313 + ], + "lines": [ + { + "bbox": [ + 130, + 280, + 480, + 313 + ], + "spans": [ + { + "bbox": [ + 130, + 280, + 480, + 313 + ], + "type": "text", + "content": "Fig. 4: Examples of real and AI-generated images of different categories used in our experiments. Top: real images from LSUN, FFHQ, ImageNET and COCO. Bottom: generated images from DiffusionGAN, StyleGAN2, DiT and SDXL." 
+ } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 378, + 194, + 446, + 261 + ], + "blocks": [ + { + "bbox": [ + 378, + 194, + 446, + 261 + ], + "lines": [ + { + "bbox": [ + 378, + 194, + 446, + 261 + ], + "spans": [ + { + "bbox": [ + 378, + 194, + 446, + 261 + ], + "type": "image", + "image_path": "11001dc1a4c83fd4e760e098c0c48c0af1508c6ba61af9b46c97fd33aba88f7f.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 405, + 262, + 419, + 269 + ], + "lines": [ + { + "bbox": [ + 405, + 262, + 419, + 269 + ], + "spans": [ + { + "bbox": [ + 405, + 262, + 419, + 269 + ], + "type": "text", + "content": "SDXL" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 338, + 480, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 338, + 480, + 374 + ], + "spans": [ + { + "bbox": [ + 130, + 338, + 480, + 374 + ], + "type": "text", + "content": "Following other papers [11, 43, 51] we measure performance using the area under the ROC curve (AUC) and the balanced accuracy. We also show the influence of the threshold selection on the performance." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 131, + 392, + 237, + 404 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 392, + 237, + 404 + ], + "spans": [ + { + "bbox": [ + 131, + 392, + 237, + 404 + ], + "type": "text", + "content": "4.2 Ablation Study" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": "Features analysis. First, we want to provide a better insight into the role and importance of the features described in Section 3.4: " + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": " (the 0-level coding cost gap), its slope " + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "\\varDelta^{01} = D^{(0)} - D^{(1)}" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": " and their absolute values. To this end, we consider the set of real and synthetic (DALL-E 2, GLIDE, Midjourney, SDXL) images of the Synthbuster dataset [3]. We note, in passing, that this dataset includes only uncompressed images, which dispels any possible doubt that our method exploits some JPEG compression bias between real and fake images [28]. Some selected scatter plots and graphs are shown in Fig.5. The rightmost box shows that encoding cost (NLL) and entropy (" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": ") alone are not very discriminating, even if computed at the more informative level 0 (high resolution). 
In contrast, their difference, the 0-level coding cost gap " + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": ", seems to separate the different classes quite well (central box), in particular the real class (violet) from the others. Note that the level-1 gap (not shown) is not equally discriminating, and the level-2 gap, plotted on the " + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": " axis, turns out to be essentially useless. In the third box we plot the empirical distributions of " + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 414, + 482, + 665 + ], + "type": "text", + "content": " for the various classes. This representation makes the good separability of the classes further clear but also highlights an unexpected phenomenon: GLIDE images group mostly to the left of the real class, that is, they have a lower-than-expected coding cost. Although not in line with our initial hypotheses, this fact nevertheless represents an anomaly, which can be detected by thresholding the absolute value of the statistic rather than the statistic itself." + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 143, + 114, + 470, + 233 + ], + "blocks": [ + { + "bbox": [ + 143, + 114, + 470, + 233 + ], + "lines": [ + { + "bbox": [ + 143, + 114, + 470, + 233 + ], + "spans": [ + { + "bbox": [ + 143, + 114, + 470, + 233 + ], + "type": "image", + "image_path": "e0d1f90588a1d2d1fe7366bc64d08cf8c2465ccdafa765b49781168c5e54eaaf.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "lines": [ + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "spans": [ + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": "Fig. 5: Decision statistics. NLL and entropy by themselves are not discriminant (left). 
Their difference (center) is much more useful for detection, but only at high resolution, " + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": ", while " + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "inline_equation", + "content": "D^{(1)}" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": " is less discriminant and " + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "inline_equation", + "content": "D^{(2)}" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": " basically useless. Right box shows histograms of " + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": " for real and synthetic images. Note that for GLIDE, " + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "inline_equation", + "content": "D^{(0)}" + }, + { + "bbox": [ + 130, + 243, + 480, + 299 + ], + "type": "text", + "content": " is negative, on the average. Good discrimination is still possible based on the absolute value." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 175, + 316, + 455, + 479 + ], + "blocks": [ + { + "bbox": [ + 175, + 316, + 455, + 479 + ], + "lines": [ + { + "bbox": [ + 175, + 316, + 455, + 479 + ], + "spans": [ + { + "bbox": [ + 175, + 316, + 455, + 479 + ], + "type": "image", + "image_path": "9decf8c6e1180ab5e73dda0f803d59989cc177f1486b60612b4544c92cec3c53.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 487, + 482, + 536 + ], + "lines": [ + { + "bbox": [ + 130, + 487, + 482, + 536 + ], + "spans": [ + { + "bbox": [ + 130, + 487, + 482, + 536 + ], + "type": "text", + "content": "Fig. 6: AUC of proposed method as a function of decision statistic (see Section 3.4) and dataset of real images used to train the lossless encoder: Open Images, LAION, COCO, and their augmented versions " + }, + { + "bbox": [ + 130, + 487, + 482, + 536 + ], + "type": "inline_equation", + "content": "(^{*})" + }, + { + "bbox": [ + 130, + 487, + 482, + 536 + ], + "type": "text", + "content": ". Synthetic test images are selected to match the corresponding real test images: ImageNet (top), and LAION (bottom)." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "content": "Influence of the real class. To better understand the role of the real dataset used to train the lossless encoder, we perform an experiment in which we vary it. 
Along with the original encoder pre-trained on the Open Images dataset [40] (about 338k high-resolution images), we consider two other versions, trained from scratch on the LAION dataset [66] (" + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\simeq 117\\mathrm{k}" + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "content": "), and the COCO dataset [42] (" + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\simeq 106\\mathrm{k}" + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "content": "), respectively, using the same hyperparameters as [6]. Additionally, we consider versions (marked with *) trained on the same datasets, augmented with JPEG compressed images with quality between 80 and 100. We compute the performance in terms of AUC on two different datasets of synthetic and" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 157, + 480, + 289 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 149 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 149 + ], + "type": "text", + "content": "Table 1: Reference methods. For each one we indicate the key idea, the datasets of real and synthetic images used for training with their sizes, whether or not augmentation is used, the test strategy." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 157, + 480, + 289 + ], + "lines": [ + { + "bbox": [ + 133, + 157, + 480, + 289 + ], + "spans": [ + { + "bbox": [ + 133, + 157, + 480, + 289 + ], + "type": "table", + "html": "
Acronym [ref] | Idea/Approach | Training Real/Fake | Size (K) | Augment. | Test Strategy
Wang2020 [74] | High diversity | LSUN/ProGAN | 360/360 | | global pooling
PatchFor. [7] | Patch-based | CelebA,FF/various | 84/272 | | resizing
Liu2022 [43] | Noise-based | LSUN/ProGAN | 360/360 | | global pooling
Corvi2023 [10] | No-downsampling | COCO,LSUN/Latent | 180/180 | | global pooling
LGrad [72] | Gradient-based | LSUN/ProGAN | 72/72 | | resizing
DIRE [75] | Inversion | LSUN-Bed/ADM | 40/40 | | resizing
DE-FAKE [67] | Prompt-based | LSUN/Stable Diff. | 20/20 | | resizing
Ojha2023 [51] | CLIP | LSUN/ProGAN | 360/360 | | cropping
NPR [71] | Residual | LSUN/ProGAN | 72/72 | | resizing
AEROBLADE [60] | AE rec. error | - / - | - / - | | global distance
", + "image_path": "b4ce44c507eaed16458689a225ce5cf10053c9720f21f8b78f2c58f1cb6c23ec.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 312, + 482, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 312, + 482, + 387 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 482, + 387 + ], + "type": "text", + "content": "real images, where this latter class comes from ImageNet [15] (Fig.6, top) or LAION [66] (Fig.6, bottom). We can observe that the best and more uniform results across the four decision statistics are obtained using " + }, + { + "bbox": [ + 130, + 312, + 482, + 387 + ], + "type": "inline_equation", + "content": "\\mathrm{COCO}^*" + }, + { + "bbox": [ + 130, + 312, + 482, + 387 + ], + "type": "text", + "content": ", while training on Open Images guarantees good performance if the real class is LAION, but bad performance if it is ImageNet. Additional results are included in the supplementary material." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 403, + 254, + 416 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 403, + 254, + 416 + ], + "spans": [ + { + "bbox": [ + 131, + 403, + 254, + 416 + ], + "type": "text", + "content": "4.3 SoTA Comparison" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 426, + 482, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 426, + 482, + 605 + ], + "spans": [ + { + "bbox": [ + 130, + 426, + 482, + 605 + ], + "type": "text", + "content": "In our analysis we include only methods with code and/or pre-trained models publicly available on-line. Eventually, we included 7 CNN-based methods [7,10, 43, 71, 72, 74, 75], 2 CLIP-based methods [51, 67] and a training-free method [60]. A brief summary of these techniques is provided in Tab.1, while a more detailed description is given in the supplementary material. For a fair comparison we avoid testing on ProGAN [36] and Latent Diffusion [61], because a good number of these supervised methods were trained on datasets that include images from these generators. Even so, we have a total of 30 datasets for testing. Results are reported in Tab.2 in terms of AUC, with the best figure for each dataset highlighted in bold. Note that each row is characterized by the name of the generator (e.g., GauGAN) and by a single letter that recalls the set of real images used to train it: S for LSUN, F for FFHQ, I for ImageNet, C for COCO, L for LAION, R for RAISE. This detail allows us to study how the performance depends on the real dataset (but with synthetic images from the same generator and with semantic content aligned with real images)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": "First of all, we observe that for most reference methods the average AUC does not exceed " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": ". Notable exceptions are the CLIP-based Ojha2023 (88.4%) and the CNN-based Corvi2023 (89.4%). Interestingly, some methods show very different performance when the real class changes. This may be due to JPEG bias as already suggested in [28, 60]. 
A deeper analysis on this point is presented" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 149, + 146, + 465, + 437 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "type": "text", + "content": "Table 2: AUC for reference and proposed methods. Best score in bold with a " + }, + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "type": "inline_equation", + "content": "0.5\\%" + }, + { + "bbox": [ + 132, + 114, + 480, + 137 + ], + "type": "text", + "content": " margin. S = LSUN, F = FFHQ, I = ImageNet, C = COCO, L = LAION, R = RAISE." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 149, + 146, + 465, + 437 + ], + "lines": [ + { + "bbox": [ + 149, + 146, + 465, + 437 + ], + "spans": [ + { + "bbox": [ + 149, + 146, + 465, + 437 + ], + "type": "table", + "html": "
Generator | Real data | Wang2020 | PatchFor. | Liu2022 | Corvi2023 | LGrad | DIRE | DE-FAKE | Ojha2023 | NPR | AEROBLADE | Ours \( D^{(0)} \) | Ours \( |D^{(0)}| \) | Ours \( \Delta^{01} \) | Ours \( |\Delta^{01}| \)
C98.980.899.783.881.699.943.8100.89.155.199.899.899.999.999.799.799.799.799.799.799.799.799.7
GauGANC92.785.594.783.477.299.859.059.099.686.851.992.388.695.992.388.695.992.692.692.699.799.799.7
BigGANI94.7100.99.995.973.940.445.999.781.584.0100.100.100.100.100.100.100.100.100.100.100.100.100.
StarGANF98.183.899.789.199.858.339.196.7100.30.096.696.196.796.796.796.796.796.796.596.596.596.5
StyleGAN2S94.985.199.958.482.755.547.691.071.360.143.187.741.188.787.741.188.787.787.787.787.787.7
F
GigaGANI73.761.097.350.576.499.964.394.682.447.572.468.172.468.172.468.172.468.168.168.168.168.1
C79.584.099.690.976.799.987.997.695.580.696.594.094.096.797.396.797.396.797.396.797.396.7
Diff.GANS89.892.699.596.699.549.844.897.4100.43.999.499.499.499.499.499.499.499.599.599.599.599.5
GALIPC89.798.294.387.756.7100.75.698.690.765.098.496.399.799.799.799.799.799.799.799.799.799.7
DALL-EL66.471.795.098.395.299.855.997.399.524.199.295.898.298.298.298.298.298.298.298.298.298.2
DDPMF31.698.422.8100.9.823.150.577.792.481.776.625.293.879.676.625.293.879.679.679.679.679.6
ADMS67.667.670.680.381.152.037.488.294.153.149.553.569.463.159.563.169.463.169.463.171.071.0
I61.081.994.481.172.799.569.185.378.580.387.890.595.395.395.395.395.395.395.392.192.192.1
GLIDEC64.897.496.397.281.599.992.488.895.498.047.888.588.588.588.588.588.588.588.588.588.588.5
R32.295.056.686.550.642.992.272.863.387.723.289.451.165.165.165.165.165.165.165.165.165.1
L72.674.190.886.990.3100.60.295.399.868.754.584.284.284.284.284.284.284.284.284.284.284.2
DiTI58.683.188.0100.56.299.687.477.878.499.889.484.384.384.384.384.384.384.384.384.384.384.3
Stable D. 1.4C68.286.195.3100.54.799.993.397.976.599.848.474.854.674.854.654.654.654.654.654.671.471.4
R37.961.873.4100.50.037.688.087.743.096.999.499.498.798.797.097.097.097.097.097.097.297.2
Stable D. 2C56.578.694.2100.62.899.397.982.389.399.983.090.384.584.584.584.584.584.584.584.584.584.5
R50.238.734.8100.41.435.580.789.544.097.498.596.895.895.895.895.895.895.895.895.895.895.8
SDXLC83.860.889.3100.89.399.594.080.099.387.999.999.999.999.999.999.999.999.999.999.999.999.9
R54.368.431.1100.57.247.184.485.176.769.7100.100.100.100.100.100.99.199.299.299.299.299.2
Deep.-IFC78.062.772.299.968.898.996.992.991.681.991.782.388.488.488.488.488.488.488.488.479.479.4
DALL-E 2C88.552.498.988.278.699.980.697.190.059.3100.100.100.100.100.100.100.100.100.99.999.9
R64.841.970.469.458.644.770.995.239.532.8100.100.100.100.100.100.100.100.100.100.100.
DALL-E 3C65.047.399.5100.88.499.996.286.497.799.799.799.799.598.398.398.398.398.398.398.2
R10.952.70.260.837.947.692.436.448.748.379.166.778.078.178.178.178.178.178.178.1
MidjourneyR40.257.840.7100.56.351.078.166.277.099.099.799.398.598.598.598.598.598.598.598.5
Adobe FireflyR84.849.411.898.040.657.481.497.532.152.873.641.280.880.4
AVG68.373.377.089.468.274.672.988.480.171.283.386.488.888.890.0
", + "image_path": "9c3960f79f7c3e54fec294eb9aeed9f165318827e6434af26452d9e3ed072df7.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "spans": [ + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": "in the supplementary material. The proposed zero-shot approach goes above " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " with all decision statistics, reaching the top value of " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "90.0\\%" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " when " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "|\\varDelta^{01}|" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " is used. Obviously, this is a very good result, but what makes it especially valuable is the absence of any dependence on the generators' models. This point is further stressed by the fact that the AUC remains extremely stable across all test sets, with a minimum of " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "65.1\\%" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " on GLIDE-R. On the contrary, the best competitor, Corvi2023, has a long score of top results but also some very poor ones. suggesting a certain instability, likely due to the presence/absence of specific artifacts in the test images, and eventually the risk of not adapting to models of new conception. We want also to draw the reader's attention on the already mentioned case of GLIDE and on the fact that the proposed method exhibits wildly different results with different decision statistics. In particular, with " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "|D^{(0)}|" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " the AUC is " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "89.4\\%" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " as opposed to the already mentioned " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "65.1\\%" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "|\\varDelta^{01}|" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": ". This suggests there may be better ways to exploit the basic " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "\\mathrm{NLL}^{(l)}" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "inline_equation", + "content": "H^{(l)}" + }, + { + "bbox": [ + 133, + 461, + 481, + 640 + ], + "type": "text", + "content": ", possibly jointly at all levels, to synthesize a better and more stable decision statistics." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 641, + 480, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 641, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 641, + 480, + 665 + ], + "type": "text", + "content": "Finally, in Fig.7, we report the accuracy as a function of the decision threshold for the best methods. A separate curve is shown for each real image dataset by" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 136, + 114, + 220, + 179 + ], + "blocks": [ + { + "bbox": [ + 136, + 114, + 220, + 179 + ], + "lines": [ + { + "bbox": [ + 136, + 114, + 220, + 179 + ], + "spans": [ + { + "bbox": [ + 136, + 114, + 220, + 179 + ], + "type": "image", + "image_path": "4e21d0031c615837cb8a5a64ab07a2ce4aa27497ada7559bbcb459410c5ad7c3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 203, + 482, + 259 + ], + "lines": [ + { + "bbox": [ + 130, + 203, + 482, + 259 + ], + "spans": [ + { + "bbox": [ + 130, + 203, + 482, + 259 + ], + "type": "text", + "content": "Fig. 7: Balanced accuracy as a function of the detection threshold. For each dataset of real images, we average accuracy over all associated synthetic generators. The dotted vertical line indicates the global optimal threshold and the " + }, + { + "bbox": [ + 130, + 203, + 482, + 259 + ], + "type": "inline_equation", + "content": "\\times" + }, + { + "bbox": [ + 130, + 203, + 482, + 259 + ], + "type": "text", + "content": " symbol the corresponding accuracy. Note that only for the proposed method all peaks are very close, indicating the presence of a single threshold. Charts for other methods are reported in the Suppl." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 223, + 114, + 306, + 179 + ], + "blocks": [ + { + "bbox": [ + 223, + 114, + 306, + 179 + ], + "lines": [ + { + "bbox": [ + 223, + 114, + 306, + 179 + ], + "spans": [ + { + "bbox": [ + 223, + 114, + 306, + 179 + ], + "type": "image", + "image_path": "0e71efdcbd0b70bafef580e3faab45897e7b5dda058041bfe72d36967d5b3a51.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 309, + 114, + 392, + 179 + ], + "blocks": [ + { + "bbox": [ + 309, + 114, + 392, + 179 + ], + "lines": [ + { + "bbox": [ + 309, + 114, + 392, + 179 + ], + "spans": [ + { + "bbox": [ + 309, + 114, + 392, + 179 + ], + "type": "image", + "image_path": "4ee059c975ba3f9c60be3ad22d7d0186c303ed6ae9a73cf9cec89ad584c1662e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 394, + 114, + 479, + 179 + ], + "blocks": [ + { + "bbox": [ + 394, + 114, + 479, + 179 + ], + "lines": [ + { + "bbox": [ + 394, + 114, + 479, + 179 + ], + "spans": [ + { + "bbox": [ + 394, + 114, + 479, + 179 + ], + "type": "image", + "image_path": "c16b149cb21ba2a87c32a0e78158ea1361ad650288d9756676f79ea09e97e8e5.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 186, + 182, + 427, + 194 + ], + "blocks": [ + { + "bbox": [ + 186, + 182, + 427, + 194 + ], + "lines": [ + { + "bbox": [ + 186, + 182, + 427, + 194 + ], + "spans": [ + { + "bbox": [ + 186, + 182, + 427, + 194 + ], + "type": "image", + "image_path": "8b60c82f1664d71b2d47eca0399985d53656884a7ba43874bcc073cab300070c.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 281, + 482, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 281, + 482, + 329 + ], + "spans": [ + { + "bbox": [ + 130, + 281, + 482, + 329 + ], + "type": "text", + "content": "averaging over the associated synthetic generators. Unlike AUC, the accuracy critically depends on the selection of a good threshold and some calibration data may be needed for this purpose. Note that only for the proposed method there is a single good threshold that ensures near-optimal accuracy for all datasets." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 344, + 218, + 356 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 344, + 218, + 356 + ], + "spans": [ + { + "bbox": [ + 131, + 344, + 218, + 356 + ], + "type": "text", + "content": "4.4 Limitations" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 361, + 482, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 361, + 482, + 447 + ], + "spans": [ + { + "bbox": [ + 130, + 361, + 482, + 447 + ], + "type": "text", + "content": "Our work was developed to detect whether an image has been fully generated and not to detect local manipulations. However, it could be easily extended to accomplish this task since we already compute a map of local pixel-wise statistics. Furthermore, our approach relies on a model of the real class learned by the encoder. If real images do not satisfy this model, the approach may not perform correctly. For example, if images are highly compressed or resized (as is the case on the web), statistical analysis may not be reliable." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 131, + 463, + 220, + 475 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 463, + 220, + 475 + ], + "spans": [ + { + "bbox": [ + 131, + 463, + 220, + 475 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 486, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 482, + 666 + ], + "type": "text", + "content": "We introduced a novel zero-shot forensic detector to distinguish AI-generated images from real ones. Unlike most current methods, our approach does not require fake images during training, which ensures generalization to yet unknown generative models. The idea is to exploit an implicit model of real images and classify off-model images as synthetic. To this end, we leverage an appropriate lossless encoder, trained only on real images, that can predict the probability distribution of each pixel given its context. Synthetic images are expected to not respect this distribution, thus revealing their artificial nature. Our experiments show that the proposed detector is consistently competitive with detectors trained in supervised modality, and outperforms them in terms of generalization ability. We believe that our approach is an important stepping stone towards effective forensic tools that can operate without relying on domain- or method-specific training data. Future work will focus on making the method robust to the most common forms of image impairment, so as to make it suitable for in the wild application." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 231, + 101 + ], + "type": "text", + "content": "Cozzolino et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 260 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 260 + ], + "type": "text", + "content": "Acknowledgments. We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Gift. This material is also based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. In addition, this work has received funding by the European Union under the Horizon Europe vera.ai project, Grant Agreement number 101070093." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 280, + 197, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 280, + 197, + 293 + ], + "spans": [ + { + "bbox": [ + 132, + 280, + 197, + 293 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 134, + 308, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 138, + 308, + 480, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 308, + 480, + 330 + ], + "spans": [ + { + "bbox": [ + 138, + 308, + 480, + 330 + ], + "type": "text", + "content": "1. Albright, M., McCloskey, S.: Source Generator Attribution via Inversion. In: CVPR Workshop. pp. 96-103 (2019)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 331, + 480, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 331, + 480, + 364 + ], + "spans": [ + { + "bbox": [ + 138, + 331, + 480, + 364 + ], + "type": "text", + "content": "2. Amoroso, R., Morelli, D., Cornia, M., Baraldi, L., Del Bimbo, A., Cucchiara, R.: Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images. ACM Trans. Multimedia Comput. Commun. Appl. (2024)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 365, + 480, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 365, + 480, + 386 + ], + "spans": [ + { + "bbox": [ + 138, + 365, + 480, + 386 + ], + "type": "text", + "content": "3. Bammey, Q.: Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing (2023)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 387, + 480, + 408 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 387, + 480, + 408 + ], + "spans": [ + { + "bbox": [ + 138, + 387, + 480, + 408 + ], + "type": "text", + "content": "4. Boháček, M., Farid, H.: A geometric and photometric exploration of GAN and Diffusion synthesized faces. In: CVPR Workshop. pp. 874--883 (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 409, + 480, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 409, + 480, + 430 + ], + "spans": [ + { + "bbox": [ + 138, + 409, + 480, + 430 + ], + "type": "text", + "content": "5. Brock, A., Donahue, J., Simonyan, K.: Large Scale GAN Training for High Fidelity Natural Image Synthesis. In: ICLR (2018)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 432, + 480, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 432, + 480, + 453 + ], + "spans": [ + { + "bbox": [ + 138, + 432, + 480, + 453 + ], + "type": "text", + "content": "6. Cao, S., Wu, C.Y., Krahenbuhl, P.: Lossless Image Compression through SuperResolution. arXiv preprint arXiv:2004.02872v1 (2020)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 454, + 480, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 454, + 480, + 475 + ], + "spans": [ + { + "bbox": [ + 138, + 454, + 480, + 475 + ], + "type": "text", + "content": "7. Chai, L., Bau, D., Lim, S.N., Isola, P.: What Makes Fake Images Detectable? Understanding Properties that Generalize. In: ECCV. pp. 
103-120 (2020)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 476, + 480, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 476, + 480, + 509 + ], + "spans": [ + { + "bbox": [ + 138, + 476, + 480, + 509 + ], + "type": "text", + "content": "8. Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., Choo, J.: StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In: CVPR. pp. 8789-8797 (2018)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 510, + 480, + 542 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 510, + 480, + 542 + ], + "spans": [ + { + "bbox": [ + 138, + 510, + 480, + 542 + ], + "type": "text", + "content": "9. Corvi, R., Cozzolino, D., Poggi, G., Nagano, K., Verdoliva, L.: Intriguing properties of synthetic images: from generative adversarial networks to diffusion models. In: CVPR Workshop. pp. 973-982 (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 544, + 480, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 544, + 480, + 576 + ], + "spans": [ + { + "bbox": [ + 138, + 544, + 480, + 576 + ], + "type": "text", + "content": "0. Corvi, R., Cozzolino, D., Zingarini, G., Poggi, G., Nagano, K., Verdoliva, L.: On the detection of synthetic images generated by diffusion models. In: ICASSP. pp. 1-5 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "spans": [ + { + "bbox": [ + 138, + 577, + 480, + 609 + ], + "type": "text", + "content": "1. Cozzolino, D., Poggi, G., Corvi, R., Nießner, M., Verdoliva, L.: Raising the Bar of AI-generated Image Detection with CLIP. In: CVPR Workshop. pp. 4356-4366 (2024)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 134, + 610, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 610, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 134, + 610, + 480, + 643 + ], + "type": "text", + "content": "12. Cozzolino, D., Thies, J., Rössler, A., Riess, C., Nießner, M., Verdoliva, L.: Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510 (2018)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 134, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 134, + 643, + 480, + 665 + ], + "type": "text", + "content": "13. Dang-Nguyen, D.T., Pasquini, C., Conotter, V., Boato, G.: RAISE: A Raw Images Dataset for Digital Image Forensics. In: ACM MMSys. p. 
219-224 (2015)" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 666 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 480, + 149 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 480, + 149 + ], + "type": "text", + "content": "14. Dayma, B., Patil, S., Cuenca, P., Saifullah, K., Abraham, T., Lé Khac, P., Melas, L., Ghosh, R.: DALL-E Mini (2021). https://doi.org/10.5281/zenodo.5146400, https://github.com/borisdayma/dalle-mini" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 149, + 480, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 149, + 480, + 171 + ], + "spans": [ + { + "bbox": [ + 132, + 149, + 480, + 171 + ], + "type": "text", + "content": "15. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 171, + 480, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 171, + 480, + 191 + ], + "spans": [ + { + "bbox": [ + 132, + 171, + 480, + 191 + ], + "type": "text", + "content": "16. Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. NeurIPS 34, 8780-8794 (2021)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 192, + 480, + 213 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 192, + 480, + 213 + ], + "spans": [ + { + "bbox": [ + 132, + 192, + 480, + 213 + ], + "type": "text", + "content": "17. Du, M., Pentyala, S., Li, Y., Hu, X.: Towards Generalizable Deepfake Detection with Locality-Aware AutoEncoder. In: CIKM. pp. 325--334 (2020)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 213, + 480, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 213, + 480, + 246 + ], + "spans": [ + { + "bbox": [ + 132, + 213, + 480, + 246 + ], + "type": "text", + "content": "18. Durall, R., Keuper, M., Keuper, J.: Watch Your Up-Convolution: CNN Based Generative Deep Neural Networks Are Failing to Reproduce Spectral Distributions. In: CVPR. pp. 7890-7899 (2020)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 246, + 480, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 246, + 480, + 267 + ], + "spans": [ + { + "bbox": [ + 132, + 246, + 480, + 267 + ], + "type": "text", + "content": "19. Epstein, D.C., Jain, I., Wang, O., Zhang, R.: Online Detection of AI-Generated Images. In: ICCV Workshop. pp. 
382-392 (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 267, + 480, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 267, + 480, + 310 + ], + "spans": [ + { + "bbox": [ + 132, + 267, + 480, + 310 + ], + "type": "text", + "content": "20. Epstein, Z., Hertzmann, A., Herman, L., Mahari, R., Frank, M.R., Groh, M., Schroeder, H., Akten, A.S.M., Fjeld, J., Farid, H., Leach, N., Pentland, A.S., Russakovsky, O.: Art and the science of generative AI: A deeper dive. arXiv preprint arXiv:2306.04141 (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 310, + 480, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 310, + 480, + 331 + ], + "spans": [ + { + "bbox": [ + 132, + 310, + 480, + 331 + ], + "type": "text", + "content": "21. Farid, H.: Lighting (in) consistency of paint by text. arXiv preprint arXiv:2207.13744 (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 332, + 480, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 332, + 480, + 353 + ], + "spans": [ + { + "bbox": [ + 132, + 332, + 480, + 353 + ], + "type": "text", + "content": "22. Farid, H.: Perspective (in) consistency of paint by text. arXiv preprint arXiv:2206.14617 (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 353, + 480, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 353, + 480, + 374 + ], + "spans": [ + { + "bbox": [ + 132, + 353, + 480, + 374 + ], + "type": "text", + "content": "23. Firefly, A.: https://www.adobe.com/sensei/generative-ai/firefly.html (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 374, + 480, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 374, + 480, + 407 + ], + "spans": [ + { + "bbox": [ + 132, + 374, + 480, + 407 + ], + "type": "text", + "content": "24. Frank, J., Eisenhofer, T., Schonherr, L., Fischer, A., Kolossa, D., Holz, T.: Leveraging Frequency Analysis for Deep Fake Image Recognition. In: ICML. pp. 3247-3258 (2020)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "spans": [ + { + "bbox": [ + 132, + 407, + 480, + 440 + ], + "type": "text", + "content": "25. Gehrmann, S., Strobelt, H., Rush, A.M.: GLTR: Statistical detection and visualization of generated text. In: 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. pp. 111-116 (2019)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "spans": [ + { + "bbox": [ + 132, + 440, + 480, + 472 + ], + "type": "text", + "content": "26. Ghosal, S.S., Chakraborty, S., Geiping, J., Huang, F., Manocha, D., Bedi, A.S.: Towards possibilities & impossibilities of AI-generated text detection: A survey. arXiv preprint arXiv:2310.15264 (2023)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 472, + 480, + 504 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 472, + 480, + 504 + ], + "spans": [ + { + "bbox": [ + 132, + 472, + 480, + 504 + ], + "type": "text", + "content": "27. Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdolina, L.: Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In: ICME. pp. 
1-6 (2021)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 504, + 480, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 504, + 480, + 536 + ], + "spans": [ + { + "bbox": [ + 132, + 504, + 480, + 536 + ], + "type": "text", + "content": "28. Grommelt, P., Weiss, L., Pfreundt, F.J., Keuper, J.: Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets. arXiv preprint arXiv:2403.17608 (2024)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 536, + 480, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 536, + 480, + 568 + ], + "spans": [ + { + "bbox": [ + 132, + 536, + 480, + 568 + ], + "type": "text", + "content": "29. Hans, A., Schwarzschild, A., Cherepanova, V., Kazemi, H., Saha, A., Goldblum, M., Geiping, J., Goldstein, T.: Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. In: ICML (2024)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 569, + 480, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 569, + 480, + 601 + ], + "spans": [ + { + "bbox": [ + 132, + 569, + 480, + 601 + ], + "type": "text", + "content": "30. He, Z., Chen, P.Y., Ho, T.Y.: RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. arXiv preprint arXiv:2405.20112 (2024)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 601, + 480, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 601, + 480, + 622 + ], + "spans": [ + { + "bbox": [ + 132, + 601, + 480, + 622 + ], + "type": "text", + "content": "31. Heikkilä, M.: This artist is dominating AI-generated art. and he's not happy about it. MIT Technology Review (2022)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 622, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 622, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 132, + 622, + 480, + 643 + ], + "type": "text", + "content": "32. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. NeurIPS 33, 6840-6851 (2020)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 132, + 643, + 480, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 643, + 480, + 666 + ], + "spans": [ + { + "bbox": [ + 132, + 643, + 480, + 666 + ], + "type": "text", + "content": "33. Jeon, H., Bang, Y.O., Kim, J., Woo, S.: T-GD: Transferable GAN-generated Images Detection Framework. In: ICML. vol. 119, pp. 4746-4761 (2020)" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "text", + "content": "Cozzolino et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "text", + "content": "34. Jeong, Y., Kim, D., Ro, Y., Kim, P., Choi, J.: Fingerprint Net: Synthesized Fingerprints for Generated Image Detection. In: ECCV. pp. 76-94 (2022)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 138, + 480, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 138, + 480, + 160 + ], + "spans": [ + { + "bbox": [ + 130, + 138, + 480, + 160 + ], + "type": "text", + "content": "35. Kang, M., Zhu, J.Y., Zhang, R., Park, J., Shechtman, E., Paris, S., Park, T.: Scaling up gans for text-to-image synthesis. In: CVPR. pp. 10124-10134 (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 160, + 480, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 160, + 480, + 182 + ], + "spans": [ + { + "bbox": [ + 130, + 160, + 480, + 182 + ], + "type": "text", + "content": "36. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. In: ICLR (2018)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 182, + 480, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 182, + 480, + 205 + ], + "spans": [ + { + "bbox": [ + 130, + 182, + 480, + 205 + ], + "type": "text", + "content": "37. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR. pp. 4401-4410 (2019)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 205, + 480, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 205, + 480, + 226 + ], + "spans": [ + { + "bbox": [ + 130, + 205, + 480, + 226 + ], + "type": "text", + "content": "38. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: CVPR. pp. 8110-8119 (2020)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 226, + 480, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 226, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 130, + 226, + 480, + 248 + ], + "type": "text", + "content": "39. Konstantinov, M., Shonenkov, A., Bakshandaeva, D., Schuhmann, C., Ivanova, K., Klokova, N.: https://www deepfloyd.ai/deepfloyd-if (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 248, + 480, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 248, + 480, + 292 + ], + "spans": [ + { + "bbox": [ + 130, + 248, + 480, + 292 + ], + "type": "text", + "content": "40. Krasin, I., Duerig, T., Alldrin, N., Ferrari, V., Abu-El-Haija, S., Kuznetsova, A., Rom, H., Uijlings, J., Popov, S., Veit, A., et al.: OpenImages: A public dataset for large-scale multi-label and multi-class image classification. 
Dataset available from https://github.com/openimages (2017)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 293, + 480, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 293, + 480, + 325 + ], + "spans": [ + { + "bbox": [ + 130, + 293, + 480, + 325 + ], + "type": "text", + "content": "41. Lin, L., Gupta, N., Zhang, Y., Ren, H., Liu, C.H., Ding, F., Wang, X., Li, X., Verdoliva, L., Hu, S.: Detecting multimedia generated by large ai models: A survey. arXiv preprint arXiv:2204.06125 (2024)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 325, + 480, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 325, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 130, + 325, + 480, + 357 + ], + "type": "text", + "content": "42. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755 (2014)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 357, + 480, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 357, + 480, + 379 + ], + "spans": [ + { + "bbox": [ + 130, + 357, + 480, + 379 + ], + "type": "text", + "content": "43. Liu, B., Yang, F., Bi, X., Xiao, B., Li, W., Gao, X.: Detecting generated images by real images. In: ECCV. pp. 95-110 (2022)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 380, + 480, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 380, + 480, + 412 + ], + "spans": [ + { + "bbox": [ + 130, + 380, + 480, + 412 + ], + "type": "text", + "content": "44. Liu, H., Tan, Z., Tan, C., Wei, Y., Wang, J., Zhao, Y.: Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection. In: CVPR. pp. 10770-10780 (2024)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 413, + 480, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 413, + 480, + 434 + ], + "spans": [ + { + "bbox": [ + 130, + 413, + 480, + 434 + ], + "type": "text", + "content": "45. Mahajan, S., Roth, S.: PixelPyramids: Exact Inference Models from Lossless Image Pyramids. In: ICCV. pp. 6639-6648 (2021)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 130, + 434, + 480, + 456 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 434, + 480, + 456 + ], + "spans": [ + { + "bbox": [ + 130, + 434, + 480, + 456 + ], + "type": "text", + "content": "46. Mandelli, S., Bonettini, N., Bestagini, P., Tubaro, S.: Detecting GAN-generated Images by Orthogonal Training of Multiple CNNs. In: ICIP. pp. 3091-3095 (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 456, + 480, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 456, + 480, + 479 + ], + "spans": [ + { + "bbox": [ + 130, + 456, + 480, + 479 + ], + "type": "text", + "content": "47. Marra, F., Saltori, C., Boato, G., Verdoliva, L.: Incremental learning for the detection and classification of GAN-generated images. In: WIFS. pp. 1-6 (2019)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 130, + 479, + 376, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 479, + 376, + 490 + ], + "spans": [ + { + "bbox": [ + 130, + 479, + 376, + 490 + ], + "type": "text", + "content": "48. 
Midjourney: https://www.midjourney.com/home (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 490, + 480, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 490, + 480, + 522 + ], + "spans": [ + { + "bbox": [ + 130, + 490, + 480, + 522 + ], + "type": "text", + "content": "49. Mitchell, E., Lee, Y., Khazatsky, A., Manning, C.D., Finn, C.: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. In: ICML. pp. 24950-24962 (2023)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 130, + 522, + 480, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 522, + 480, + 555 + ], + "spans": [ + { + "bbox": [ + 130, + 522, + 480, + 555 + ], + "type": "text", + "content": "50. Nichol, A.Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mcgrew, B., Sutskever, I., Chen, M.: GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diff. Models. In: ICML. pp. 16784-16804 (2022)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 130, + 555, + 480, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 555, + 480, + 577 + ], + "spans": [ + { + "bbox": [ + 130, + 555, + 480, + 577 + ], + "type": "text", + "content": "51. Ojha, U., Li, Y., Lee, Y.J.: Towards universal fake image detectors that generalize across generative models. In: CVPR. pp. 24480-24489 (2023)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 130, + 577, + 345, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 577, + 345, + 589 + ], + "spans": [ + { + "bbox": [ + 130, + 577, + 345, + 589 + ], + "type": "text", + "content": "52. OpenAI: https://openai.com/dall-e-3 (2023)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 130, + 589, + 480, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 589, + 480, + 610 + ], + "spans": [ + { + "bbox": [ + 130, + 589, + 480, + 610 + ], + "type": "text", + "content": "53. Park, T., Liu, M.Y., Wang, T.C., Zhu, J.Y.: Semantic image synthesis with spatially-adaptive normalization. In: CVPR. pp. 2337-2346 (2019)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 130, + 610, + 480, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 610, + 480, + 632 + ], + "spans": [ + { + "bbox": [ + 130, + 610, + 480, + 632 + ], + "type": "text", + "content": "54. Peebles, W., Xie, S.: Scalable diffusion models with transformers. In: ICCV. pp. 4195-4205 (2023)" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 130, + 632, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 632, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 632, + 480, + 665 + ], + "type": "text", + "content": "55. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: SDXL: Improving latent diffusion models for high-resolution image synthesis. 
In: ICLR (2024)" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 666 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 481, + 149 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 481, + 149 + ], + "type": "text", + "content": "56. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML. pp. 8748-8763 (2021)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 481, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 481, + 182 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 481, + 182 + ], + "type": "text", + "content": "57. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 182, + 481, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 182, + 481, + 215 + ], + "spans": [ + { + "bbox": [ + 132, + 182, + 481, + 215 + ], + "type": "text", + "content": "58. Reed, S.E., van den Oord, A., Kalchbrenner, N., Colmenarejo, S.G., Wang, Z., Chen, Y., Belov, D., de Freitas, N.: Parallel multiscale autoregressive density estimation. In: ICML. pp. 2912-2921 (2017)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 216, + 481, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 216, + 481, + 237 + ], + "spans": [ + { + "bbox": [ + 132, + 216, + 481, + 237 + ], + "type": "text", + "content": "59. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the detection of diffusion model deepfakes. In: VISAPP. pp. 446-457 (2024)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 237, + 481, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 237, + 481, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 237, + 481, + 270 + ], + "type": "text", + "content": "60. Ricker, J., Lukovnikov, D., Fischer, A.: AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error. In: CVPR. pp. 9130-9140 (2024)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 270, + 481, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 270, + 481, + 293 + ], + "spans": [ + { + "bbox": [ + 132, + 270, + 481, + 293 + ], + "type": "text", + "content": "61. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: CVPR. pp. 10684-10695 (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "spans": [ + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "type": "text", + "content": "62. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/CompVis/stable-diffusion (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "spans": [ + { + "bbox": [ + 132, + 315, + 481, + 335 + ], + "type": "text", + "content": "63. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: https://github.com/Stability-AI/stablediffusion (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 132, + 336, + 481, + 369 + ], + "type": "text", + "content": "64. Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M.: Faceforensics++: Learning to detect manipulated facial images. In: ICCV. pp. 1-11 (2019)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "spans": [ + { + "bbox": [ + 132, + 369, + 481, + 402 + ], + "type": "text", + "content": "65. Sarkar, A., Mai, H., Mahapatra, A., Lazebnik, S., Forsyth, D.A., Bhattad, A.: Shadows Don't Lie and Lines Can't Bend! Generative Models don't know Projective Geometry... for now. In: CVPR. pp. 28140-28149 (2024)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 402, + 481, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 402, + 481, + 435 + ], + "spans": [ + { + "bbox": [ + 132, + 402, + 481, + 435 + ], + "type": "text", + "content": "66. Schuhmann, C., Kaczmarczyk, R., Komatsuzaki, A., Katta, A., Vencu, R., Beaumont, R., Jitsev, J., Coombes, T., Mullis, C.: LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. In: NeurIPS (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 435, + 481, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 435, + 481, + 468 + ], + "spans": [ + { + "bbox": [ + 132, + 435, + 481, + 468 + ], + "type": "text", + "content": "67. Sha, Z., Li, Z., Yu, N., Zhang, Y.: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. In: ACM SIGSAC. pp. 3418-3432 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 468, + 481, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 468, + 481, + 490 + ], + "spans": [ + { + "bbox": [ + 132, + 468, + 481, + 490 + ], + "type": "text", + "content": "68. Sinitsa, S., Fried, O.: Deep Image Fingerprint: Towards Low Budget Synthetic Image Detection and Model Lineage Analysis. In: WACV. pp. 4067-4076 (2024)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 490, + 481, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 490, + 481, + 522 + ], + "spans": [ + { + "bbox": [ + 132, + 490, + 481, + 522 + ], + "type": "text", + "content": "69. 
Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J.W., Kreps, S., et al.: Release Strategies and the Social Impacts of Language Models. arXiv preprint arXiv:1908.09203 (2019)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 522, + 481, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 522, + 481, + 555 + ], + "spans": [ + { + "bbox": [ + 132, + 522, + 481, + 555 + ], + "type": "text", + "content": "70. Su, J., Zhuo, T.Y., Wang, D., Nakov, P.: DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text. In: Conference on Empirical Methods in Natural Language Processing (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 555, + 481, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 555, + 481, + 588 + ], + "spans": [ + { + "bbox": [ + 132, + 555, + 481, + 588 + ], + "type": "text", + "content": "71. Tan, C., Zhao, Y., Wei, S., Gu, G., Liu, P., Wei, Y.: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection. In: CVPR. pp. 28130-28139 (2024)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 588, + 481, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 588, + 481, + 621 + ], + "spans": [ + { + "bbox": [ + 132, + 588, + 481, + 621 + ], + "type": "text", + "content": "72. Tan, C., Zhao, Y., Wei, S., Gu, G., Wei, Y.: Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection. In: CVPR. pp. 12105-12114 (2023)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 621, + 481, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 621, + 481, + 643 + ], + "spans": [ + { + "bbox": [ + 132, + 621, + 481, + 643 + ], + "type": "text", + "content": "73. Tao, M., Bao, B.K., Tang, H., Xu, C.: Galip: Generative adversarial clips for text-to-image synthesis. In: CVPR. pp. 14214-14223 (2023)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "type": "text", + "content": "74. Wang, S.Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: CNN-generated images are surprisingly easy to spot... for now. In: CVPR. pp. 8692-8701 (2020)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 230, + 101 + ], + "type": "text", + "content": "Cozzolino et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 481, + 248 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "text", + "content": "75. Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. ICCV pp. 22445-22455 (2023)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 138, + 481, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 138, + 481, + 160 + ], + "spans": [ + { + "bbox": [ + 130, + 138, + 481, + 160 + ], + "type": "text", + "content": "76. Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. In: ICLR (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 160, + 480, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 160, + 480, + 194 + ], + "spans": [ + { + "bbox": [ + 132, + 160, + 480, + 194 + ], + "type": "text", + "content": "77. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 194, + 480, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 194, + 480, + 216 + ], + "spans": [ + { + "bbox": [ + 132, + 194, + 480, + 216 + ], + "type": "text", + "content": "78. Zhang, X., Karaman, S., Chang, S.F.: Detecting and Simulating Artifacts in GAN Fake Images. In: WIFS. pp. 1-6 (2019)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 216, + 480, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 216, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 216, + 480, + 248 + ], + "type": "text", + "content": "79. Zhong, N., Xu, Y., Qian, Z., Zhang, X.: Rich and Poor Texture Contrast: A Simple yet Effective Approach for AI-generated Image Detection. 
arXiv preprint arXiv:2311.12397v1 (2023)" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 264, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Detection of AI-Generated Images" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_content_list.json b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c79681cc1587398243899e614c926c223c7e87e3 --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_content_list.json @@ -0,0 +1,1759 @@ +[ + { + "type": "text", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "text_level": 1, + "bbox": [ + 295, + 141, + 707, + 186 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xinle Cheng $^{1}$ , Congyue Deng $^{2}$ , Adam W. Harley $^{2}$ , Yixin Zhu $^{1,3}$ , Leonidas Guibas $^{2}$", + "bbox": [ + 315, + 213, + 686, + 243 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "congyue@stanford.edu, yixin.zhu@pku.edu.cn, guibas@stanford.edu", + "bbox": [ + 276, + 250, + 725, + 263 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Institute for AI, Peking University, China", + "bbox": [ + 354, + 272, + 645, + 287 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ Department of Computer Science, Stanford University, USA", + "bbox": [ + 295, + 287, + 705, + 301 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{3}$ PKU-WUHAN Institute for Artificial Intelligence, China", + "bbox": [ + 305, + 301, + 694, + 315 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/336204f585cdae64e56576c1f87b995ddb44168ce5fb70f9da29caea739d186f.jpg", + "image_caption": [ + "Fig. 1: Overview. Left: Given two sets of features, $E^{M}, E^{N}$ , and $F^{M}, F^{N}$ , we compute the Laplacian eigenfunction basis with $E^{M}, E^{N}$ , and apply regularizations to the functional map optimization using $F^{M}, F^{N}$ . This method optimizes a mapping in the spectral domain derived from one feature set to achieve a consensus with the other set. Right: With a better understanding of the global image structure, our method produces smoother and more accurate correspondences in a zero-shot manner." + ], + "image_footnote": [], + "bbox": [ + 223, + 335, + 784, + 438 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Correspondences emerge from large-scale vision models trained for generative and discriminative tasks. This has been revealed and benchmarked by computing correspondence maps between pairs of images, using nearest neighbors on the feature grids. 
Existing work has attempted to improve the quality of these correspondence maps by carefully mixing features from different sources, such as by combining the features of different layers or networks. We point out that a better correspondence strategy is available, which directly imposes structure on the correspondence field: the functional map. Wielding this simple mathematical tool, we lift the correspondence problem from the pixel space to the function space and directly optimize for mappings that are globally coherent. We demonstrate that our technique yields correspondences that are not only smoother but also more accurate, with the possibility of better reflecting the knowledge embedded in the large-scale vision models that we are studying. Our approach sets a new state-of-the-art on various dense correspondence tasks. We also demonstrate our effectiveness in keypoint correspondence and affordance map transfer.", + "bbox": [ + 259, + 554, + 743, + 790 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Functional map $\\cdot$ Zero shot image matching $\\cdot$ Dense correspondence $\\cdot$ Emergent feature property", + "bbox": [ + 259, + 805, + 743, + 833 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 143, + 374, + 160 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Identifying image correspondence is a crucial task in mid-level computer vision. Recent advancements in large-scale vision models, trained for either generative [36] or discriminative [6,29] tasks, possess emerged capabilities for dense correspondences [1,13,43,55]. This learning is primarily facilitated by computing nearest neighbor matches between image patches with their feature similarities. Notably, the correspondences induced by these models can achieve comparable or even better performances compared to the methods explicitly designed for this purpose. However, a notable limitation arises: these models often struggle to retain the global structure of the correspondences. This can be attributed to the distortions and discontinuities in the nearest-neighbor search process.", + "bbox": [ + 212, + 181, + 787, + 332 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "While contemporary methods [55] have attempted to mitigate this problem by integrating features from different layers and networks, this approach only indirectly confronts the fundamental issue—the lack of structure in the correspondence maps. Fundamentally, point-wise correspondences are inherently susceptible to noise. Therefore, imposing a global structure on the correspondence maps is crucial for attaining high-quality correspondences without supervision", + "bbox": [ + 212, + 335, + 787, + 428 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we leverage functional maps [30] to tackle the above challenge. Originating from computer graphics, functional maps present a robust alternative to point-to-point correspondences [4,17,26]. They represent dense correspondences as linear mappings between function spaces, usually defined on 3D shapes. The key aspect of functional maps is their ability to capture deformations that align one manifold with another. Owing to their low-dimensional yet expressive nature, functional maps effectively incorporate global structures into the matching process. 
This approach provides a compelling solution to the challenges inherent in traditional point-wise correspondence methods.", + "bbox": [ + 212, + 429, + 787, + 566 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Specifically, we improve zero-shot feature-based correspondence methods by transitioning from the pixel space to the function space, thereby enhancing the method's coherence and effectiveness. Traditional functional maps on manifolds rely on two geometric inputs: the Laplacian operator, which is crucial for computing the eigenfunction basis, and a local geometric descriptor, for the application of regularization losses. We adapt these components to the realm of images by employing visual features extracted from two distinct large vision models. Our approach diverges from traditional methods, which typically identify corresponding pixels between images through nearest neighbor search. Instead, we concentrate on optimizing a linear function map established on the eigenfunction basis defined by the first feature map, with the second feature map serving as a geometric regularizer. This process, notably unsupervised, marks a significant difference from conventional methods. Further augmenting our method's robustness, especially against occlusions, is the incorporation of a transformer module for tackling partial shape matching, as detailed in partial functional maps et al. [2]. Such integration of functional map concepts with feature-based methods in image analysis represents a cohesive and logical advancement in tackling the challenges of correspondence tasks.", + "bbox": [ + 212, + 568, + 787, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We evaluate our framework on dense correspondence across various base networks, demonstrating consistent enhancements in matching accuracy and other functional properties like smoothness compared to the traditional nearest neighbor search. We highlight the qualitative results of our approach on the challenging cases with significant shape variations, viewpoint changes, and occlusions. We further demonstrate our effectiveness on keypoint correspondences and object affordance map transfer, showcasing its versatility in diverse scenarios.", + "bbox": [ + 212, + 146, + 782, + 251 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In summary, our primary contribution is a novel zero-shot framework designed to derive correspondence maps from pre-trained features. Central to our approach is the concept of optimizing a functional map that establishes a relationship between the entire image contents, moving away from the conventional method of direct pixel-to-pixel correspondence searches. Our experimental results, evaluated on various standard datasets, demonstrate that our method produces correspondences that are not only smoother and more accurate but also exhibit greater global coherence compared to previous efforts. We believe that our techniques effectively uncover the underlying correspondence capabilities of the large-scale backbone networks. 
We hope that our work will serve as an inspiration for future research in general-purpose object correspondence.", + "bbox": [ + 212, + 252, + 787, + 417 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 441, + 387, + 455 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Emergent correspondence from vision models Deep image networks have demonstrated remarkable robustness to geometric transformations, such as rotation, scaling, and perspective changes, leading to the emergence of dense correspondences [9, 28, 32, 39, 50, 54]. These transformations, predominantly rigid in nature, have been a focal point in previous studies. The research by Amir et al. [1] revealed that features extracted from DINOv1 [6] not only act as effective dense visual descriptors but also naturally induce semantic correspondences without direct supervision. This capability is further amplified in its successor, DINOv2 [29]. Beyond discriminative models, recent explorations have shown that generative models, such as diffusion models, also unveil emergent dense correspondences within their latent features [13, 43, 55]. Intriguingly, Zhang et al. [55] discovered that combining features from DINOv2 [29] with those from Stable Diffusion [36] significantly enhances correspondence quality.", + "bbox": [ + 212, + 462, + 787, + 657 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our study highlights a crucial gap: existing methods lack structural awareness when computing correspondences by nearest-neighbor queries of per-pixel features. Here, we propose representing the correspondence map within a functional space, offering a novel approach to this challenge.", + "bbox": [ + 212, + 659, + 787, + 720 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Semantic correspondence Semantic correspondence [22] seeks to establish pixelwise matches across objects differing in poses, appearances, deformations, or even categories. Traditional approaches generally involve three stages [49]: feature extraction, cost volume construction, and displacement field [45-48] or parameterized transformation regression [15, 16, 33, 34, 40]. However, their reliance on smooth displacement fields or locally affine transformations hinders their ability to model complex object deformations or shape variations effectively.", + "bbox": [ + 212, + 734, + 787, + 839 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 302, + 114, + 730, + 128 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Recent developments, inspired by the classical congealing method [18], focus on aligning multiple objects within the same class using learning techniques like DINOv1 features [10, 27] or GAN-synthesized data [31]. Despite their strong assumptions about data rigidity, these studies suggest that leveraging features and information from diverse tasks can enhance the quality of dense image correspondences. In our work, we further demonstrate that a structure-aware fusion of features learned from multiple tasks can significantly improve the quality of correspondence maps.", + "bbox": [ + 212, + 146, + 787, + 268 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Functional maps Initially introduced by Ovsjanikov et al. [30] and further expanded by Aubry et al. 
[3], functional maps offer a method to represent shape correspondences as linear transformations between spectral embeddings. This is achieved using compact matrices based on eigenfunction basis. Enhancements in accuracy, efficiency, and robustness have been realized in subsequent studies [4, 14, 17, 26]. Moving away from traditional methods dependent on hand-crafted features [3, 42], recent developments have introduced various learning-based functional map frameworks. These utilize shape features learned via pairwise label supervision [21], geometric priors [11,37], or robust mesh features [5,8,19,41]. While traditionally employed for full-shape correspondence, functional maps have also been adapted to handle partial correspondences [2,35], thus aligning more closely with real-world scenarios.", + "bbox": [ + 212, + 281, + 789, + 462 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "While functional maps are extensively explored for 3D shape representations like meshes and point clouds, their application to 2D images has been limited due to the ambiguous manifold structure of RGB-value representations [51, 52]. Previous attempts at applying these maps to super-pixel image representations and utilizing their eigenfunctions as a basis [51, 52] typically result in significant information loss. This is often due to the coarse nature of pre-segmentation in images and the resultant inconsistency in super-pixel representation. In our work, we address these challenges by using the entire image as input for a large vision model, ensuring a consistent initial representation and stable global structure during transformations by functional maps.", + "bbox": [ + 212, + 463, + 789, + 614 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 215, + 637, + 330, + 652 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Preliminaries", + "text_level": 1, + "bbox": [ + 215, + 669, + 372, + 683 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Functional map Originally introduced in Ovsjanikov et al. [30], the functional map is a method for representing dense correspondences in the function space. This approach is based on the concept of mapping between function spaces defined on manifolds. Specifically, given two manifolds $\\mathcal{M}$ and $\\mathcal{N}$ , we consider the spaces $\\mathcal{F}(\\mathcal{M},\\mathbb{R})$ and $\\mathcal{F}(\\mathcal{N},\\mathbb{R})$ , each comprising all real-valued scalar functions on these manifolds, denoted as $\\varphi^{\\mathcal{M}}:\\mathcal{M}\\to \\mathbb{R}$ and $\\varphi^{\\mathcal{N}}:\\mathcal{N}\\to \\mathbb{R}$ , respectively. We can express a bijective mapping $T:\\mathcal{M}\\rightarrow \\mathcal{N}$ as a linear mapping between these function spaces, as follows:", + "bbox": [ + 212, + 694, + 787, + 815 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nT _ {F}: \\mathcal {F} (\\mathcal {M}, \\mathbb {R}) \\rightarrow \\mathcal {F} (\\mathcal {N}, \\mathbb {R}), \\quad f \\mapsto T _ {F} (f). \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 366, + 825, + 785, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/dadbcc4c64ffed7c21636646208293159305c93b0a59b27453480501dde64093.jpg", + "image_caption": [ + "Fig. 
2: Eigenfunctions of the image Laplacian. We visualize the eigenfunctions of the graph Laplacian operator corresponding to the first 5 smallest eigenvalues $\\lambda_1, \\dots, \\lambda_5$ (low frequency) as well as $\\lambda_{10}, \\lambda_{20}, \\lambda_{50}$ (high frequency)." + ], + "image_footnote": [], + "bbox": [ + 240, + 150, + 754, + 333 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To compute these mappings effectively, we expand the function spaces $\\mathcal{F}(\\mathcal{M},\\mathbb{R})$ and $\\mathcal{F}(\\mathcal{N},\\mathbb{R})$ by introducing sets of basis functions, $\\Phi^{\\mathcal{M}} = \\{\\varphi_i^{\\mathcal{M}}\\}$ and $\\Phi^{\\mathcal{N}} = \\{\\varphi_i^{\\mathcal{N}}\\}$ , for $\\mathcal{M}$ and $\\mathcal{N}$ , respectively. Thus, any real-valued function $f\\in \\mathcal{F}(\\mathcal{M},\\mathbb{R})$ can be represented as a linear combination of these basis functions: $f = \\sum_{i}a_{i}\\varphi_{i}^{\\mathcal{M}}$ . Applying the operator $T_{F}$ to $f$ leads to the equation:", + "bbox": [ + 212, + 393, + 787, + 470 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nT _ {F} (f) = T _ {F} \\left(\\sum_ {i} a _ {i} \\varphi_ {i} ^ {\\mathcal {M}}\\right) = \\sum_ {i} a _ {i} T _ {F} \\left(\\varphi_ {i} ^ {\\mathcal {M}}\\right). \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 354, + 477, + 787, + 516 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Each transformed function $T_{F}(\\varphi_{i}^{\\mathcal{M}}) \\in \\mathcal{F}(\\mathcal{N},\\mathbb{R})$ can be further decomposed into a linear combination of $\\varphi_j^\\mathcal{N}$ . Hence, we have $T_{F}(\\varphi_{i}^{\\mathcal{M}}) = \\sum_{j}c_{ij}\\varphi_{j}^{\\mathcal{N}}$ , leading to:", + "bbox": [ + 212, + 522, + 785, + 555 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nT _ {F} (f) = \\sum_ {i} a _ {i} \\sum_ {j} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}} = \\sum_ {h} \\sum_ {i} a _ {i} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}}. \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 352, + 563, + 787, + 592 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For simplicity, the function $f$ is represented in a vector form with coefficients $\\mathbf{a} = (a_{1}, a_{2}, \\dots)^{t}$ . Consequently, the transformation $T_{F}$ on $\\mathbf{a}$ is given by $T_{F}(\\mathbf{a}) = \\mathbf{C}\\mathbf{a}$ , where $\\mathbf{C}$ is a matrix with elements $c_{ij}$ , representing the $j$ -th coefficient of $T_{F}(\\varphi_{i}^{\\mathcal{M}})$ in the basis $\\{\\varphi_{j}^{\\mathcal{N}}\\}$ .", + "bbox": [ + 212, + 599, + 787, + 660 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To translate the functional map into point-to-point correspondences, we treat each point as a Dirac delta function in the function space. Specifically, this conversion from the functional to the point-wise map is executed via a nearest neighbor search between the rows of $\\mathbf{C}\\Phi^{\\mathcal{M}}$ and $\\Phi^{\\mathcal{N}}$ .", + "bbox": [ + 212, + 661, + 787, + 720 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Deep partial functional map The functional map framework, while adept at modeling perfect correspondence mappings between complete shapes [30], faces challenges when applied to real-world data that often have missing data and noise. 
This has led to the development of partial functional maps, as discussed in [2, 35].", + "bbox": [ + 212, + 733, + 787, + 808 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The primary challenge in adapting functional maps to partial shapes is the disruption of core assumptions, such as manifold completeness and bijective", + "bbox": [ + 212, + 809, + 787, + 839 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 302, + 114, + 732, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "mappings. Attaiki et al. [2] address this challenge by introducing a feature refinement network, denoted as $g_{\mathcal{R}}$ , which enhances the robustness of partial functional maps against shape partiality.", + "bbox": [ + 212, + 146, + 782, + 191 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Consider $M$ and $N$ as discretizations of the partial shapes $\mathcal{M}$ and $\mathcal{N}$ , respectively. We construct a bipartite graph $(\mathcal{V},\mathcal{E})$ , with edges connecting every point $\mathbf{x} \in M$ to every point $\mathbf{y} \in N$ . The refinement module inputs per-point features $F^{M}$ and $F^{N}$ , and updates these features via message passing on the bipartite graph. This process employs an attention mechanism, formulated as", + "bbox": [ + 212, + 191, + 787, + 268 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nm_{\epsilon \rightarrow i} = \sum_{j, (i, j) \in \mathcal{E}} \operatorname{softmax}_{j} \left(q_{i}^{T} k_{j} / \sqrt{d}\right) v_{j}, \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 375, + 276, + 787, + 308 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "and the final updated value of node $i$ is given by", + "bbox": [ + 214, + 315, + 568, + 330 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nx_{0} = x_{0} + x_{\text{pos}}, \quad x_{i+1} = x_{i} + \operatorname{MLP}\left(\left[ x_{i} \,\|\, m_{\epsilon \rightarrow i} \right]\right), \tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 343, + 340, + 785, + 357 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $x_{\mathrm{pos}}$ represents the positional embedding, $[\cdot \| \cdot ]$ denotes concatenation, and MLP is a multilayer perceptron with ReLU activations and instance normalization. The refined features on the shape pair are denoted as $g_{\mathcal{R}}(F^M)$ and $g_{\mathcal{R}}(F^{N})$ .", + "bbox": [ + 212, + 364, + 787, + 411 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To understand this message passing process, consider a region $\Omega$ exclusive to shape $M$ and absent in shape $N$ . Let $F_{\Omega}$ denote a feature assignment function restricted to $\Omega$ . When projecting these features onto the function basis, the functional map equation becomes:", + "bbox": [ + 212, + 411, + 785, + 470 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\mathbf {C} \varphi^ {M} F _ {\Omega} (M) = \varphi^ {N} F _ {\Omega} (N). 
\\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 413, + 479, + 785, + 497 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This equation holds true if and only if $F_{\\Omega}(\\mathbf{x}) = 0$ implies $F_{\\Omega}(\\mathbf{y}) = 0$ for $\\mathbf{x} \\in M, \\mathbf{y} \\in N$ . Hence, effective communication between the regions on $M$ and $N$ is crucial, enabling feature synchronization over overlapping regions while diminishing the influence of features outside these overlaps.", + "bbox": [ + 214, + 505, + 787, + 566 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.2 Feature Consensus with Functional Maps", + "text_level": 1, + "bbox": [ + 214, + 588, + 602, + 604 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "An overview of our framework is depicted in Fig. 1. Given a pair of images $M$ and $N$ , our setup includes two distinct pixel-wise feature extraction networks, yielding two sets of features: $E^{M}, E^{N}$ and $F^{M}, F^{N}$ . For instance, $E^{M}$ and $E^{N}$ might be DINOv2 features, while $F^{M}$ and $F^{N}$ could be Stable Diffusion features.", + "bbox": [ + 212, + 613, + 787, + 674 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The primary objective is to derive a functional map $\\mathbf{C}$ between the two function spaces $\\mathcal{F}(M,\\mathbb{R})$ and $\\mathcal{F}(N,\\mathbb{R})$ . The core of our method involves using $E^{M}$ and $E^{N}$ to calculate the Laplacian eigenfunction basis and apply $F^{M}$ and $F^{N}$ for introducing regularizations in optimizing the functional map. In essence, our method optimizes the functional map derived from one set of features to achieve a \"consensus\" with the other set, providing a more comprehensive and robust mapping between the function spaces of the images.", + "bbox": [ + 214, + 674, + 787, + 780 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Image Laplacian from visual features For an image feature of dimensions $(h, w)$ , where $h$ is the height and $w$ is the width, we view it as a grid graph comprising $h \\times w$ nodes; each node is connected to its four adjacent neighbors. However, a", + "bbox": [ + 214, + 794, + 787, + 840 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "graph constructed naively would lack awareness of the image content, and its Laplacian eigenspaces would correspond to the conventional Fourier frequency space.", + "bbox": [ + 212, + 146, + 784, + 191 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Instead, we assign weights to the graph edges based on the first set of image features $E^{M}$ and $E^{N}$ . 
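(An aside before specifying those edge weights: the attention-based refinement $g_{\mathcal{R}}$ of Eqs. (4) and (5), which reappears on image features in Eq. (8) below, can be sketched as a single cross-attention block. The following is a hedged illustration, not the DPFM implementation [2]: single-head attention, a shared MLP, and LayerNorm in place of instance normalization are simplifying assumptions.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteRefiner(nn.Module):
    # One cross-attention message-passing step between two point sets, in the spirit
    # of Eqs. (4) and (5): every node attends to all nodes of the other set.
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.LayerNorm(dim), nn.Linear(dim, dim))

    def message(self, x_dst, x_src):
        # Eq. (4): softmax-weighted sum of values from the source set for each destination node.
        q, k, v = self.to_q(x_dst), self.to_k(x_src), self.to_v(x_src)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)   # (n_dst, n_src)
        return attn @ v

    def forward(self, x_m, x_n, pos_m, pos_n):
        # Eq. (5): add positional embeddings, then residual MLP updates on [x || m].
        x_m, x_n = x_m + pos_m, x_n + pos_n
        upd_m = self.mlp(torch.cat([x_m, self.message(x_m, x_n)], dim=-1))
        upd_n = self.mlp(torch.cat([x_n, self.message(x_n, x_m)], dim=-1))
        return x_m + upd_m, x_n + upd_n
```

We now return to the content-aware edge weights.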
For two adjacent patches $\\mathbf{x}$ and $\\mathbf{y}$ in image $M$ (a similar definition applies for $N$ ), the weight of the edge between them is given by:", + "bbox": [ + 212, + 191, + 785, + 237 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\| e _ {\\mathbf {x y}} \\| = \\exp \\left(- \\frac {\\| E _ {\\mathbf {x}} ^ {M} - E _ {\\mathbf {y}} ^ {M} \\|}{\\sigma}\\right), \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 395, + 246, + 787, + 282 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\sigma$ denotes the median of all the feature values.", + "bbox": [ + 212, + 290, + 598, + 304 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Next, we compute the graph Laplacian $\\varDelta_M$ and utilize its eigenfunctions as the basis. In alignment with previous research, we adopt a reduced function space spanned by the first 200 eigenfunctions. To compute the Laplacian eigen decompositions, we employ the LOBPCG algorithm, known for its efficiency. Fig. 2 presents examples of these Laplacian eigenfunctions.", + "bbox": [ + 212, + 305, + 785, + 381 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Feature as function regularizer For the second set of features $F^M$ and $F^N$ , we employ them as descriptor functions and impose a constraint on $\\mathbf{C}$ such that $\\mathbf{C}F^M \\approx F^N$ . Given the incompleteness of shape correspondences in image pairs, due for example to occlusion within the object and by other objects, we utilize the attention-based feature refinement network $g_{\\mathcal{R}}$ from deep partial functional maps [2]. This network refines the features, which are then projected onto the function basis:", + "bbox": [ + 212, + 393, + 787, + 497 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {F} ^ {M} = \\varphi^ {M} g _ {\\mathcal {R}} \\left(F ^ {M}\\right), \\quad \\tilde {F} ^ {N} = \\varphi^ {N} g _ {\\mathcal {R}} \\left(F ^ {N}\\right). \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 367, + 498, + 785, + 515 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The descriptor-preserving loss applied to these refined features is formulated as:", + "bbox": [ + 212, + 518, + 781, + 534 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\text {f e a t}} = \\left\\| \\mathbf {C} \\tilde {F} ^ {M} - \\tilde {F} ^ {N} \\right\\| _ {2}. \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 421, + 541, + 785, + 559 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To enhance the regularity of the functional map, our optimization objective incorporates two additional regularization terms. First, we integrate a compactness regularization into the functional map matrix:", + "bbox": [ + 212, + 566, + 785, + 612 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {d i a g}} = \\left(\\left| \\lambda_ {i} ^ {M} - \\lambda_ {j} ^ {N} \\right| c _ {i j}\\right) ^ {2}, \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 411, + 621, + 785, + 646 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\lambda_{i}^{M}$ and $\\lambda_{j}^{N}$ represent the $i$ -th and $j$ -th eigenvalues of the graph Laplacian matrices $\\Delta_{M}$ and $\\Delta_{N}$ , respectively. For images with similar spectral distributions of eigenvalues, minimizing $\\mathcal{L}_{\\mathrm{diag}}$ encourages a near-diagonal structure in $\\mathbf{C}$ . 
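(A concrete sketch of the basis construction described above: the hypothetical SciPy code below builds the 4-connected grid graph with the edge weights of Eq. (7) and extracts the leading Laplacian eigenfunctions with LOBPCG. Function names, the choice of sigma as the median edge distance, and the eigensolver settings are assumptions, not the released implementation.)

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import lobpcg

def image_laplacian_basis(feat, k=64, seed=0):
    # feat: (h, w, d) feature map from the first network (e.g. E^M).
    # The paper uses the first 200 eigenfunctions; k=64 keeps the sketch light.
    h, w, _ = feat.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = feat.reshape(h * w, -1)

    rows, cols, dists = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                       # right and down neighbours (4-connectivity)
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        rows.append(a); cols.append(b)
        dists.append(np.linalg.norm(flat[a] - flat[b], axis=1))
    rows, cols, dists = map(np.concatenate, (rows, cols, dists))

    sigma = np.median(dists) + 1e-8                       # assumption: median of the edge distances
    weights = np.exp(-dists / sigma)                      # Eq. (7)
    adj = sp.coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
    lap = laplacian(adj + adj.T).tocsr()                  # symmetric graph Laplacian

    rng = np.random.default_rng(seed)
    vals, vecs = lobpcg(lap, rng.standard_normal((h * w, k)), largest=False, maxiter=200)
    order = np.argsort(vals)
    return vals[order], vecs[:, order]                    # eigenvalues and eigenfunctions
```

The eigenvalues returned here are the $\lambda$ that enter the compactness term of Eq. (10).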
This regularization is based on the principle that eigenvalues' magnitudes are indicative of the frequencies of their corresponding eigenfunctions, and eigenfunctions with similar frequencies are more likely to correspond, as suggested by Huang et al. [14].", + "bbox": [ + 212, + 654, + 787, + 760 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Next, we introduce a bijectivity constraint to the functional map:", + "bbox": [ + 238, + 761, + 712, + 776 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\mathbf {C} ^ {M \rightarrow N} \cdot \mathbf {C} ^ {N \rightarrow M} = \mathbf {I}. \tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 431, + 782, + 785, + 800 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This can be interpreted as a special instance of the cycle-consistency regularization for image collections as in Wang et al. [51] when the number of images is two.", + "bbox": [ + 212, + 809, + 785, + 839 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 300, + 114, + 732, + 130 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 785, + 126 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To implement this constraint, in line with Wang et al. [51], we define two sets of optimizable latent bases: $\mathbf{Z}^M = \{Z_i^M\}$ and $\mathbf{Z}^N = \{Z_i^N\}$ , corresponding to the function spaces $\mathcal{F}(M,\mathbb{R})$ and $\mathcal{F}(N,\mathbb{R})$ of both source and target images. The consistency loss is then defined as:", + "bbox": [ + 212, + 146, + 787, + 205 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{L}_{\text{cons}} = \left\| \mathbf{C}\mathbf{Z}^{M} - \mathbf{Z}^{N} \right\|_{2}. \tag{12}\n$$\n", + "text_format": "latex", + "bbox": [ + 419, + 220, + 787, + 244 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To prevent degenerate solutions where $\mathbf{Z}^M$ and $\mathbf{Z}^N$ could be trivially zero, we introduce an additional constraint requiring both $\mathbf{Z}^M$ and $\mathbf{Z}^N$ to satisfy $\mathbf{Z}^t\mathbf{Z} = \mathbf{I}$ . Integrating all these components, our final optimization objective is:", + "bbox": [ + 214, + 256, + 789, + 305 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\begin{array}{l} \operatorname{argmin}_{\mathbf{C}} \; \mathcal{L}_{\text{feat}} + \lambda_{\text{diag}} \mathcal{L}_{\text{diag}} + \lambda_{\text{cons}} \mathcal{L}_{\text{cons}}, \tag{13} \\ \text{s.t.} \quad (\mathbf{Z}^{M})^{t} \mathbf{Z}^{M} = \mathbf{I}, \; (\mathbf{Z}^{N})^{t} \mathbf{Z}^{N} = \mathbf{I}. \\ \end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 372, + 319, + 785, + 352 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Optimization We jointly optimize the weights of the image feature refinement network $g_{\mathcal{R}}$ , the functional map $\mathbf{C}$ , and the latent basis $\mathbf{Z}^{M}$ and $\mathbf{Z}^{N}$ for the input image pair. 
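Before writing the objective out in full, a compact sketch of how the individual terms can be combined (loss weights, tensor shapes, and the relaxation of the orthogonality constraints into penalties are assumptions; Eq. (14) below gives the exact form used):

```python
import torch

def consensus_objective(C, F_m, F_n, evals_m, evals_n, Z_m, Z_n,
                        lam_diag=1.0, lam_cons=1.0, lam_reg=1.0):
    # C        : (k_n, k_m) functional map, optimised as a plain parameter tensor
    # F_m, F_n : (k_m, d), (k_n, d) refined descriptor coefficients (Eq. (8))
    # evals_*  : (k_m,), (k_n,) Laplacian eigenvalues of the two images
    # Z_m, Z_n : (k_m, r), (k_n, r) latent bases used by the consistency term
    l_feat = torch.linalg.norm(C @ F_m - F_n)                      # Eq. (9)
    gap = (evals_m[None, :] - evals_n[:, None]).abs()              # entry (j, i) = |lambda_i^M - lambda_j^N|
    l_diag = ((gap * C) ** 2).sum()                                # Eq. (10)
    l_cons = torch.linalg.norm(C @ Z_m - Z_n)                      # Eq. (12)
    l_orth = (torch.linalg.norm(Z_m.T @ Z_m - torch.eye(Z_m.shape[1]))
              + torch.linalg.norm(Z_n.T @ Z_n - torch.eye(Z_n.shape[1])))
    return l_feat + lam_diag * l_diag + lam_cons * l_cons + lam_reg * l_orth
```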
The full loss function is formulated as:", + "bbox": [ + 214, + 373, + 787, + 419 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathcal {L} = \\mathcal {L} _ {\\mathrm {f e a t}} + \\lambda_ {\\mathrm {d i a g}} \\mathcal {L} _ {\\mathrm {d i a g}} + \\lambda_ {\\mathrm {c o n s}} \\mathcal {L} _ {\\mathrm {c o n s}} \\\\ + \\lambda_ {Z} \\left(\\operatorname {t r} \\left((\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {M}\\right) + \\operatorname {t r} \\left((\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {N}\\right)\\right) \\tag {14} \\\\ + \\lambda_ {\\mathrm {r e g}} \\left(\\left\\| (\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {Z} ^ {M} - \\mathbf {I} \\right\\| _ {2} + \\left\\| (\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {Z} ^ {N} - \\mathbf {I} \\right\\| _ {2}\\right), \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 328, + 434, + 785, + 500 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "where $\\mathbf{W} = \\mathbf{I} + \\mathbf{C}^t\\mathbf{C}$ . The terms $\\operatorname{tr}(\\mathbf{Z}^t\\mathbf{W}\\mathbf{Z})$ are variations of Eq. (13) with $\\mathbf{Z}^M$ and $\\mathbf{Z}^N$ as the primary variables rather than $\\mathbf{C}$ , as discussed in Wang et al. [51].", + "bbox": [ + 212, + 512, + 787, + 545 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 214, + 574, + 375, + 590 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Dataset We evaluate our method primarily on the TSS dataset [44], comprising 400 image pairs from three subsets: FG3DCAR [20], JODS [38], and PASCAL [12], all of which include dense correspondence annotations. Additionally, we perform evaluations on the SPair-71k dataset [24], which features sparse annotations of keypoint correspondences across 18 categories. For this dataset, we sample 20 pairs from each category for our analysis, following the prior work [55].", + "bbox": [ + 212, + 606, + 787, + 699 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Baselines Our comparison primarily focuses on emergent correspondences from various visual models and feature fusion techniques. We utilize feature extraction networks such as DINOv1 (ViT-S/8), DINOv2 (ViT-S/14 and ViT-B/14), and Stable Diffusion, which are prevalent and extensively researched in a wide range of visual perception tasks. In terms of feature fusion, we benchmark against the feature concatenation approach proposed by Zhang et al. [55], testing different combinations of features. Additionally, we list other methods designed for image correspondence tasks that involve stronger supervision or task-specific designs.", + "bbox": [ + 212, + 719, + 787, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/8c82d250d84a0f9d2f41d837e8258024344eb49759fdce5a473693427cf7de2b.jpg", + "table_caption": [ + "Table 1: Results for dense correspondences on TSS [44]. The baselines are classified into three categories based on their training setups: supervised, unsupervised with task-specific designs, and zero-shot methods without task- or dataset-specific designs. * indicates backbones fine-tuned on this dataset." + ], + "table_footnote": [], + "table_body": "
SettingMethodFG3DCarJODSPascalAvg.
SupervisedSCOT [23]95.381.357.778.1
CATs* [7]92.178.964.278.4
PWarpC-CATs* [49]95.585.085.588.7
Unsupervised task-specificCNNGeo [33]90.176.456.374.4
PARN [15]89.575.971.278.8
GLU-Net [46]93.273.371.179.2
Semantic-GLU-Net [48]95.382.278.285.2
Unsupervised zero-shotDINOv1-ViT-S/8 [1]68.744.736.752.7
DINOv2-ViT-B81.268.451.569.4
Stable Diffusion (SD)92.162.648.472.5
Concat. DINOv2 + SD [55]92.973.859.678.7
FMap DINOv2(basis) + DINOv2(loss)83.569.252.771.0
FMap SD(basis) + SD(loss)80.063.451.567.8
FMap DINOv2(basis) + SD(loss) (ours)84.870.453.572.2
FMap DINOv2(loss) + SD(basis) (ours)93.174.059.978.9
", + "bbox": [ + 215, + 204, + 782, + 431 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Evaluation metrics For both dense and sparse correspondences, we adopt the Percentage of Correct Keypoints (PCK) metric [53] with a threshold of $\\kappa \\cdot \\max(h, w)$ , where $\\kappa$ is a positive integer, and $(h, w)$ represents the image dimensions in the TSS dataset or the instance bounding-box dimensions in the SPair-71k dataset. Additionally, for dense correspondences on the TSS dataset, we assess spatial coherence using a smoothness metric [55]. This involves extracting a semantic flow (i.e., a 2D motion vector field from the source to the target image) and computing its first-order difference. In the case of sparse correspondences on the Spair-71k dataset, we further calculate the Mean Squared Error (MSE) on the keypoints to quantify mapping distortions.", + "bbox": [ + 212, + 450, + 789, + 602 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Dense Correspondence", + "text_level": 1, + "bbox": [ + 214, + 628, + 450, + 643 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 1 presents the results of dense correspondences on the TSS dataset. Following [55], we majorly compare to other zero-shot unsupervised methods, among which we achieve the best performances. Specifically, we outperform Zhang et al. [55] with the same pair of features by utilizing the features in a more structure-aware manner. We also list as references the performances of fully supervised methods and unsupervised methods with task-specific training.", + "bbox": [ + 212, + 657, + 787, + 747 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We also evaluate an ablated version of our framework by computing the basis functions and losses using the same set of features (the third and fourth rows from the last), which give significantly worse results compared to our full model. On the other side, it can still give better results than directly using one feature with nearest neighbor queries (for example, FMap DINOv2(basis) + DINOv2(loss) versus DINOv2-ViT-B/14). This shows that structure-awareness", + "bbox": [ + 212, + 750, + 787, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 300, + 114, + 730, + 130 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/a119e8696e30fcd8daddfaa65976a80350bc418d0d8c63c586ebc3e00ccb69e1.jpg", + "image_caption": [ + "Fig. 3: Dense correspondences on SPair-71k [24] Image Pairs. Each example displays pixel-wise mappings from source to target images in rainbow colors (second column for source coordinates, fourth and fifth columns for computed target coordinates) and color transfers (last two columns). Specifically, we demonstrate the challenging examples including significant viewpoint changes (first and second row), shape variations (first and third row), and occlusions (third row). Our framework achieves more consistent mappings with its global structure-awareness." + ], + "image_footnote": [], + "bbox": [ + 246, + 147, + 750, + 378 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "can naturally lead to better correspondences even without introducing any additional information.", + "bbox": [ + 212, + 491, + 784, + 518 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Fig. 
3 shows the qualitative results of dense correspondences computed with the DINOv2-ViT-B/14 and Stable Diffusion networks. We compare side-by-side the feature fusion results using pre-normalized concatenation [55] and our method. In all these examples, our framework provides smoother and more consistent mappings with its global structure-awareness. Specifically, we highlight two challenging examples: the airplanes in the second row with large camera-view changes, and the birds in the third row with large shape variations as well as occlusions. We also visualize the matrices for the linear functional maps in Fig. 6.", + "bbox": [ + 212, + 521, + 787, + 642 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Feature fusion with different networks Tab. 2 presents the accuracy and smoothness of correspondences derived from features of various network backbones. When compared to using individual features or their concatenation [55], our functional-map-based framework demonstrates superior results in both metrics across all tested configurations.", + "bbox": [ + 212, + 657, + 787, + 734 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Feature fusion with different layers Tab. 3 presents the results of fusing features from different layers within the same network. Our experiments involve layers 9 and 11 of DINOv2-ViT-S/14 and DINOv2-ViT-B/14. In all tested setups, our framework demonstrates superior performance compared to baseline methods.", + "bbox": [ + 212, + 750, + 784, + 809 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Additionally, a comparative analysis was performed on the choice of layers for DINOv2-ViT-B/14, specifically by fusing the features of layer 11 with those of", + "bbox": [ + 212, + 809, + 785, + 839 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/313dc3431d594e05cb604b0fed688d2864f7d05633af8d8da46b1ec93d31a521.jpg", + "table_caption": [ + "Table 2: Fusing the features from different networks." + ], + "table_footnote": [], + "table_body": "
MethodPCK0.05↑PCK0.1↑EPE↓Smth.↓
DINOv1-ViT-S/8raw53.976.846.112.90
DINOv2-ViT-S/14raw69.685.030.87.98
DINOv2-ViT-B/14raw69.487.830.910.46
Stable Diffusion (SD)raw72.583.837.56.41
DINOv1-ViT-S/8Concat. [55]69.988.131.010.33
+ DINOv2-ViT-B/14FMap (ours)72.290.327.77.95
DINOv2-ViT-S/14 + SDConcat. [55]78.189.927.56.58
FMap (ours)71.590.026.36.47
DINOv2-ViT-B/14 + SDConcat. [55]78.790.726.46.81
FMap (ours)78.991.126.15.74
", + "bbox": [ + 235, + 162, + 767, + 349 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/aadd44f57c3b3437226295df1d8765a4b0285b63a5bda6ce51410c51fb59851e.jpg", + "table_caption": [ + "Table 3: Fusing the features from different layers of the same network." + ], + "table_footnote": [], + "table_body": "
BackboneMethodPCK0.05↑PCK0.1↑EPE↓Smth.↓
DINOv2-ViT-S/14Layer967.284.836.59.64
Layer1170.888.131.09.25
Concat. [55]70.588.131.09.25
FMap (ours)70.889.129.16.60
DINOv2-ViT-B/14Layer957.285.434.510.66
Layer1169.487.830.910.46
Concat. [55]70.087.930.910.24
FMap (ours)70.689.825.98.27
", + "bbox": [ + 256, + 382, + 746, + 527 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "layers 8, 9, 10, and layer 11 tokens. The results, as depicted in Tab. 4, indicate that our functional map approach consistently surpasses both raw and concatenated features across all layer combinations. We also observed that greater feature distinction occurs when the two layers are more distant from each other. Our framework effectively leverages this distinction, extracting better correspondences by integrating their information. As shown in Tab. 4, optimal performance in EPE is achieved using features from layers 8 and 11.", + "bbox": [ + 212, + 546, + 787, + 652 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.2 More Results", + "text_level": 1, + "bbox": [ + 215, + 676, + 375, + 690 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Keypoint correspondence Tab. 5 presents the results for sparse keypoint correspondences on SPair-71k [24]. Compared to feature concatenation [55], our method demonstrates comparable or higher PCK (with different thresholds) and exhibits lower MSE errors. Note that the selected keypoints are extremely sparse on the images, which could potentially introduce sampling biases compared to evaluations of dense correspondences.", + "bbox": [ + 212, + 703, + 787, + 794 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Fig. 4 showcases qualitative keypoint matching results. Our method is compared side-by-side with results obtained using feature concatenation, where our approach consistently demonstrates robustness in these challenging scenarios", + "bbox": [ + 212, + 795, + 787, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 300, + 114, + 730, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/10550a948f61db57b91303769278e6fbad298c41ec4b5e040f5cd0bc001d2cea.jpg", + "table_caption": [ + "Table 4: Results on different layer choices for feature fusion. This experiment involves DINOv2-ViT-B/14, wherein its layer 11 features (values) are fused with layers 8, 9, 10, and layer 11 tokens, respectively." + ], + "table_footnote": [ + "(a) Image pairs with similar geometric properties. (a) The baseline method incorrectly maps (a) the right ear of the horse to the left ear, (b) the right ear of the cow to the left ear, and (c) a point corresponding to the front feet of the horse to the hind feet." + ], + "table_body": "
MethodLayer 8Layer 9Layer 10Layer 11 token
EPE↓Smth.↓EPE↓Smth.↓EPE↓Smth.↓EPE↓Smth.↓
Raw [1]59.116.1056.816.0656.815.4053.313.20
Concat. [55]53.514.8055.413.9056.716.7055.316.10
FMap (ours)41.811.9545.29.5241.912.4345.310.65
", + "bbox": [ + 236, + 189, + 761, + 377 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/9ce91c64156e4470f510c87c8e5f57e33a827907c0a7d23026015ea296047627.jpg", + "image_caption": [ + "Fig. 4: Sparse keypoint correspondences on SPair-71k [24] image pairs. Correct matches are connected with blue lines and incorrect matches with red lines." + ], + "image_footnote": [ + "(b) Image pairs with significant differences in shapes and viewpoints. The baseline method incorrectly maps (a) all points on the pot to the plant, (b) a point on the child's ear to the woman's cheek, and (c) a point at the seat corner to another chair's armrest." + ], + "bbox": [ + 250, + 415, + 759, + 526 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "and effectively captures the geometric properties of the features. Fig. 4a further illustrates the effectiveness of our method in scenarios where the target image contains many similar points, like the legs of a horse. In contrast, the baseline struggles to capture the global structure, often leading to mappings of similar but incorrect points.", + "bbox": [ + 212, + 613, + 787, + 688 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Affordance transfer We further showcase an application of our method in transferring tool affordances between images from the RGB-D Part Affordance Dataset [25]. This dataset features different types of affordances annotated on each object, represented as heat maps. Fig. 5 illustrates our results in transferring these affordance heat maps. Such distributional functions across pixels pose a challenge to raw pixel-wise maps due to the potential distortion of their overall structure during interpolation. However, these functions can be naturally modeled with functional maps, as our approach demonstrates.", + "bbox": [ + 212, + 719, + 787, + 839 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/0f574568f5ca9b5d118bf0340380701c3a17daa676825e38452a370d923592fd.jpg", + "table_caption": [ + "Table 5: Results for sparse keypoint correspondences on SPair-7k [24]. All results in this experiment are with the DINOv2-ViT-B/14 backbone." + ], + "table_footnote": [], + "table_body": "
MethodPCK@0.1↑PCK@0.2↑MSE↓
DINOv252.368.0105.0
Stable Diffusion51.264.1120.5
Concat. [55]57.272.297.2
FMap (ours)55.372.688.0
", + "bbox": [ + 334, + 176, + 669, + 262 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/f7405dbb95a23c27cd1304dc54112e4f0bb28731c4800b11ebf3c6283602f64f.jpg", + "image_caption": [ + "Fig. 5: Transferring tool affordances represented as heat maps. We treat affordance heat maps as functions defined on the source and the target image. By optimizing the functional map between the source and the target, we manage to transfer the function after applying the functional map to it directly following Eq. (1). We employ features from DINOV2-ViT-B/14 and Stable Diffusion to compute the functional maps in this experiment." + ], + "image_footnote": [], + "bbox": [ + 243, + 272, + 356, + 426 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/4c7dc59a23d5c3244e219f611a09a5fcc497b29752c3fd2440d5803b07210426.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 374, + 272, + 488, + 426 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/e96b43e66ab5f38b47ef38162204ed0b775c87c9a44e4f8c07f8e3d5a4a775ef.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 508, + 272, + 620, + 426 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/1527ee56ddff30182621917b926cd5b735e57f5d3c6f215147d24eb439b75f97.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 640, + 272, + 754, + 426 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Ablation Studies In addition to the feature ablations shown in Tab. 1 and discussed in Sec. 4.1, we also present an ablation on the regularization terms for the functional map optimization. Tab. 6 shows the results optimized with different regularization losses. The diagonality and consistency regularizations greatly improve the accuracy of the mapping. Fig. 6 visualizes the functional map matrices with and without the regularizations. The near-diagonal mappings are preferred because they match the function basis with similar frequencies.", + "bbox": [ + 212, + 536, + 787, + 643 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "5 Discussions", + "text_level": 1, + "bbox": [ + 215, + 671, + 362, + 686 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "As shown in Sec. 4.1, our functional map framework effectively integrates features from different network layers. This integration, particularly from just two distinct layers, outperforms the conventional approach of using same-layer features or naively concatenating different features. This finding opens up promising avenues for enhancing the generalization capabilities of large-scale vision models without additional fine-tuning.", + "bbox": [ + 212, + 702, + 787, + 792 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Moreover, the interpretability of learned features in the functional map framework is crucial, particularly in domains like medical imaging or autonomous systems. Our approach, as shown in Fig. 3, enables impressive image editing", + "bbox": [ + 212, + 794, + 787, + 840 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 300, + 114, + 732, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/a99b7b97f8593675eee2b24c2d04422142738a02bb82bf246f76123d0cc59b55.jpg", + "image_caption": [ + "Fig. 
6: Functional map matrices with and without regularization losses. Enforcing the compactness loss (Eq. (10)) centers the non-zero matrix entries around the diagonals to match the function basis of similar frequencies." + ], + "image_footnote": [], + "bbox": [ + 248, + 148, + 754, + 308 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/389e9131e63c93c33c7aaadb8317ed448d7e5e8e5f6411c173d1eeb0a7e8b122.jpg", + "table_caption": [ + "Table 6: Ablation on the loss terms. All results in the experiment are with DINOv2-ViT-B/14 and Stable Diffusion on the SPair-71k dataset." + ], + "table_footnote": [], + "table_body": "
LossPCK@0.1↑PCK@0.2↑MSE↓
Lfeat (no regularization)44.665.595.3
Lfeat + Ldiag52.969.597.9
Lfeat + Lcons52.869.7100.3
Lfeat + Ldiag + Lcons (full loss)55.372.688.0
", + "bbox": [ + 284, + 391, + 715, + 477 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "outcomes without generative models. This leads to the intriguing possibility of combining our method with generative models to enhance image quality.", + "bbox": [ + 212, + 491, + 787, + 523 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "6 Conclusions", + "text_level": 1, + "bbox": [ + 214, + 536, + 370, + 551 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "The emergence of correspondences from large-scale vision models not explicitly trained for this task is noteworthy. While nearest-neighbor analyses provide a direct exploration, they overlook the structure inherent not only in the image contents but also in the model features. Our work leverages this embedded structure via functional maps, aiming to generate point-wise accurate and globally coherent correspondences. Despite its simplicity, it significantly enhances the matching results with zero-shot inference on image pairs without additional supervision or task-specific training. While the core concepts of our approach are rooted in 3D shape correspondence literature from graphics [30], our implementation using deep feature-based functional maps bridges this area with cutting-edge vision research.", + "bbox": [ + 212, + 555, + 787, + 720 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Limitations and future work The structure-awareness of functional maps relies on the manifold assumption of its underlying domain, making our current framework more suitable for object-centric images than complex scenes with diverse compositionalities. Examples of the latter include matching a horse to a herd of horses or matching two indoor scenes. However, this issue might be addressed using additional image segmentation techniques that decompose the image into objects and parts, or by exploring matches between quotient spaces.", + "bbox": [ + 212, + 733, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 215, + 143, + 321, + 159 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Amir, S., Gandelsman, Y., Bagon, S., Dekel, T.: Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814 2(3), 4 (2021)", + "2. Attaiki, S., Pai, G., Ovsjanikov, M.: Dpfm: Deep partial functional maps (2021)", + "3. Aubry, M., Schlickewei, U., Cremers, D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In: ICCV Workshops (2011)", + "4. Burghard, O., Dieckmann, A., Klein, R.: Embedding shapes with green's functions for global shape matching. Computers & Graphics 68, 1-10 (2017)", + "5. Cao, D., Bernard, F.: Unsupervised deep multi-shape matching. In: ECCV (2022)", + "6. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: ICCV (2021)", + "7. Cho, S., Hong, S., Jeon, S., Lee, Y., Sohn, K., Kim, S.: Cats: Cost aggregation transformers for visual correspondence. Advances in Neural Information Processing Systems 34, 9011-9023 (2021)", + "8. Donati, N., Corman, E., Ovsjanikov, M.: Deep orientation-aware functional maps: Tackling symmetry issues in shape matching. 
In: CVPR (2022)", + "9. Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., Sattler, T.: D2-net: A trainable cnn for joint description and detection of local features. In: CVPR (2019)", + "10. Gupta, K., Jampani, V., Esteves, C., Shrivastava, A., Makadia, A., Snavely, N., Kar, A.: ASIC: Aligning sparse in-the-wild image collections. arXiv preprint arXiv:2303.16201 (2023)", + "1. Halimi, O., Litany, O., Rodola, E., Bronstein, A.M., Kimmel, R.: Unsupervised learning of dense shape correspondence. In: CVPR (2019)", + "2. Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., Malik, J.: Semantic contours from inverse detectors. In: ICCV (2011)", + "3. Hedlin, E., Sharma, G., Mahajan, S., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: Unsupervised semantic correspondence using stable diffusion. arXiv preprint arXiv:2305.15581 (2023)", + "4. Huang, Q., Wang, F., Guibas, L.: Functional map networks for analyzing and exploring large shape collections. ACM TOG 33(4), 1-11 (2014)", + "5. Jeon, S., Kim, S., Min, D., Sohn, K.: Parn: Pyramidal affine regression networks for dense semantic correspondence. In: ECCV (2018)", + "6. Kim, S., Lin, S., Jeon, S.R., Min, D., Sohn, K.: Recurrent transformer networks for semantic correspondence (2018)", + "7. Kovnatsky, A., Bronstein, M.M., Bronstein, A.M., Glashoff, K., Kimmel, R.: Coupled quasi-harmonic bases. In: Comput. Graph. Forum (2013)", + "8. Learned-Miller, E.G.: Data driven image models through continuous joint alignment IEEE TPAMI 28(2), 236-250 (2005)", + "9. Li, L., Donati, N., Ovsjanikov, M.: Learning multi-resolution functional maps with spectral attention for robust shape matching (2022)", + "20. Lin, Y.L., Morariu, V.I., Hsu, W., Davis, L.S.: Jointly optimizing 3d model fitting and fine-grained classification. In: ECCV (2014)", + "21. Litany, O., Remez, T., Rodola, E., Bronstein, A., Bronstein, M.: Deep functional maps: Structured prediction for dense shape correspondence. In: ICCV (2017)", + "22. Liu, C., Yuen, J., Torralba, A.: Sift flow: Dense correspondence across scenes and its applications. IEEE TPAMI 33(5), 978-994 (2010)", + "23. Liu, Y., Zhu, L., Yamada, M., Yang, Y.: Semantic correspondence as an optimal transport problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4463-4472 (2020)" + ], + "bbox": [ + 225, + 172, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 302, + 114, + 730, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "24. Min, J., Lee, J., Ponce, J., Cho, M.: Spair-71k: A large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543 (2019)", + "25. Myers, A., Teo, C.L., Fermüller, C., Aloimonos, Y.: Affordance detection of tool parts from geometric features (2015)", + "26. Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. In: Comput. Graph. Forum (2017)", + "27. Ofri-Amar, D., Geyer, M., Kasten, Y., Dekel, T.: Neural congealing: Aligning images to a joint semantic atlas. In: CVPR (2023)", + "28. Ono, Y., Trulls, E., Fua, P., Yi, K.M.: Lf-net: Learning local features from images (2018)", + "29. 
Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023)", + "30. Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM TOG 31(4), 1-11 (2012)", + "31. Peebles, W., Zhu, J.Y., Zhang, R., Torralba, A., Efros, A.A., Shechtman, E.: Gan-supervised dense visual alignment. In: CVPR (2022)", + "32. Revaud, J., De Souza, C., Humenberger, M., Weinzaepfel, P.: R2d2: Reliable and repeatable detector and descriptor (2019)", + "33. Rocco, I., Arandjelovic, R., Sivic, J.: Convolutional neural network architecture for geometric matching. In: CVPR (2017)", + "34. Rocco, I., Arandjelovic, R., Sivic, J.: End-to-end weakly-supervised semantic alignment. In: CVPR (2018)", + "35. Rodola, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Comput. Graph. Forum (2017)", + "36. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022)", + "37. Roufosse, J.M., Sharma, A., Ovsjanikov, M.: Unsupervised deep learning for structured shape matching. In: ICCV (2019)", + "38. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: CVPR (2013)", + "39. Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: Superglue: Learning feature matching with graph neural networks. In: CVPR (2020)", + "40. Seo, P.H., Lee, J., Jung, D., Han, B., Cho, M.: Attentive semantic alignment with offset-aware correlation kernels. In: ECCV (2018)", + "41. Sharp, N., Attaiki, S., Crane, K., Ovsjanikov, M.: Diffusionnet: Discretization agnostic learning on surfaces. ACM TOG 41(3), 1-16 (2022)", + "42. Sun, J., Ovsjanikov, M., Guibas, L.: A concise and provably informative multi-scale signature based on heat diffusion. In: Comput. Graph. Forum (2009)", + "43. Tang, L., Jia, M., Wang, Q., Phoo, C.P., Hariharan, B.: Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881 (2023)", + "44. Taniai, T., Sinha, S.N., Sato, Y.: Joint recovery of dense correspondence and cosegmentation in two images. In: CVPR (2016)", + "45. Truong, P., Danelljan, M., Gool, L.V., Timofte, R.: Gocor: Bringing globally optimized correspondence volumes into your neural network (2020)", + "46. Truong, P., Danelljan, M., Timofte, R.: Glu-net: Global-local universal network for dense flow and correspondences. In: CVPR (2020)", + "47. Truong, P., Danelljan, M., Van Gool, L., Timofte, R.: Learning accurate dense correspondences and when to trust them. In: CVPR (2021)" + ], + "bbox": [ + 215, + 147, + 784, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Cheng et al.", + "bbox": [ + 271, + 114, + 354, + 128 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "48. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Warp consistency for unsupervised learning of dense correspondences. In: ICCV (2021)", + "49. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Probabilistic warp consistency for weakly-supervised semantic correspondences. In: CVPR (2022)", + "50. 
Tyszkiiewicz, M., Fua, P., Trulls, E.: Disk: Learning local features with policy gradient (2020)", + "51. Wang, F., Huang, Q., Guibas, L.J.: Image co-segmentation via consistent functional maps. In: ICCV (2013)", + "52. Wang, F., Huang, Q., Ovsjanikov, M., Guibas, L.J.: Unsupervised multi-class joint image segmentation. In: CVPR (2014)", + "53. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. IEEE TPAMI 35(12), 2878-2890 (2012)", + "54. Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: Lift: Learned invariant feature transform In: ECCV (2016)", + "55. Zhang, J., Herrmann, C., Hur, J., Cabrera, L.P., Jampani, V., Sun, D., Yang, M.H.: A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. arXiv preprint arXiv:2305.15347 (2023)" + ], + "bbox": [ + 215, + 146, + 784, + 383 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-Shot Image Feature Consensus with Deep Functional Maps", + "bbox": [ + 302, + 114, + 730, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_model.json b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_model.json new file mode 100644 index 0000000000000000000000000000000000000000..668cb384869031abaffab9ff3a3bbaee1efecc4d --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_model.json @@ -0,0 +1,2401 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.296, + 0.142, + 0.709, + 0.187 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "text", + "bbox": [ + 0.316, + 0.214, + 0.687, + 0.244 + ], + "angle": 0, + "content": "Xinle Cheng\\(^{1}\\), Congyue Deng\\(^{2}\\), Adam W. Harley\\(^{2}\\), Yixin Zhu\\(^{1,3}\\), Leonidas Guibas\\(^{2}\\)" + }, + { + "type": "text", + "bbox": [ + 0.277, + 0.251, + 0.726, + 0.265 + ], + "angle": 0, + "content": "congyue@stanford.edu, yixin.zhu@pku.edu.cn, guibas@stanford.edu" + }, + { + "type": "text", + "bbox": [ + 0.356, + 0.273, + 0.646, + 0.288 + ], + "angle": 0, + "content": "\\(^{1}\\) Institute for AI, Peking University, China" + }, + { + "type": "text", + "bbox": [ + 0.296, + 0.288, + 0.707, + 0.302 + ], + "angle": 0, + "content": "\\(^{2}\\) Department of Computer Science, Stanford University, USA" + }, + { + "type": "text", + "bbox": [ + 0.307, + 0.302, + 0.696, + 0.316 + ], + "angle": 0, + "content": "\\(^{3}\\) PKU-WUHAN Institute for Artificial Intelligence, China" + }, + { + "type": "image", + "bbox": [ + 0.224, + 0.337, + 0.785, + 0.439 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.443, + 0.788, + 0.527 + ], + "angle": 0, + "content": "Fig. 1: Overview. Left: Given two sets of features, \\( E^{M}, E^{N} \\), and \\( F^{M}, F^{N} \\), we compute the Laplacian eigenfunction basis with \\( E^{M}, E^{N} \\), and apply regularizations to the functional map optimization using \\( F^{M}, F^{N} \\). This method optimizes a mapping in the spectral domain derived from one feature set to achieve a consensus with the other set. Right: With a better understanding of the global image structure, our method produces smoother and more accurate correspondences in a zero-shot manner." 
+ }, + { + "type": "text", + "bbox": [ + 0.261, + 0.555, + 0.744, + 0.791 + ], + "angle": 0, + "content": "Abstract. Correspondences emerge from large-scale vision models trained for generative and discriminative tasks. This has been revealed and benchmarked by computing correspondence maps between pairs of images, using nearest neighbors on the feature grids. Existing work has attempted to improve the quality of these correspondence maps by carefully mixing features from different sources, such as by combining the features of different layers or networks. We point out that a better correspondence strategy is available, which directly imposes structure on the correspondence field: the functional map. Wielding this simple mathematical tool, we lift the correspondence problem from the pixel space to the function space and directly optimize for mappings that are globally coherent. We demonstrate that our technique yields correspondences that are not only smoother but also more accurate, with the possibility of better reflecting the knowledge embedded in the large-scale vision models that we are studying. Our approach sets a new state-of-the-art on various dense correspondence tasks. We also demonstrate our effectiveness in keypoint correspondence and affordance map transfer." + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.806, + 0.744, + 0.834 + ], + "angle": 0, + "content": "Keywords: Functional map \\(\\cdot\\) Zero shot image matching \\(\\cdot\\) Dense correspondence \\(\\cdot\\) Emergent feature property" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.375, + 0.161 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.182, + 0.788, + 0.333 + ], + "angle": 0, + "content": "Identifying image correspondence is a crucial task in mid-level computer vision. Recent advancements in large-scale vision models, trained for either generative [36] or discriminative [6,29] tasks, possess emerged capabilities for dense correspondences [1,13,43,55]. This learning is primarily facilitated by computing nearest neighbor matches between image patches with their feature similarities. Notably, the correspondences induced by these models can achieve comparable or even better performances compared to the methods explicitly designed for this purpose. However, a notable limitation arises: these models often struggle to retain the global structure of the correspondences. This can be attributed to the distortions and discontinuities in the nearest-neighbor search process." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.336, + 0.788, + 0.429 + ], + "angle": 0, + "content": "While contemporary methods [55] have attempted to mitigate this problem by integrating features from different layers and networks, this approach only indirectly confronts the fundamental issue—the lack of structure in the correspondence maps. Fundamentally, point-wise correspondences are inherently susceptible to noise. 
Therefore, imposing a global structure on the correspondence maps is crucial for attaining high-quality correspondences without supervision" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.43, + 0.788, + 0.567 + ], + "angle": 0, + "content": "In this work, we leverage functional maps [30] to tackle the above challenge. Originating from computer graphics, functional maps present a robust alternative to point-to-point correspondences [4,17,26]. They represent dense correspondences as linear mappings between function spaces, usually defined on 3D shapes. The key aspect of functional maps is their ability to capture deformations that align one manifold with another. Owing to their low-dimensional yet expressive nature, functional maps effectively incorporate global structures into the matching process. This approach provides a compelling solution to the challenges inherent in traditional point-wise correspondence methods." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.569, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Specifically, we improve zero-shot feature-based correspondence methods by transitioning from the pixel space to the function space, thereby enhancing the method's coherence and effectiveness. Traditional functional maps on manifolds rely on two geometric inputs: the Laplacian operator, which is crucial for computing the eigenfunction basis, and a local geometric descriptor, for the application of regularization losses. We adapt these components to the realm of images by employing visual features extracted from two distinct large vision models. Our approach diverges from traditional methods, which typically identify corresponding pixels between images through nearest neighbor search. Instead, we concentrate on optimizing a linear function map established on the eigenfunction basis defined by the first feature map, with the second feature map serving as a geometric regularizer. This process, notably unsupervised, marks a significant difference from conventional methods. Further augmenting our method's robustness, especially against occlusions, is the incorporation of a transformer module for tackling partial shape matching, as detailed in partial functional maps et al. [2]. Such integration of functional map concepts with feature-based methods in image analysis represents a cohesive and logical advancement in tackling the challenges of correspondence tasks." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.303, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.252 + ], + "angle": 0, + "content": "We evaluate our framework on dense correspondence across various base networks, demonstrating consistent enhancements in matching accuracy and other functional properties like smoothness compared to the traditional nearest neighbor search. We highlight the qualitative results of our approach on the challenging cases with significant shape variations, viewpoint changes, and occlusions. We further demonstrate our effectiveness on keypoint correspondences and object affordance map transfer, showcasing its versatility in diverse scenarios." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.253, + 0.788, + 0.419 + ], + "angle": 0, + "content": "In summary, our primary contribution is a novel zero-shot framework designed to derive correspondence maps from pre-trained features. Central to our approach is the concept of optimizing a functional map that establishes a relationship between the entire image contents, moving away from the conventional method of direct pixel-to-pixel correspondence searches. Our experimental results, evaluated on various standard datasets, demonstrate that our method produces correspondences that are not only smoother and more accurate but also exhibit greater global coherence compared to previous efforts. We believe that our techniques effectively uncover the underlying correspondence capabilities of the large-scale backbone networks. We hope that our work will serve as an inspiration for future research in general-purpose object correspondence." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.442, + 0.388, + 0.457 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.463, + 0.788, + 0.659 + ], + "angle": 0, + "content": "Emergent correspondence from vision models Deep image networks have demonstrated remarkable robustness to geometric transformations, such as rotation, scaling, and perspective changes, leading to the emergence of dense correspondences [9, 28, 32, 39, 50, 54]. These transformations, predominantly rigid in nature, have been a focal point in previous studies. The research by Amir et al. [1] revealed that features extracted from DINOv1 [6] not only act as effective dense visual descriptors but also naturally induce semantic correspondences without direct supervision. This capability is further amplified in its successor, DINOv2 [29]. Beyond discriminative models, recent explorations have shown that generative models, such as diffusion models, also unveil emergent dense correspondences within their latent features [13, 43, 55]. Intriguingly, Zhang et al. [55] discovered that combining features from DINOv2 [29] with those from Stable Diffusion [36] significantly enhances correspondence quality." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.66, + 0.788, + 0.721 + ], + "angle": 0, + "content": "Our study highlights a crucial gap: existing methods lack structural awareness when computing correspondences by nearest-neighbor queries of per-pixel features. Here, we propose representing the correspondence map within a functional space, offering a novel approach to this challenge." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Semantic correspondence Semantic correspondence [22] seeks to establish pixelwise matches across objects differing in poses, appearances, deformations, or even categories. Traditional approaches generally involve three stages [49]: feature extraction, cost volume construction, and displacement field [45-48] or parameterized transformation regression [15, 16, 33, 34, 40]. However, their reliance on smooth displacement fields or locally affine transformations hinders their ability to model complex object deformations or shape variations effectively." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.269 + ], + "angle": 0, + "content": "Recent developments, inspired by the classical congealing method [18], focus on aligning multiple objects within the same class using learning techniques like DINOv1 features [10, 27] or GAN-synthesized data [31]. Despite their strong assumptions about data rigidity, these studies suggest that leveraging features and information from diverse tasks can enhance the quality of dense image correspondences. In our work, we further demonstrate that a structure-aware fusion of features learned from multiple tasks can significantly improve the quality of correspondence maps." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.282, + 0.79, + 0.463 + ], + "angle": 0, + "content": "Functional maps Initially introduced by Ovsjanikov et al. [30] and further expanded by Aubry et al. [3], functional maps offer a method to represent shape correspondences as linear transformations between spectral embeddings. This is achieved using compact matrices based on eigenfunction basis. Enhancements in accuracy, efficiency, and robustness have been realized in subsequent studies [4, 14, 17, 26]. Moving away from traditional methods dependent on hand-crafted features [3, 42], recent developments have introduced various learning-based functional map frameworks. These utilize shape features learned via pairwise label supervision [21], geometric priors [11,37], or robust mesh features [5,8,19,41]. While traditionally employed for full-shape correspondence, functional maps have also been adapted to handle partial correspondences [2,35], thus aligning more closely with real-world scenarios." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.464, + 0.79, + 0.616 + ], + "angle": 0, + "content": "While functional maps are extensively explored for 3D shape representations like meshes and point clouds, their application to 2D images has been limited due to the ambiguous manifold structure of RGB-value representations [51, 52]. Previous attempts at applying these maps to super-pixel image representations and utilizing their eigenfunctions as a basis [51, 52] typically result in significant information loss. This is often due to the coarse nature of pre-segmentation in images and the resultant inconsistency in super-pixel representation. In our work, we address these challenges by using the entire image as input for a large vision model, ensuring a consistent initial representation and stable global structure during transformations by functional maps." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.638, + 0.331, + 0.654 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.67, + 0.374, + 0.684 + ], + "angle": 0, + "content": "3.1 Preliminaries" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.695, + 0.788, + 0.816 + ], + "angle": 0, + "content": "Functional map Originally introduced in Ovsjanikov et al. [30], the functional map is a method for representing dense correspondences in the function space. This approach is based on the concept of mapping between function spaces defined on manifolds. 
Specifically, given two manifolds \\(\\mathcal{M}\\) and \\(\\mathcal{N}\\), we consider the spaces \\(\\mathcal{F}(\\mathcal{M},\\mathbb{R})\\) and \\(\\mathcal{F}(\\mathcal{N},\\mathbb{R})\\), each comprising all real-valued scalar functions on these manifolds, denoted as \\(\\varphi^{\\mathcal{M}}:\\mathcal{M}\\to \\mathbb{R}\\) and \\(\\varphi^{\\mathcal{N}}:\\mathcal{N}\\to \\mathbb{R}\\), respectively. We can express a bijective mapping \\(T:\\mathcal{M}\\rightarrow \\mathcal{N}\\) as a linear mapping between these function spaces, as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.367, + 0.826, + 0.787, + 0.842 + ], + "angle": 0, + "content": "\\[\nT _ {F}: \\mathcal {F} (\\mathcal {M}, \\mathbb {R}) \\rightarrow \\mathcal {F} (\\mathcal {N}, \\mathbb {R}), \\quad f \\mapsto T _ {F} (f). \\tag {1}\n\\]" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.303, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.241, + 0.151, + 0.756, + 0.334 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.336, + 0.788, + 0.379 + ], + "angle": 0, + "content": "Fig. 2: Eigenfunctions of the image Laplacian. We visualize the eigenfunctions of the graph Laplacian operator corresponding to the first 5 smallest eigenvalues \\(\\lambda_1, \\dots, \\lambda_5\\) (low frequency) as well as \\(\\lambda_{10}, \\lambda_{20}, \\lambda_{50}\\) (high frequency)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.395, + 0.788, + 0.472 + ], + "angle": 0, + "content": "To compute these mappings effectively, we expand the function spaces \\(\\mathcal{F}(\\mathcal{M},\\mathbb{R})\\) and \\(\\mathcal{F}(\\mathcal{N},\\mathbb{R})\\) by introducing sets of basis functions, \\(\\Phi^{\\mathcal{M}} = \\{\\varphi_i^{\\mathcal{M}}\\}\\) and \\(\\Phi^{\\mathcal{N}} = \\{\\varphi_i^{\\mathcal{N}}\\}\\), for \\(\\mathcal{M}\\) and \\(\\mathcal{N}\\), respectively. Thus, any real-valued function \\(f\\in \\mathcal{F}(\\mathcal{M},\\mathbb{R})\\) can be represented as a linear combination of these basis functions: \\(f = \\sum_{i}a_{i}\\varphi_{i}^{\\mathcal{M}}\\). Applying the operator \\(T_{F}\\) to \\(f\\) leads to the equation:" + }, + { + "type": "equation", + "bbox": [ + 0.355, + 0.478, + 0.788, + 0.517 + ], + "angle": 0, + "content": "\\[\nT _ {F} (f) = T _ {F} \\left(\\sum_ {i} a _ {i} \\varphi_ {i} ^ {\\mathcal {M}}\\right) = \\sum_ {i} a _ {i} T _ {F} \\left(\\varphi_ {i} ^ {\\mathcal {M}}\\right). \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.523, + 0.787, + 0.556 + ], + "angle": 0, + "content": "Each transformed function \\( T_{F}(\\varphi_{i}^{\\mathcal{M}}) \\in \\mathcal{F}(\\mathcal{N},\\mathbb{R}) \\) can be further decomposed into a linear combination of \\( \\varphi_j^\\mathcal{N} \\). Hence, we have \\( T_{F}(\\varphi_{i}^{\\mathcal{M}}) = \\sum_{j}c_{ij}\\varphi_{j}^{\\mathcal{N}} \\), leading to:" + }, + { + "type": "equation", + "bbox": [ + 0.354, + 0.564, + 0.788, + 0.593 + ], + "angle": 0, + "content": "\\[\nT _ {F} (f) = \\sum_ {i} a _ {i} \\sum_ {j} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}} = \\sum_ {h} \\sum_ {i} a _ {i} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}}. 
\\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.601, + 0.788, + 0.661 + ], + "angle": 0, + "content": "For simplicity, the function \\( f \\) is represented in a vector form with coefficients \\( \\mathbf{a} = (a_{1}, a_{2}, \\dots)^{t} \\). Consequently, the transformation \\( T_{F} \\) on \\( \\mathbf{a} \\) is given by \\( T_{F}(\\mathbf{a}) = \\mathbf{C}\\mathbf{a} \\), where \\( \\mathbf{C} \\) is a matrix with elements \\( c_{ij} \\), representing the \\( j \\)-th coefficient of \\( T_{F}(\\varphi_{i}^{\\mathcal{M}}) \\) in the basis \\( \\{\\varphi_{j}^{\\mathcal{N}}\\} \\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.662, + 0.788, + 0.722 + ], + "angle": 0, + "content": "To translate the functional map into point-to-point correspondences, we treat each point as a Dirac delta function in the function space. Specifically, this conversion from the functional to the point-wise map is executed via a nearest neighbor search between the rows of \\(\\mathbf{C}\\Phi^{\\mathcal{M}}\\) and \\(\\Phi^{\\mathcal{N}}\\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.734, + 0.788, + 0.809 + ], + "angle": 0, + "content": "Deep partial functional map The functional map framework, while adept at modeling perfect correspondence mappings between complete shapes [30], faces challenges when applied to real-world data that often have missing data and noise. This has led to the development of partial functional maps, as discussed in [2, 35]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.788, + 0.84 + ], + "angle": 0, + "content": "The primary challenge in adapting functional maps to partial shapes is the disruption of core assumptions, such as manifold completeness and bijective" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.192 + ], + "angle": 0, + "content": "mappings. Atta et al. [2] address this challenge by introducing a feature refinement network, denoted as \\( g_{\\mathcal{R}} \\), which enhances the robustness of partial functional maps against shape partiality." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.192, + 0.788, + 0.269 + ], + "angle": 0, + "content": "Consider \\(M\\) and \\(N\\) as discretizations of the partial shapes \\(\\mathcal{M}\\) and \\(\\mathcal{N}\\), respectively. We construct a bipartite graph \\((\\mathcal{V},\\mathcal{E})\\), with edges connecting every point \\(\\mathbf{x} \\in M\\) to every point \\(\\mathbf{y} \\in N\\). The refinement module inputs per-point features \\(F^{M}\\) and \\(F^{N}\\), and updates these features via message passing on the bipartite graph. 
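To make the functional-to-point-wise conversion described above concrete, the following minimal NumPy/SciPy sketch (our illustration, not the authors' released code; array shapes and names are assumptions) recovers per-patch matches from an optimized functional map by a nearest-neighbor search in the spectral embedding:

```python
# Hypothetical sketch: converting a functional map C into point-wise matches.
# Phi_M: (n_M, k) eigenfunctions of image M (one row per patch), Phi_N: (n_N, k),
# C: (k, k) map from M-basis coefficients to N-basis coefficients. A delta at patch x
# of M has coefficients Phi_M[x]; its image under the map has coefficients C @ Phi_M[x].
import numpy as np
from scipy.spatial import cKDTree

def functional_to_pointwise(C, Phi_M, Phi_N):
    mapped = Phi_M @ C.T                    # (n_M, k): source embeddings pushed into N's basis
    _, idx = cKDTree(Phi_N).query(mapped, k=1)
    return idx                              # idx[x] = index of the corresponding patch in N
```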
This process employs an attention mechanism, formulated as" + }, + { + "type": "equation", + "bbox": [ + 0.376, + 0.277, + 0.788, + 0.309 + ], + "angle": 0, + "content": "\\[\nm _ {\\epsilon \\rightarrow i} = \\sum_ {j, (i, j) \\in \\mathcal {E}} \\operatorname {s o f t m a x} _ {j} \\left(q _ {i} ^ {T} k _ {j} / \\sqrt {d}\\right) v _ {j}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.316, + 0.57, + 0.332 + ], + "angle": 0, + "content": "and the final updated value of node \\(i\\) is given by" + }, + { + "type": "equation", + "bbox": [ + 0.344, + 0.342, + 0.786, + 0.358 + ], + "angle": 0, + "content": "\\[\nx _ {0} = x _ {0} + x _ {\\text {p o s}}, \\quad x _ {i + 1} = x _ {i} + \\operatorname {M L P} \\left( \\right.\\left[ \\right. x _ {i} \\left. \\right\\| m _ {\\epsilon \\rightarrow i} \\left. \\right]\\left. \\right), \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.366, + 0.788, + 0.412 + ], + "angle": 0, + "content": "where \\( x_{\\mathrm{pos}} \\) represents the positional embedding, \\( [\\cdot \\| \\cdot ] \\) denotes concatenation, and MLP is a multilayer perceptron with ReLU activations and instance normalization. The refined features on the shape pair are denoted as \\( g_{\\mathcal{R}}(F^M) \\) and \\( g_{\\mathcal{R}}(F^{N}) \\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.412, + 0.787, + 0.472 + ], + "angle": 0, + "content": "To understand this message passing process, consider a region \\(\\Omega\\) exclusive to shape \\(M\\) and absent in shape \\(N\\). Let \\(F_{\\Omega}\\) denote a feature assignment function restricted to \\(\\Omega\\). When projecting these features onto the function basis, the functional map equation becomes:" + }, + { + "type": "equation", + "bbox": [ + 0.414, + 0.48, + 0.787, + 0.498 + ], + "angle": 0, + "content": "\\[\n\\mathbf {C} \\varphi^ {M} F _ {\\Omega} (M) = \\varphi^ {N} F _ {\\Omega} (N). \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.506, + 0.788, + 0.568 + ], + "angle": 0, + "content": "This equation holds true if and only if \\( F_{\\Omega}(\\mathbf{x}) = 0 \\) implies \\( F_{\\Omega}(\\mathbf{y}) = 0 \\) for \\( \\mathbf{x} \\in M, \\mathbf{y} \\in N \\). Hence, effective communication between the regions on \\( M \\) and \\( N \\) is crucial, enabling feature synchronization over overlapping regions while diminishing the influence of features outside these overlaps." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.589, + 0.604, + 0.605 + ], + "angle": 0, + "content": "3.2 Feature Consensus with Functional Maps" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.788, + 0.675 + ], + "angle": 0, + "content": "An overview of our framework is depicted in Fig. 1. Given a pair of images \\(M\\) and \\(N\\), our setup includes two distinct pixel-wise feature extraction networks, yielding two sets of features: \\(E^{M}, E^{N}\\) and \\(F^{M}, F^{N}\\). For instance, \\(E^{M}\\) and \\(E^{N}\\) might be DINOv2 features, while \\(F^{M}\\) and \\(F^{N}\\) could be Stable Diffusion features." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.675, + 0.788, + 0.781 + ], + "angle": 0, + "content": "The primary objective is to derive a functional map \\(\\mathbf{C}\\) between the two function spaces \\(\\mathcal{F}(M,\\mathbb{R})\\) and \\(\\mathcal{F}(N,\\mathbb{R})\\). 
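The bipartite attention update of Eq. (4)-(5) amounts to standard cross-attention followed by a residual MLP. A single-head PyTorch sketch is given below purely for illustration; the actual refinement module \(g_{\mathcal{R}}\) of [2] additionally uses positional embeddings and instance normalization, and the layer names and sizes here are our own assumptions:

```python
# Illustrative single-head version of the bipartite message passing in Eq. (4)-(5).
import torch
import torch.nn as nn

class BipartiteRefine(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x, y):
        # x: (n_M, d) per-point features of one image, y: (n_N, d) features of the other.
        attn = torch.softmax(self.q(x) @ self.k(y).T / x.shape[-1] ** 0.5, dim=-1)
        m = attn @ self.v(y)                             # aggregated messages, Eq. (4)
        return x + self.mlp(torch.cat([x, m], dim=-1))   # residual update, Eq. (5)
```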
The core of our method involves using \\(E^{M}\\) and \\(E^{N}\\) to calculate the Laplacian eigenfunction basis and apply \\(F^{M}\\) and \\(F^{N}\\) for introducing regularizations in optimizing the functional map. In essence, our method optimizes the functional map derived from one set of features to achieve a \"consensus\" with the other set, providing a more comprehensive and robust mapping between the function spaces of the images." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.795, + 0.789, + 0.842 + ], + "angle": 0, + "content": "Image Laplacian from visual features For an image feature of dimensions \\((h, w)\\), where \\(h\\) is the height and \\(w\\) is the width, we view it as a grid graph comprising \\(h \\times w\\) nodes; each node is connected to its four adjacent neighbors. However, a" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.302, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.787, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.192 + ], + "angle": 0, + "content": "graph constructed naively would lack awareness of the image content, and its Laplacian eigenspaces would correspond to the conventional Fourier frequency space." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.193, + 0.787, + 0.238 + ], + "angle": 0, + "content": "Instead, we assign weights to the graph edges based on the first set of image features \\( E^{M} \\) and \\( E^{N} \\). For two adjacent patches \\( \\mathbf{x} \\) and \\( \\mathbf{y} \\) in image \\( M \\) (a similar definition applies for \\( N \\)), the weight of the edge between them is given by:" + }, + { + "type": "equation", + "bbox": [ + 0.397, + 0.247, + 0.788, + 0.284 + ], + "angle": 0, + "content": "\\[\n\\| e _ {\\mathbf {x y}} \\| = \\exp \\left(- \\frac {\\| E _ {\\mathbf {x}} ^ {M} - E _ {\\mathbf {y}} ^ {M} \\|}{\\sigma}\\right), \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.291, + 0.599, + 0.305 + ], + "angle": 0, + "content": "where \\(\\sigma\\) denotes the median of all the feature values." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.306, + 0.787, + 0.382 + ], + "angle": 0, + "content": "Next, we compute the graph Laplacian \\(\\varDelta_M\\) and utilize its eigenfunctions as the basis. In alignment with previous research, we adopt a reduced function space spanned by the first 200 eigenfunctions. To compute the Laplacian eigen decompositions, we employ the LOBPCG algorithm, known for its efficiency. Fig. 2 presents examples of these Laplacian eigenfunctions." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.395, + 0.788, + 0.498 + ], + "angle": 0, + "content": "Feature as function regularizer For the second set of features \\( F^M \\) and \\( F^N \\), we employ them as descriptor functions and impose a constraint on \\( \\mathbf{C} \\) such that \\( \\mathbf{C}F^M \\approx F^N \\). Given the incompleteness of shape correspondences in image pairs, due for example to occlusion within the object and by other objects, we utilize the attention-based feature refinement network \\( g_{\\mathcal{R}} \\) from deep partial functional maps [2]. 
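The graph construction and eigendecomposition just described can be summarized in a short NumPy/SciPy sketch. This is an assumption-laden illustration of Eq. (7) and the truncated eigenbasis, not the authors' exact implementation; in particular, the reading of the median-based \(\sigma\) and the LOBPCG settings are our own choices:

```python
# Hypothetical sketch: image Laplacian from visual features (Eq. (7)) and its eigenbasis.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def laplacian_eigenbasis(feats, k=200, seed=0):
    # feats: (h, w, d) per-patch features from the first network (e.g. DINOv2).
    h, w, d = feats.shape
    n = h * w
    flat = feats.reshape(n, d)
    sigma = np.median(np.abs(flat))                     # assumed reading of "median of all feature values"
    rows, cols, vals = [], [], []
    for dy, dx in [(0, 1), (1, 0)]:                     # 4-connected grid, each edge added once
        ys, xs = np.mgrid[0:h - dy, 0:w - dx]
        a = (ys * w + xs).ravel()
        b = ((ys + dy) * w + (xs + dx)).ravel()
        wgt = np.exp(-np.linalg.norm(flat[a] - flat[b], axis=1) / sigma)   # edge weight, Eq. (7)
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sp.coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # unnormalized graph Laplacian
    X = np.random.default_rng(seed).standard_normal((n, k))
    eigvals, Phi = lobpcg(L, X, largest=False, maxiter=500)   # smallest-k eigenpairs
    return eigvals, Phi                                        # Phi: (n, k) eigenfunction basis
```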
This network refines the features, which are then projected onto the function basis:" + }, + { + "type": "equation", + "bbox": [ + 0.368, + 0.499, + 0.787, + 0.516 + ], + "angle": 0, + "content": "\\[\n\\tilde {F} ^ {M} = \\varphi^ {M} g _ {\\mathcal {R}} \\left(F ^ {M}\\right), \\quad \\tilde {F} ^ {N} = \\varphi^ {N} g _ {\\mathcal {R}} \\left(F ^ {N}\\right). \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.52, + 0.782, + 0.535 + ], + "angle": 0, + "content": "The descriptor-preserving loss applied to these refined features is formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.423, + 0.542, + 0.787, + 0.56 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {f e a t}} = \\left\\| \\mathbf {C} \\tilde {F} ^ {M} - \\tilde {F} ^ {N} \\right\\| _ {2}. \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.568, + 0.787, + 0.613 + ], + "angle": 0, + "content": "To enhance the regularity of the functional map, our optimization objective incorporates two additional regularization terms. First, we integrate a compactness regularization into the functional map matrix:" + }, + { + "type": "equation", + "bbox": [ + 0.413, + 0.622, + 0.787, + 0.647 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {d i a g}} = \\left(\\left| \\lambda_ {i} ^ {M} - \\lambda_ {j} ^ {N} \\right| c _ {i j}\\right) ^ {2}, \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.655, + 0.788, + 0.761 + ], + "angle": 0, + "content": "where \\(\\lambda_{i}^{M}\\) and \\(\\lambda_{j}^{N}\\) represent the \\(i\\)-th and \\(j\\)-th eigenvalues of the graph Laplacian matrices \\(\\Delta_{M}\\) and \\(\\Delta_{N}\\), respectively. For images with similar spectral distributions of eigenvalues, minimizing \\(\\mathcal{L}_{\\mathrm{diag}}\\) encourages a near-diagonal structure in \\(\\mathbf{C}\\). This regularization is based on the principle that eigenvalues' magnitudes are indicative of the frequencies of their corresponding eigenfunctions, and eigenfunctions with similar frequencies are more likely to correspond, as suggested by Huang et al. [14]." + }, + { + "type": "text", + "bbox": [ + 0.239, + 0.762, + 0.714, + 0.777 + ], + "angle": 0, + "content": "Next, we introduce a bijectivity constraint to the functional map:" + }, + { + "type": "equation", + "bbox": [ + 0.433, + 0.784, + 0.787, + 0.801 + ], + "angle": 0, + "content": "\\[\n\\mathbf {C} ^ {M \\rightarrow N} \\cdot \\mathbf {C} ^ {N \\rightarrow M} = \\mathbf {I}. \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.84 + ], + "angle": 0, + "content": "This can be interpreted as a special instance of the cycle-consistency regularization for image collections as in Wang et al. [51] when the number of images is two." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.207 + ], + "angle": 0, + "content": "To implement this constraint, in line with Wang et al. [51], we define two sets of estimizable latent bases: \\(\\mathbf{Z}^M = \\{Z_i^M\\}\\) and \\(\\mathbf{Z}^N = \\{Z_i^N\\}\\), corresponding to the function spaces \\(\\mathcal{F}(M,\\mathbb{R})\\) and \\(\\mathcal{F}(N,\\mathbb{R})\\) of both source and target images. 
The consistency loss is then defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.42, + 0.221, + 0.788, + 0.246 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {c o n s}} = \\left\\| \\mathbf {C Z} ^ {M} - \\mathbf {Z} ^ {N} \\right\\| _ {2}. \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.257, + 0.79, + 0.306 + ], + "angle": 0, + "content": "To prevent degenerate solutions where \\(\\mathbf{Z}^M\\) and \\(\\mathbf{Z}^N\\) could be trivially zero, we introduce an additional constraint requiring both \\(\\mathbf{Z}^M\\) and \\(\\mathbf{Z}^N\\) to satisfy \\(\\mathbf{Z}^t\\mathbf{Z} = \\mathbf{I}\\). Integrating all these components, our final optimization objective is:" + }, + { + "type": "equation", + "bbox": [ + 0.373, + 0.32, + 0.786, + 0.353 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\operatorname {a r g m i n} _ {\\mathbf {C}} \\mathcal {L} _ {\\text {f e a t}} + \\lambda_ {\\text {d i a g}} \\mathcal {L} _ {\\text {d i a g}} + \\lambda_ {\\text {c o n s}} \\mathcal {L} _ {\\text {c o n s}}, \\tag {13} \\\\ s. t. \\quad (\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {Z} ^ {M} = \\mathbf {I}, (\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {Z} ^ {N} = \\mathbf {I}. \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.374, + 0.789, + 0.42 + ], + "angle": 0, + "content": "Optimization We jointly optimize the weights of the image feature refinement network \\( g_{\\mathcal{R}} \\), the functional map \\( \\mathbf{C} \\), and the latent basis \\( \\mathbf{Z}^{M} \\) and \\( \\mathbf{Z}^{N} \\) for the input image pair. The full loss function is formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.33, + 0.435, + 0.786, + 0.501 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathcal {L} = \\mathcal {L} _ {\\mathrm {f e a t}} + \\lambda_ {\\mathrm {d i a g}} \\mathcal {L} _ {\\mathrm {d i a g}} + \\lambda_ {\\mathrm {c o n s}} \\mathcal {L} _ {\\mathrm {c o n s}} \\\\ + \\lambda_ {Z} \\left(\\operatorname {t r} \\left((\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {M}\\right) + \\operatorname {t r} \\left((\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {N}\\right)\\right) \\tag {14} \\\\ + \\lambda_ {\\mathrm {r e g}} \\left(\\left\\| (\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {Z} ^ {M} - \\mathbf {I} \\right\\| _ {2} + \\left\\| (\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {Z} ^ {N} - \\mathbf {I} \\right\\| _ {2}\\right), \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.513, + 0.788, + 0.546 + ], + "angle": 0, + "content": "where \\(\\mathbf{W} = \\mathbf{I} + \\mathbf{C}^t\\mathbf{C}\\). The terms \\(\\operatorname{tr}(\\mathbf{Z}^t\\mathbf{W}\\mathbf{Z})\\) are variations of Eq. (13) with \\(\\mathbf{Z}^M\\) and \\(\\mathbf{Z}^N\\) as the primary variables rather than \\(\\mathbf{C}\\), as discussed in Wang et al. [51]." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.575, + 0.376, + 0.592 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.607, + 0.788, + 0.7 + ], + "angle": 0, + "content": "Dataset We evaluate our method primarily on the TSS dataset [44], comprising 400 image pairs from three subsets: FG3DCAR [20], JODS [38], and PASCAL [12], all of which include dense correspondence annotations. Additionally, we perform evaluations on the SPair-71k dataset [24], which features sparse annotations of keypoint correspondences across 18 categories. 
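For reference, the combined objective described above (Eqs. (9), (10), (12), and (13)) can be written compactly as a single loss function. The sketch below is only an illustration: the loss weights, tensor shapes, and variable names are assumptions for exposition, not the paper's exact hyperparameters.

```python
# Illustrative sketch of the regularized functional map objective (Eqs. (9), (10), (12), (13)).
import torch

def fmap_objective(C, Ft_M, Ft_N, evals_M, evals_N, Z_M, Z_N,
                   lam_diag=1e-2, lam_cons=1.0, lam_reg=1.0):
    # C: (k, k) functional map (rows index N's basis, columns index M's basis)
    # Ft_M, Ft_N: (k, d) refined descriptors projected onto the two eigenbases (Eq. (8))
    # evals_M, evals_N: (k,) Laplacian eigenvalues; Z_M, Z_N: (k, m) latent bases
    l_feat = torch.norm(C @ Ft_M - Ft_N)                                    # Eq. (9)
    freq_gap = (evals_N[:, None] - evals_M[None, :]).abs()                  # |lambda_i^M - lambda_j^N|
    l_diag = ((freq_gap * C) ** 2).sum()                                    # Eq. (10): favor near-diagonal C
    l_cons = torch.norm(C @ Z_M - Z_N)                                      # Eq. (12): bijectivity proxy
    eye = torch.eye(Z_M.shape[1], dtype=C.dtype)
    l_orth = torch.norm(Z_M.T @ Z_M - eye) + torch.norm(Z_N.T @ Z_N - eye)  # orthogonality constraint, Eq. (13)
    return l_feat + lam_diag * l_diag + lam_cons * l_cons + lam_reg * l_orth
```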
For this dataset, we sample 20 pairs from each category for our analysis, following the prior work [55]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.842 + ], + "angle": 0, + "content": "Baselines Our comparison primarily focuses on emergent correspondences from various visual models and feature fusion techniques. We utilize feature extraction networks such as DINOv1 (ViT-S/8), DINOv2 (ViT-S/14 and ViT-B/14), and Stable Diffusion, which are prevalent and extensively researched in a wide range of visual perception tasks. In terms of feature fusion, we benchmark against the feature concatenation approach proposed by Zhang et al. [55], testing different combinations of features. Additionally, we list other methods designed for image correspondence tasks that involve stronger supervision or task-specific designs." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.302, + 0.115, + 0.731, + 0.131 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.201 + ], + "angle": 0, + "content": "Table 1: Results for dense correspondences on TSS [44]. The baselines are classified into three categories based on their training setups: supervised, unsupervised with task-specific designs, and zero-shot methods without task- or dataset-specific designs. * indicates backbones fine-tuned on this dataset." + }, + { + "type": "table", + "bbox": [ + 0.217, + 0.205, + 0.784, + 0.432 + ], + "angle": 0, + "content": "
Setting | Method | FG3DCar | JODS | Pascal | Avg.
Supervised | SCOT [23] | 95.3 | 81.3 | 57.7 | 78.1
 | CATs* [7] | 92.1 | 78.9 | 64.2 | 78.4
 | PWarpC-CATs* [49] | 95.5 | 85.0 | 85.5 | 88.7
Unsupervised task-specific | CNNGeo [33] | 90.1 | 76.4 | 56.3 | 74.4
 | PARN [15] | 89.5 | 75.9 | 71.2 | 78.8
 | GLU-Net [46] | 93.2 | 73.3 | 71.1 | 79.2
 | Semantic-GLU-Net [48] | 95.3 | 82.2 | 78.2 | 85.2
Unsupervised zero-shot | DINOv1-ViT-S/8 [1] | 68.7 | 44.7 | 36.7 | 52.7
 | DINOv2-ViT-B | 81.2 | 68.4 | 51.5 | 69.4
 | Stable Diffusion (SD) | 92.1 | 62.6 | 48.4 | 72.5
 | Concat. DINOv2 + SD [55] | 92.9 | 73.8 | 59.6 | 78.7
 | FMap DINOv2(basis) + DINOv2(loss) | 83.5 | 69.2 | 52.7 | 71.0
 | FMap SD(basis) + SD(loss) | 80.0 | 63.4 | 51.5 | 67.8
 | FMap DINOv2(basis) + SD(loss) (ours) | 84.8 | 70.4 | 53.5 | 72.2
 | FMap DINOv2(loss) + SD(basis) (ours) | 93.1 | 74.0 | 59.9 | 78.9
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.452, + 0.79, + 0.603 + ], + "angle": 0, + "content": "Evaluation metrics For both dense and sparse correspondences, we adopt the Percentage of Correct Keypoints (PCK) metric [53] with a threshold of \\(\\kappa \\cdot \\max(h, w)\\), where \\(\\kappa\\) is a positive integer, and \\((h, w)\\) represents the image dimensions in the TSS dataset or the instance bounding-box dimensions in the SPair-71k dataset. Additionally, for dense correspondences on the TSS dataset, we assess spatial coherence using a smoothness metric [55]. This involves extracting a semantic flow (i.e., a 2D motion vector field from the source to the target image) and computing its first-order difference. In the case of sparse correspondences on the Spair-71k dataset, we further calculate the Mean Squared Error (MSE) on the keypoints to quantify mapping distortions." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.629, + 0.452, + 0.645 + ], + "angle": 0, + "content": "4.1 Dense Correspondence" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.658, + 0.788, + 0.748 + ], + "angle": 0, + "content": "Table 1 presents the results of dense correspondences on the TSS dataset. Following [55], we majorly compare to other zero-shot unsupervised methods, among which we achieve the best performances. Specifically, we outperform Zhang et al. [55] with the same pair of features by utilizing the features in a more structure-aware manner. We also list as references the performances of fully supervised methods and unsupervised methods with task-specific training." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.788, + 0.841 + ], + "angle": 0, + "content": "We also evaluate an ablated version of our framework by computing the basis functions and losses using the same set of features (the third and fourth rows from the last), which give significantly worse results compared to our full model. On the other side, it can still give better results than directly using one feature with nearest neighbor queries (for example, FMap DINOv2(basis) + DINOv2(loss) versus DINOv2-ViT-B/14). This shows that structure-awareness" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "image", + "bbox": [ + 0.248, + 0.148, + 0.75, + 0.379 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.382, + 0.788, + 0.478 + ], + "angle": 0, + "content": "Fig. 3: Dense correspondences on SPair-71k [24] Image Pairs. Each example displays pixel-wise mappings from source to target images in rainbow colors (second column for source coordinates, fourth and fifth columns for computed target coordinates) and color transfers (last two columns). Specifically, we demonstrate the challenging examples including significant viewpoint changes (first and second row), shape variations (first and third row), and occlusions (third row). Our framework achieves more consistent mappings with its global structure-awareness." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.492, + 0.785, + 0.52 + ], + "angle": 0, + "content": "can naturally lead to better correspondences even without introducing any additional information." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.522, + 0.788, + 0.643 + ], + "angle": 0, + "content": "Fig. 
3 shows the qualitative results of dense correspondences computed with the DINOv2-ViT-B/14 and Stable Diffusion networks. We compare side-by-side the feature fusion results using pre-normalized concatenation [55] and our method. In all these examples, our framework provides smoother and more consistent mappings with its global structure-awareness. Specifically, we highlight two challenging examples: the airplanes in the second row with large camera-view changes, and the birds in the third row with large shape variations as well as occlusions. We also visualize the matrices for the linear functional maps in Fig. 6." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.659, + 0.788, + 0.735 + ], + "angle": 0, + "content": "Feature fusion with different networks Tab. 2 presents the accuracy and smoothness of correspondences derived from features of various network backbones. When compared to using individual features or their concatenation [55], our functional-map-based framework demonstrates superior results in both metrics across all tested configurations." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.785, + 0.81 + ], + "angle": 0, + "content": "Feature fusion with different layers Tab. 3 presents the results of fusing features from different layers within the same network. Our experiments involve layers 9 and 11 of DINOv2-ViT-S/14 and DINOv2-ViT-B/14. In all tested setups, our framework demonstrates superior performance compared to baseline methods." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.84 + ], + "angle": 0, + "content": "Additionally, a comparative analysis was performed on the choice of layers for DINOv2-ViT-B/14, specifically by fusing the features of layer 11 with those of" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.302, + 0.115, + 0.731, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.298, + 0.145, + 0.704, + 0.159 + ], + "angle": 0, + "content": "Table 2: Fusing the features from different networks." + }, + { + "type": "table", + "bbox": [ + 0.236, + 0.163, + 0.769, + 0.351 + ], + "angle": 0, + "content": "
Method | PCK0.05↑ | PCK0.1↑ | EPE↓ | Smth.↓
DINOv1-ViT-S/8 | raw | 53.9 | 76.8 | 46.1 | 12.90
DINOv2-ViT-S/14 | raw | 69.6 | 85.0 | 30.8 | 7.98
DINOv2-ViT-B/14 | raw | 69.4 | 87.8 | 30.9 | 10.46
Stable Diffusion (SD) | raw | 72.5 | 83.8 | 37.5 | 6.41
DINOv1-ViT-S/8 | Concat. [55] | 69.9 | 88.1 | 31.0 | 10.33
+ DINOv2-ViT-B/14 | FMap (ours) | 72.2 | 90.3 | 27.7 | 7.95
DINOv2-ViT-S/14 + SD | Concat. [55] | 78.1 | 89.9 | 27.5 | 6.58
 | FMap (ours) | 71.5 | 90.0 | 26.3 | 6.47
DINOv2-ViT-B/14 + SD | Concat. [55] | 78.7 | 90.7 | 26.4 | 6.81
 | FMap (ours) | 78.9 | 91.1 | 26.1 | 5.74
" + }, + { + "type": "table_caption", + "bbox": [ + 0.231, + 0.364, + 0.77, + 0.379 + ], + "angle": 0, + "content": "Table 3: Fusing the features from different layers of the same network." + }, + { + "type": "table", + "bbox": [ + 0.257, + 0.383, + 0.747, + 0.529 + ], + "angle": 0, + "content": "
Backbone | Method | PCK0.05↑ | PCK0.1↑ | EPE↓ | Smth.↓
DINOv2-ViT-S/14 | Layer 9 | 67.2 | 84.8 | 36.5 | 9.64
 | Layer 11 | 70.8 | 88.1 | 31.0 | 9.25
 | Concat. [55] | 70.5 | 88.1 | 31.0 | 9.25
 | FMap (ours) | 70.8 | 89.1 | 29.1 | 6.60
DINOv2-ViT-B/14 | Layer 9 | 57.2 | 85.4 | 34.5 | 10.66
 | Layer 11 | 69.4 | 87.8 | 30.9 | 10.46
 | Concat. [55] | 70.0 | 87.9 | 30.9 | 10.24
 | FMap (ours) | 70.6 | 89.8 | 25.9 | 8.27
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.547, + 0.788, + 0.653 + ], + "angle": 0, + "content": "layers 8, 9, 10, and layer 11 tokens. The results, as depicted in Tab. 4, indicate that our functional map approach consistently surpasses both raw and concatenated features across all layer combinations. We also observed that greater feature distinction occurs when the two layers are more distant from each other. Our framework effectively leverages this distinction, extracting better correspondences by integrating their information. As shown in Tab. 4, optimal performance in EPE is achieved using features from layers 8 and 11." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.677, + 0.377, + 0.691 + ], + "angle": 0, + "content": "4.2 More Results" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.704, + 0.788, + 0.795 + ], + "angle": 0, + "content": "Keypoint correspondence Tab. 5 presents the results for sparse keypoint correspondences on SPair-71k [24]. Compared to feature concatenation [55], our method demonstrates comparable or higher PCK (with different thresholds) and exhibits lower MSE errors. Note that the selected keypoints are extremely sparse on the images, which could potentially introduce sampling biases compared to evaluations of dense correspondences." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.796, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Fig. 4 showcases qualitative keypoint matching results. Our method is compared side-by-side with results obtained using feature concatenation, where our approach consistently demonstrates robustness in these challenging scenarios" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.187 + ], + "angle": 0, + "content": "Table 4: Results on different layer choices for feature fusion. This experiment involves DINOv2-ViT-B/14, wherein its layer 11 features (values) are fused with layers 8, 9, 10, and layer 11 tokens, respectively." + }, + { + "type": "table", + "bbox": [ + 0.237, + 0.19, + 0.763, + 0.378 + ], + "angle": 0, + "content": "
Method | Layer 8 | Layer 9 | Layer 10 | Layer 11 token
 | EPE↓ | Smth.↓ | EPE↓ | Smth.↓ | EPE↓ | Smth.↓ | EPE↓ | Smth.↓
Raw [1] | 59.1 | 16.10 | 56.8 | 16.06 | 56.8 | 15.40 | 53.3 | 13.20
Concat. [55] | 53.5 | 14.80 | 55.4 | 13.90 | 56.7 | 16.70 | 55.3 | 16.10
FMap (ours) | 41.8 | 11.95 | 45.2 | 9.52 | 41.9 | 12.43 | 45.3 | 10.65
Concat.
FMap (ours)
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.214, + 0.379, + 0.785, + 0.41 + ], + "angle": 0, + "content": "(a) Image pairs with similar geometric properties. (a) The baseline method incorrectly maps (a) the right ear of the horse to the left ear, (b) the right ear of the cow to the left ear, and (c) a point corresponding to the front feet of the horse to the hind feet." + }, + { + "type": "image", + "bbox": [ + 0.251, + 0.416, + 0.761, + 0.527 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.214, + 0.528, + 0.785, + 0.56 + ], + "angle": 0, + "content": "(b) Image pairs with significant differences in shapes and viewpoints. The baseline method incorrectly maps (a) all points on the pot to the plant, (b) a point on the child's ear to the woman's cheek, and (c) a point at the seat corner to another chair's armrest." + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.567, + 0.788, + 0.595 + ], + "angle": 0, + "content": "Fig. 4: Sparse keypoint correspondences on SPair-71k [24] image pairs. Correct matches are connected with blue lines and incorrect matches with red lines." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.614, + 0.788, + 0.689 + ], + "angle": 0, + "content": "and effectively captures the geometric properties of the features. Fig. 4a further illustrates the effectiveness of our method in scenarios where the target image contains many similar points, like the legs of a horse. In contrast, the baseline struggles to capture the global structure, often leading to mappings of similar but incorrect points." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Affordance transfer We further showcase an application of our method in transferring tool affordances between images from the RGB-D Part Affordance Dataset [25]. This dataset features different types of affordances annotated on each object, represented as heat maps. Fig. 5 illustrates our results in transferring these affordance heat maps. Such distributional functions across pixels pose a challenge to raw pixel-wise maps due to the potential distortion of their overall structure during interpolation. However, these functions can be naturally modeled with functional maps, as our approach demonstrates." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.302, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.145, + 0.788, + 0.173 + ], + "angle": 0, + "content": "Table 5: Results for sparse keypoint correspondences on SPair-7k [24]. All results in this experiment are with the DINOv2-ViT-B/14 backbone." + }, + { + "type": "table", + "bbox": [ + 0.336, + 0.177, + 0.67, + 0.263 + ], + "angle": 0, + "content": "
Method | PCK@0.1↑ | PCK@0.2↑ | MSE↓
DINOv2 | 52.3 | 68.0 | 105.0
Stable Diffusion | 51.2 | 64.1 | 120.5
Concat. [55] | 57.2 | 72.2 | 97.2
FMap (ours) | 55.3 | 72.6 | 88.0
" + }, + { + "type": "image", + "bbox": [ + 0.244, + 0.273, + 0.357, + 0.428 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.375, + 0.273, + 0.49, + 0.428 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.509, + 0.273, + 0.622, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.642, + 0.273, + 0.756, + 0.428 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.432, + 0.788, + 0.515 + ], + "angle": 0, + "content": "Fig. 5: Transferring tool affordances represented as heat maps. We treat affordance heat maps as functions defined on the source and the target image. By optimizing the functional map between the source and the target, we manage to transfer the function after applying the functional map to it directly following Eq. (1). We employ features from DINOV2-ViT-B/14 and Stable Diffusion to compute the functional maps in this experiment." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.537, + 0.788, + 0.644 + ], + "angle": 0, + "content": "Ablation Studies In addition to the feature ablations shown in Tab. 1 and discussed in Sec. 4.1, we also present an ablation on the regularization terms for the functional map optimization. Tab. 6 shows the results optimized with different regularization losses. The diagonality and consistency regularizations greatly improve the accuracy of the mapping. Fig. 6 visualizes the functional map matrices with and without the regularizations. The near-diagonal mappings are preferred because they match the function basis with similar frequencies." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.672, + 0.364, + 0.687 + ], + "angle": 0, + "content": "5 Discussions" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.703, + 0.788, + 0.794 + ], + "angle": 0, + "content": "As shown in Sec. 4.1, our functional map framework effectively integrates features from different network layers. This integration, particularly from just two distinct layers, outperforms the conventional approach of using same-layer features or naively concatenating different features. This finding opens up promising avenues for enhancing the generalization capabilities of large-scale vision models without additional fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.795, + 0.788, + 0.842 + ], + "angle": 0, + "content": "Moreover, the interpretability of learned features in the functional map framework is crucial, particularly in domains like medical imaging or autonomous systems. Our approach, as shown in Fig. 3, enables impressive image editing" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "image", + "bbox": [ + 0.249, + 0.149, + 0.756, + 0.309 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.311, + 0.788, + 0.354 + ], + "angle": 0, + "content": "Fig. 6: Functional map matrices with and without regularization losses. Enforcing the compactness loss (Eq. (10)) centers the non-zero matrix entries around the diagonals to match the function basis of similar frequencies." + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.36, + 0.79, + 0.389 + ], + "angle": 0, + "content": "Table 6: Ablation on the loss terms. 
All results in the experiment are with DINOv2-ViT-B/14 and Stable Diffusion on the SPair-71k dataset." + }, + { + "type": "table", + "bbox": [ + 0.285, + 0.392, + 0.716, + 0.478 + ], + "angle": 0, + "content": "
Loss | PCK@0.1↑ | PCK@0.2↑ | MSE↓
Lfeat (no regularization) | 44.6 | 65.5 | 95.3
Lfeat + Ldiag | 52.9 | 69.5 | 97.9
Lfeat + Lcons | 52.8 | 69.7 | 100.3
Lfeat + Ldiag + Lcons (full loss) | 55.3 | 72.6 | 88.0
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.492, + 0.789, + 0.525 + ], + "angle": 0, + "content": "outcomes without generative models. This leads to the intriguing possibility of combining our method with generative models to enhance image quality." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.537, + 0.371, + 0.553 + ], + "angle": 0, + "content": "6 Conclusions" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.556, + 0.789, + 0.721 + ], + "angle": 0, + "content": "The emergence of correspondences from large-scale vision models not explicitly trained for this task is noteworthy. While nearest-neighbor analyses provide a direct exploration, they overlook the structure inherent not only in the image contents but also in the model features. Our work leverages this embedded structure via functional maps, aiming to generate point-wise accurate and globally coherent correspondences. Despite its simplicity, it significantly enhances the matching results with zero-shot inference on image pairs without additional supervision or task-specific training. While the core concepts of our approach are rooted in 3D shape correspondence literature from graphics [30], our implementation using deep feature-based functional maps bridges this area with cutting-edge vision research." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.734, + 0.789, + 0.842 + ], + "angle": 0, + "content": "Limitations and future work The structure-awareness of functional maps relies on the manifold assumption of its underlying domain, making our current framework more suitable for object-centric images than complex scenes with diverse compositionalities. Examples of the latter include matching a horse to a herd of horses or matching two indoor scenes. However, this issue might be addressed using additional image segmentation techniques that decompose the image into objects and parts, or by exploring matches between quotient spaces." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.303, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.145, + 0.323, + 0.16 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.174, + 0.785, + 0.201 + ], + "angle": 0, + "content": "1. Amir, S., Gandelsman, Y., Bagon, S., Dekel, T.: Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814 2(3), 4 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.203, + 0.775, + 0.215 + ], + "angle": 0, + "content": "2. Attaiki, S., Pai, G., Ovsjanikov, M.: Dpfm: Deep partial functional maps (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.216, + 0.785, + 0.242 + ], + "angle": 0, + "content": "3. Aubry, M., Schlickewei, U., Cremers, D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In: ICCV Workshops (2011)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.243, + 0.785, + 0.269 + ], + "angle": 0, + "content": "4. Burghard, O., Dieckmann, A., Klein, R.: Embedding shapes with green's functions for global shape matching. Computers & Graphics 68, 1-10 (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.27, + 0.785, + 0.283 + ], + "angle": 0, + "content": "5. Cao, D., Bernard, F.: Unsupervised deep multi-shape matching. 
In: ECCV (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.284, + 0.787, + 0.31 + ], + "angle": 0, + "content": "6. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: ICCV (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.311, + 0.785, + 0.35 + ], + "angle": 0, + "content": "7. Cho, S., Hong, S., Jeon, S., Lee, Y., Sohn, K., Kim, S.: Cats: Cost aggregation transformers for visual correspondence. Advances in Neural Information Processing Systems 34, 9011-9023 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.351, + 0.785, + 0.378 + ], + "angle": 0, + "content": "8. Donati, N., Corman, E., Ovsjanikov, M.: Deep orientation-aware functional maps: Tackling symmetry issues in shape matching. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.379, + 0.785, + 0.418 + ], + "angle": 0, + "content": "9. Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., Sattler, T.: D2-net: A trainable cnn for joint description and detection of local features. In: CVPR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.42, + 0.785, + 0.459 + ], + "angle": 0, + "content": "10. Gupta, K., Jampani, V., Esteves, C., Shrivastava, A., Makadia, A., Snavely, N., Kar, A.: ASIC: Aligning sparse in-the-wild image collections. arXiv preprint arXiv:2303.16201 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.46, + 0.785, + 0.487 + ], + "angle": 0, + "content": "1. Halimi, O., Litany, O., Rodola, E., Bronstein, A.M., Kimmel, R.: Unsupervised learning of dense shape correspondence. In: CVPR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.488, + 0.785, + 0.513 + ], + "angle": 0, + "content": "2. Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., Malik, J.: Semantic contours from inverse detectors. In: ICCV (2011)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.515, + 0.785, + 0.554 + ], + "angle": 0, + "content": "3. Hedlin, E., Sharma, G., Mahajan, S., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: Unsupervised semantic correspondence using stable diffusion. arXiv preprint arXiv:2305.15581 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.556, + 0.785, + 0.582 + ], + "angle": 0, + "content": "4. Huang, Q., Wang, F., Guibas, L.: Functional map networks for analyzing and exploring large shape collections. ACM TOG 33(4), 1-11 (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.583, + 0.785, + 0.609 + ], + "angle": 0, + "content": "5. Jeon, S., Kim, S., Min, D., Sohn, K.: Parn: Pyramidal affine regression networks for dense semantic correspondence. In: ECCV (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.61, + 0.785, + 0.636 + ], + "angle": 0, + "content": "6. Kim, S., Lin, S., Jeon, S.R., Min, D., Sohn, K.: Recurrent transformer networks for semantic correspondence (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.637, + 0.785, + 0.663 + ], + "angle": 0, + "content": "7. Kovnatsky, A., Bronstein, M.M., Bronstein, A.M., Glashoff, K., Kimmel, R.: Coupled quasi-harmonic bases. In: Comput. Graph. Forum (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.664, + 0.785, + 0.69 + ], + "angle": 0, + "content": "8. Learned-Miller, E.G.: Data driven image models through continuous joint alignment IEEE TPAMI 28(2), 236-250 (2005)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.691, + 0.785, + 0.718 + ], + "angle": 0, + "content": "9. 
Li, L., Donati, N., Ovsjanikov, M.: Learning multi-resolution functional maps with spectral attention for robust shape matching (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.719, + 0.785, + 0.744 + ], + "angle": 0, + "content": "20. Lin, Y.L., Morariu, V.I., Hsu, W., Davis, L.S.: Jointly optimizing 3d model fitting and fine-grained classification. In: ECCV (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.745, + 0.785, + 0.772 + ], + "angle": 0, + "content": "21. Litany, O., Remez, T., Rodola, E., Bronstein, A., Bronstein, M.: Deep functional maps: Structured prediction for dense shape correspondence. In: ICCV (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.773, + 0.785, + 0.799 + ], + "angle": 0, + "content": "22. Liu, C., Yuen, J., Torralba, A.: Sift flow: Dense correspondence across scenes and its applications. IEEE TPAMI 33(5), 978-994 (2010)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.8, + 0.785, + 0.84 + ], + "angle": 0, + "content": "23. Liu, Y., Zhu, L., Yamada, M., Yang, Y.: Semantic correspondence as an optimal transport problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4463-4472 (2020)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.174, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.355, + 0.129 + ], + "angle": 0, + "content": "Cheng et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.148, + 0.785, + 0.175 + ], + "angle": 0, + "content": "24. Min, J., Lee, J., Ponce, J., Cho, M.: Spair-71k: A large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.177, + 0.785, + 0.203 + ], + "angle": 0, + "content": "25. Myers, A., Teo, C.L., Fermüller, C., Aloimonos, Y.: Affordance detection of tool parts from geometric features (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.205, + 0.785, + 0.231 + ], + "angle": 0, + "content": "26. Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. In: Comput. Graph. Forum (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.233, + 0.785, + 0.259 + ], + "angle": 0, + "content": "27. Ofri-Amar, D., Geyer, M., Kasten, Y., Dekel, T.: Neural congealing: Aligning images to a joint semantic atlas. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.261, + 0.785, + 0.285 + ], + "angle": 0, + "content": "28. Ono, Y., Trulls, E., Fua, P., Yi, K.M.: Lf-net: Learning local features from images (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.287, + 0.785, + 0.327 + ], + "angle": 0, + "content": "29. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.329, + 0.785, + 0.369 + ], + "angle": 0, + "content": "30. Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM TOG 31(4), 1-11 (2012)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.371, + 0.785, + 0.397 + ], + "angle": 0, + "content": "31. 
Peebles, W., Zhu, J.Y., Zhang, R., Torralba, A., Efros, A.A., Shechtman, E.: Gan-supervised dense visual alignment. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.399, + 0.785, + 0.424 + ], + "angle": 0, + "content": "32. Revaud, J., De Souza, C., Humenberger, M., Weinzaepfel, P.: R2d2: Reliable and repeatable detector and descriptor (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.426, + 0.785, + 0.452 + ], + "angle": 0, + "content": "33. Rocco, I., Arandjelovic, R., Sivic, J.: Convolutional neural network architecture for geometric matching. In: CVPR (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.454, + 0.785, + 0.479 + ], + "angle": 0, + "content": "34. Rocco, I., Arandjelovic, R., Sivic, J.: End-to-end weakly-supervised semantic alignment. In: CVPR (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.481, + 0.785, + 0.508 + ], + "angle": 0, + "content": "35. Rodola, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Comput. Graph. Forum (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.51, + 0.785, + 0.536 + ], + "angle": 0, + "content": "36. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.537, + 0.785, + 0.563 + ], + "angle": 0, + "content": "37. Roufosse, J.M., Sharma, A., Ovsjanikov, M.: Unsupervised deep learning for structured shape matching. In: ICCV (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.565, + 0.785, + 0.591 + ], + "angle": 0, + "content": "38. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: CVPR (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.593, + 0.785, + 0.619 + ], + "angle": 0, + "content": "39. Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: Superglue: Learning feature matching with graph neural networks. In: CVPR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.621, + 0.785, + 0.646 + ], + "angle": 0, + "content": "40. Seo, P.H., Lee, J., Jung, D., Han, B., Cho, M.: Attentive semantic alignment with offset-aware correlation kernels. In: ECCV (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.648, + 0.785, + 0.674 + ], + "angle": 0, + "content": "41. Sharp, N., Attaiki, S., Crane, K., Ovsjanikov, M.: Diffusionnet: Discretization agnostic learning on surfaces. ACM TOG 41(3), 1-16 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.676, + 0.785, + 0.702 + ], + "angle": 0, + "content": "42. Sun, J., Ovsjanikov, M., Guibas, L.: A concise and provably informative multi-scale signature based on heat diffusion. In: Comput. Graph. Forum (2009)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.704, + 0.785, + 0.73 + ], + "angle": 0, + "content": "43. Tang, L., Jia, M., Wang, Q., Phoo, C.P., Hariharan, B.: Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.731, + 0.785, + 0.757 + ], + "angle": 0, + "content": "44. Taniai, T., Sinha, S.N., Sato, Y.: Joint recovery of dense correspondence and cosegmentation in two images. In: CVPR (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.759, + 0.785, + 0.785 + ], + "angle": 0, + "content": "45. 
Truong, P., Danelljan, M., Gool, L.V., Timofte, R.: Gocor: Bringing globally optimized correspondence volumes into your neural network (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.787, + 0.785, + 0.813 + ], + "angle": 0, + "content": "46. Truong, P., Danelljan, M., Timofte, R.: Glu-net: Global-local universal network for dense flow and correspondences. In: CVPR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.814, + 0.785, + 0.84 + ], + "angle": 0, + "content": "47. Truong, P., Danelljan, M., Van Gool, L., Timofte, R.: Learning accurate dense correspondences and when to trust them. In: CVPR (2021)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.148, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.303, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "48. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Warp consistency for unsupervised learning of dense correspondences. In: ICCV (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.175, + 0.785, + 0.203 + ], + "angle": 0, + "content": "49. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Probabilistic warp consistency for weakly-supervised semantic correspondences. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.203, + 0.785, + 0.231 + ], + "angle": 0, + "content": "50. Tyszkiiewicz, M., Fua, P., Trulls, E.: Disk: Learning local features with policy gradient (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.231, + 0.785, + 0.259 + ], + "angle": 0, + "content": "51. Wang, F., Huang, Q., Guibas, L.J.: Image co-segmentation via consistent functional maps. In: ICCV (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.259, + 0.785, + 0.286 + ], + "angle": 0, + "content": "52. Wang, F., Huang, Q., Ovsjanikov, M., Guibas, L.J.: Unsupervised multi-class joint image segmentation. In: CVPR (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.286, + 0.785, + 0.314 + ], + "angle": 0, + "content": "53. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. IEEE TPAMI 35(12), 2878-2890 (2012)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.314, + 0.785, + 0.342 + ], + "angle": 0, + "content": "54. Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: Lift: Learned invariant feature transform In: ECCV (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.342, + 0.785, + 0.384 + ], + "angle": 0, + "content": "55. Zhang, J., Herrmann, C., Hur, J., Cabrera, L.P., Jampani, V., Sun, D., Yang, M.H.: A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. 
arXiv preprint arXiv:2305.15347 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.384 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_origin.pdf b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..79fa228b71d9dde592875020a1df921472110195 --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/44f0e082-68c6-4e0a-9ef3-4d4f7bee11af_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f8decec7a0bb1fbdcd73ed405f6417760ee266dafad71ec82ef5bc522da2969 +size 2281397 diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/full.md b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2ae31b0576b0a2d3abf9a74a521d02cc5d60f676 --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/full.md @@ -0,0 +1,328 @@ +# Zero-Shot Image Feature Consensus with Deep Functional Maps + +Xinle Cheng $^{1}$ , Congyue Deng $^{2}$ , Adam W. Harley $^{2}$ , Yixin Zhu $^{1,3}$ , Leonidas Guibas $^{2}$ + +congyue@stanford.edu, yixin.zhu@pku.edu.cn, guibas@stanford.edu + +$^{1}$ Institute for AI, Peking University, China + +$^{2}$ Department of Computer Science, Stanford University, USA + +$^{3}$ PKU-WUHAN Institute for Artificial Intelligence, China + +![](images/336204f585cdae64e56576c1f87b995ddb44168ce5fb70f9da29caea739d186f.jpg) +Fig. 1: Overview. Left: Given two sets of features, $E^{M}, E^{N}$ , and $F^{M}, F^{N}$ , we compute the Laplacian eigenfunction basis with $E^{M}, E^{N}$ , and apply regularizations to the functional map optimization using $F^{M}, F^{N}$ . This method optimizes a mapping in the spectral domain derived from one feature set to achieve a consensus with the other set. Right: With a better understanding of the global image structure, our method produces smoother and more accurate correspondences in a zero-shot manner. + +Abstract. Correspondences emerge from large-scale vision models trained for generative and discriminative tasks. This has been revealed and benchmarked by computing correspondence maps between pairs of images, using nearest neighbors on the feature grids. Existing work has attempted to improve the quality of these correspondence maps by carefully mixing features from different sources, such as by combining the features of different layers or networks. We point out that a better correspondence strategy is available, which directly imposes structure on the correspondence field: the functional map. Wielding this simple mathematical tool, we lift the correspondence problem from the pixel space to the function space and directly optimize for mappings that are globally coherent. We demonstrate that our technique yields correspondences that are not only smoother but also more accurate, with the possibility of better reflecting the knowledge embedded in the large-scale vision models that we are studying. Our approach sets a new state-of-the-art on various dense correspondence tasks. We also demonstrate our effectiveness in keypoint correspondence and affordance map transfer. 
+ +Keywords: Functional map $\cdot$ Zero shot image matching $\cdot$ Dense correspondence $\cdot$ Emergent feature property + +# 1 Introduction + +Identifying image correspondence is a crucial task in mid-level computer vision. Recent advancements in large-scale vision models, trained for either generative [36] or discriminative [6,29] tasks, possess emerged capabilities for dense correspondences [1,13,43,55]. This learning is primarily facilitated by computing nearest neighbor matches between image patches with their feature similarities. Notably, the correspondences induced by these models can achieve comparable or even better performances compared to the methods explicitly designed for this purpose. However, a notable limitation arises: these models often struggle to retain the global structure of the correspondences. This can be attributed to the distortions and discontinuities in the nearest-neighbor search process. + +While contemporary methods [55] have attempted to mitigate this problem by integrating features from different layers and networks, this approach only indirectly confronts the fundamental issue—the lack of structure in the correspondence maps. Fundamentally, point-wise correspondences are inherently susceptible to noise. Therefore, imposing a global structure on the correspondence maps is crucial for attaining high-quality correspondences without supervision + +In this work, we leverage functional maps [30] to tackle the above challenge. Originating from computer graphics, functional maps present a robust alternative to point-to-point correspondences [4,17,26]. They represent dense correspondences as linear mappings between function spaces, usually defined on 3D shapes. The key aspect of functional maps is their ability to capture deformations that align one manifold with another. Owing to their low-dimensional yet expressive nature, functional maps effectively incorporate global structures into the matching process. This approach provides a compelling solution to the challenges inherent in traditional point-wise correspondence methods. + +Specifically, we improve zero-shot feature-based correspondence methods by transitioning from the pixel space to the function space, thereby enhancing the method's coherence and effectiveness. Traditional functional maps on manifolds rely on two geometric inputs: the Laplacian operator, which is crucial for computing the eigenfunction basis, and a local geometric descriptor, for the application of regularization losses. We adapt these components to the realm of images by employing visual features extracted from two distinct large vision models. Our approach diverges from traditional methods, which typically identify corresponding pixels between images through nearest neighbor search. Instead, we concentrate on optimizing a linear function map established on the eigenfunction basis defined by the first feature map, with the second feature map serving as a geometric regularizer. This process, notably unsupervised, marks a significant difference from conventional methods. Further augmenting our method's robustness, especially against occlusions, is the incorporation of a transformer module for tackling partial shape matching, as detailed in partial functional maps et al. [2]. Such integration of functional map concepts with feature-based methods in image analysis represents a cohesive and logical advancement in tackling the challenges of correspondence tasks. 
+ +We evaluate our framework on dense correspondence across various base networks, demonstrating consistent enhancements in matching accuracy and other functional properties like smoothness compared to the traditional nearest neighbor search. We highlight the qualitative results of our approach on the challenging cases with significant shape variations, viewpoint changes, and occlusions. We further demonstrate our effectiveness on keypoint correspondences and object affordance map transfer, showcasing its versatility in diverse scenarios. + +In summary, our primary contribution is a novel zero-shot framework designed to derive correspondence maps from pre-trained features. Central to our approach is the concept of optimizing a functional map that establishes a relationship between the entire image contents, moving away from the conventional method of direct pixel-to-pixel correspondence searches. Our experimental results, evaluated on various standard datasets, demonstrate that our method produces correspondences that are not only smoother and more accurate but also exhibit greater global coherence compared to previous efforts. We believe that our techniques effectively uncover the underlying correspondence capabilities of the large-scale backbone networks. We hope that our work will serve as an inspiration for future research in general-purpose object correspondence. + +# 2 Related Work + +Emergent correspondence from vision models Deep image networks have demonstrated remarkable robustness to geometric transformations, such as rotation, scaling, and perspective changes, leading to the emergence of dense correspondences [9, 28, 32, 39, 50, 54]. These transformations, predominantly rigid in nature, have been a focal point in previous studies. The research by Amir et al. [1] revealed that features extracted from DINOv1 [6] not only act as effective dense visual descriptors but also naturally induce semantic correspondences without direct supervision. This capability is further amplified in its successor, DINOv2 [29]. Beyond discriminative models, recent explorations have shown that generative models, such as diffusion models, also unveil emergent dense correspondences within their latent features [13, 43, 55]. Intriguingly, Zhang et al. [55] discovered that combining features from DINOv2 [29] with those from Stable Diffusion [36] significantly enhances correspondence quality. + +Our study highlights a crucial gap: existing methods lack structural awareness when computing correspondences by nearest-neighbor queries of per-pixel features. Here, we propose representing the correspondence map within a functional space, offering a novel approach to this challenge. + +Semantic correspondence Semantic correspondence [22] seeks to establish pixelwise matches across objects differing in poses, appearances, deformations, or even categories. Traditional approaches generally involve three stages [49]: feature extraction, cost volume construction, and displacement field [45-48] or parameterized transformation regression [15, 16, 33, 34, 40]. However, their reliance on smooth displacement fields or locally affine transformations hinders their ability to model complex object deformations or shape variations effectively. + +Recent developments, inspired by the classical congealing method [18], focus on aligning multiple objects within the same class using learning techniques like DINOv1 features [10, 27] or GAN-synthesized data [31]. 
Despite their strong assumptions about data rigidity, these studies suggest that leveraging features and information from diverse tasks can enhance the quality of dense image correspondences. In our work, we further demonstrate that a structure-aware fusion of features learned from multiple tasks can significantly improve the quality of correspondence maps. + +Functional maps Initially introduced by Ovsjanikov et al. [30] and further expanded by Aubry et al. [3], functional maps offer a method to represent shape correspondences as linear transformations between spectral embeddings. This is achieved using compact matrices based on eigenfunction basis. Enhancements in accuracy, efficiency, and robustness have been realized in subsequent studies [4, 14, 17, 26]. Moving away from traditional methods dependent on hand-crafted features [3, 42], recent developments have introduced various learning-based functional map frameworks. These utilize shape features learned via pairwise label supervision [21], geometric priors [11,37], or robust mesh features [5,8,19,41]. While traditionally employed for full-shape correspondence, functional maps have also been adapted to handle partial correspondences [2,35], thus aligning more closely with real-world scenarios. + +While functional maps are extensively explored for 3D shape representations like meshes and point clouds, their application to 2D images has been limited due to the ambiguous manifold structure of RGB-value representations [51, 52]. Previous attempts at applying these maps to super-pixel image representations and utilizing their eigenfunctions as a basis [51, 52] typically result in significant information loss. This is often due to the coarse nature of pre-segmentation in images and the resultant inconsistency in super-pixel representation. In our work, we address these challenges by using the entire image as input for a large vision model, ensuring a consistent initial representation and stable global structure during transformations by functional maps. + +# 3 Method + +# 3.1 Preliminaries + +Functional map Originally introduced in Ovsjanikov et al. [30], the functional map is a method for representing dense correspondences in the function space. This approach is based on the concept of mapping between function spaces defined on manifolds. Specifically, given two manifolds $\mathcal{M}$ and $\mathcal{N}$ , we consider the spaces $\mathcal{F}(\mathcal{M},\mathbb{R})$ and $\mathcal{F}(\mathcal{N},\mathbb{R})$ , each comprising all real-valued scalar functions on these manifolds, denoted as $\varphi^{\mathcal{M}}:\mathcal{M}\to \mathbb{R}$ and $\varphi^{\mathcal{N}}:\mathcal{N}\to \mathbb{R}$ , respectively. We can express a bijective mapping $T:\mathcal{M}\rightarrow \mathcal{N}$ as a linear mapping between these function spaces, as follows: + +$$ +T _ {F}: \mathcal {F} (\mathcal {M}, \mathbb {R}) \rightarrow \mathcal {F} (\mathcal {N}, \mathbb {R}), \quad f \mapsto T _ {F} (f). \tag {1} +$$ + +![](images/dadbcc4c64ffed7c21636646208293159305c93b0a59b27453480501dde64093.jpg) +Fig. 2: Eigenfunctions of the image Laplacian. We visualize the eigenfunctions of the graph Laplacian operator corresponding to the first 5 smallest eigenvalues $\lambda_1, \dots, \lambda_5$ (low frequency) as well as $\lambda_{10}, \lambda_{20}, \lambda_{50}$ (high frequency). 
+ +To compute these mappings effectively, we expand the function spaces $\mathcal{F}(\mathcal{M},\mathbb{R})$ and $\mathcal{F}(\mathcal{N},\mathbb{R})$ by introducing sets of basis functions, $\Phi^{\mathcal{M}} = \{\varphi_i^{\mathcal{M}}\}$ and $\Phi^{\mathcal{N}} = \{\varphi_i^{\mathcal{N}}\}$ , for $\mathcal{M}$ and $\mathcal{N}$ , respectively. Thus, any real-valued function $f\in \mathcal{F}(\mathcal{M},\mathbb{R})$ can be represented as a linear combination of these basis functions: $f = \sum_{i}a_{i}\varphi_{i}^{\mathcal{M}}$ . Applying the operator $T_{F}$ to $f$ leads to the equation: + +$$ +T _ {F} (f) = T _ {F} \left(\sum_ {i} a _ {i} \varphi_ {i} ^ {\mathcal {M}}\right) = \sum_ {i} a _ {i} T _ {F} \left(\varphi_ {i} ^ {\mathcal {M}}\right). \tag {2} +$$ + +Each transformed function $T_{F}(\varphi_{i}^{\mathcal{M}}) \in \mathcal{F}(\mathcal{N},\mathbb{R})$ can be further decomposed into a linear combination of $\varphi_j^\mathcal{N}$ . Hence, we have $T_{F}(\varphi_{i}^{\mathcal{M}}) = \sum_{j}c_{ij}\varphi_{j}^{\mathcal{N}}$ , leading to: + +$$ +T _ {F} (f) = \sum_ {i} a _ {i} \sum_ {j} c _ {i j} \varphi_ {j} ^ {\mathcal {N}} = \sum_ {h} \sum_ {i} a _ {i} c _ {i j} \varphi_ {j} ^ {\mathcal {N}}. \tag {3} +$$ + +For simplicity, the function $f$ is represented in a vector form with coefficients $\mathbf{a} = (a_{1}, a_{2}, \dots)^{t}$ . Consequently, the transformation $T_{F}$ on $\mathbf{a}$ is given by $T_{F}(\mathbf{a}) = \mathbf{C}\mathbf{a}$ , where $\mathbf{C}$ is a matrix with elements $c_{ij}$ , representing the $j$ -th coefficient of $T_{F}(\varphi_{i}^{\mathcal{M}})$ in the basis $\{\varphi_{j}^{\mathcal{N}}\}$ . + +To translate the functional map into point-to-point correspondences, we treat each point as a Dirac delta function in the function space. Specifically, this conversion from the functional to the point-wise map is executed via a nearest neighbor search between the rows of $\mathbf{C}\Phi^{\mathcal{M}}$ and $\Phi^{\mathcal{N}}$ . + +Deep partial functional map The functional map framework, while adept at modeling perfect correspondence mappings between complete shapes [30], faces challenges when applied to real-world data that often have missing data and noise. This has led to the development of partial functional maps, as discussed in [2, 35]. + +The primary challenge in adapting functional maps to partial shapes is the disruption of core assumptions, such as manifold completeness and bijective + +mappings. Atta et al. [2] address this challenge by introducing a feature refinement network, denoted as $g_{\mathcal{R}}$ , which enhances the robustness of partial functional maps against shape partiality. + +Consider $M$ and $N$ as discretizations of the partial shapes $\mathcal{M}$ and $\mathcal{N}$ , respectively. We construct a bipartite graph $(\mathcal{V},\mathcal{E})$ , with edges connecting every point $\mathbf{x} \in M$ to every point $\mathbf{y} \in N$ . The refinement module inputs per-point features $F^{M}$ and $F^{N}$ , and updates these features via message passing on the bipartite graph. This process employs an attention mechanism, formulated as + +$$ +m _ {\epsilon \rightarrow i} = \sum_ {j, (i, j) \in \mathcal {E}} \operatorname {s o f t m a x} _ {j} \left(q _ {i} ^ {T} k _ {j} / \sqrt {d}\right) v _ {j}, \tag {4} +$$ + +and the final updated value of node $i$ is given by + +$$ +x _ {0} = x _ {0} + x _ {\text {p o s}}, \quad x _ {i + 1} = x _ {i} + \operatorname {M L P} \left( \right.\left[ \right. x _ {i} \left. 
\right\| m _ {\epsilon \rightarrow i} \left. \right]\left. \right), \tag {5} +$$ + +where $x_{\mathrm{pos}}$ represents the positional embedding, $[\cdot \| \cdot ]$ denotes concatenation, and MLP is a multilayer perceptron with ReLU activations and instance normalization. The refined features on the shape pair are denoted as $g_{\mathcal{R}}(F^M)$ and $g_{\mathcal{R}}(F^{N})$ . + +To understand this message passing process, consider a region $\Omega$ exclusive to shape $M$ and absent in shape $N$ . Let $F_{\Omega}$ denote a feature assignment function restricted to $\Omega$ . When projecting these features onto the function basis, the functional map equation becomes: + +$$ +\mathbf {C} \varphi^ {M} F _ {\Omega} (M) = \varphi^ {N} F _ {\Omega} (N). \tag {6} +$$ + +This equation holds true if and only if $F_{\Omega}(\mathbf{x}) = 0$ implies $F_{\Omega}(\mathbf{y}) = 0$ for $\mathbf{x} \in M, \mathbf{y} \in N$ . Hence, effective communication between the regions on $M$ and $N$ is crucial, enabling feature synchronization over overlapping regions while diminishing the influence of features outside these overlaps. + +# 3.2 Feature Consensus with Functional Maps + +An overview of our framework is depicted in Fig. 1. Given a pair of images $M$ and $N$ , our setup includes two distinct pixel-wise feature extraction networks, yielding two sets of features: $E^{M}, E^{N}$ and $F^{M}, F^{N}$ . For instance, $E^{M}$ and $E^{N}$ might be DINOv2 features, while $F^{M}$ and $F^{N}$ could be Stable Diffusion features. + +The primary objective is to derive a functional map $\mathbf{C}$ between the two function spaces $\mathcal{F}(M,\mathbb{R})$ and $\mathcal{F}(N,\mathbb{R})$ . The core of our method involves using $E^{M}$ and $E^{N}$ to calculate the Laplacian eigenfunction basis and apply $F^{M}$ and $F^{N}$ for introducing regularizations in optimizing the functional map. In essence, our method optimizes the functional map derived from one set of features to achieve a "consensus" with the other set, providing a more comprehensive and robust mapping between the function spaces of the images. + +Image Laplacian from visual features For an image feature of dimensions $(h, w)$ , where $h$ is the height and $w$ is the width, we view it as a grid graph comprising $h \times w$ nodes; each node is connected to its four adjacent neighbors. However, a + +graph constructed naively would lack awareness of the image content, and its Laplacian eigenspaces would correspond to the conventional Fourier frequency space. + +Instead, we assign weights to the graph edges based on the first set of image features $E^{M}$ and $E^{N}$ . For two adjacent patches $\mathbf{x}$ and $\mathbf{y}$ in image $M$ (a similar definition applies for $N$ ), the weight of the edge between them is given by: + +$$ +\| e _ {\mathbf {x y}} \| = \exp \left(- \frac {\| E _ {\mathbf {x}} ^ {M} - E _ {\mathbf {y}} ^ {M} \|}{\sigma}\right), \tag {7} +$$ + +where $\sigma$ denotes the median of all the feature values. + +Next, we compute the graph Laplacian $\varDelta_M$ and utilize its eigenfunctions as the basis. In alignment with previous research, we adopt a reduced function space spanned by the first 200 eigenfunctions. To compute the Laplacian eigen decompositions, we employ the LOBPCG algorithm, known for its efficiency. Fig. 2 presents examples of these Laplacian eigenfunctions. 
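As a concrete illustration of the construction just described, the sketch below builds the feature-weighted grid-graph Laplacian of Eq. (7) and extracts its leading eigenfunctions with LOBPCG. It assumes the first feature set is available as an `(h, w, d)` array; the function name, the unnormalized Laplacian, and the use of the median absolute feature value for $\sigma$ are illustrative choices, not the authors' released implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg


def feature_weighted_laplacian(E, n_basis=200, seed=0):
    """Feature-weighted grid-graph Laplacian (Eq. 7) and its first eigenfunctions.

    E: (h, w, d) patch features from the first network (e.g. DINOv2).
    Returns (eigenvalues, eigenfunctions), eigenfunctions of shape (h*w, n_basis).
    Illustrative sketch; the paper's implementation may differ in details.
    """
    h, w, _ = E.shape
    feats = E.reshape(h * w, -1).astype(np.float64)
    sigma = np.median(np.abs(feats))  # scale; the text uses the median feature value

    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    # 4-connected grid: add right- and down-neighbour edges (undirected).
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        a, b = a.ravel(), b.ravel()
        dist = np.linalg.norm(feats[a] - feats[b], axis=1)
        weight = np.exp(-dist / sigma)                      # Eq. (7)
        rows += [a, b]
        cols += [b, a]
        vals += [weight, weight]

    W = sp.csr_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w),
    )
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W     # unnormalized graph Laplacian

    # Smallest eigenpairs via LOBPCG, starting from a random orthonormal block.
    rng = np.random.default_rng(seed)
    X = np.linalg.qr(rng.standard_normal((h * w, n_basis)))[0]
    eigvals, eigvecs = lobpcg(L, X, largest=False, maxiter=200)
    order = np.argsort(eigvals)
    return eigvals[order], eigvecs[:, order]


if __name__ == "__main__":
    E = np.random.rand(32, 32, 64)           # stand-in for a 32x32 grid of 64-d features
    lam, phi = feature_weighted_laplacian(E, n_basis=16)
    print(lam[:5])                            # low-frequency eigenvalues, cf. Fig. 2
```

Any sparse eigensolver that returns the smallest eigenpairs would serve equally well here; LOBPCG is the choice named in the text for its efficiency.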
+ +Feature as function regularizer For the second set of features $F^M$ and $F^N$ , we employ them as descriptor functions and impose a constraint on $\mathbf{C}$ such that $\mathbf{C}F^M \approx F^N$ . Given the incompleteness of shape correspondences in image pairs, due for example to occlusion within the object and by other objects, we utilize the attention-based feature refinement network $g_{\mathcal{R}}$ from deep partial functional maps [2]. This network refines the features, which are then projected onto the function basis: + +$$ +\tilde {F} ^ {M} = \varphi^ {M} g _ {\mathcal {R}} \left(F ^ {M}\right), \quad \tilde {F} ^ {N} = \varphi^ {N} g _ {\mathcal {R}} \left(F ^ {N}\right). \tag {8} +$$ + +The descriptor-preserving loss applied to these refined features is formulated as: + +$$ +\mathcal {L} _ {\text {f e a t}} = \left\| \mathbf {C} \tilde {F} ^ {M} - \tilde {F} ^ {N} \right\| _ {2}. \tag {9} +$$ + +To enhance the regularity of the functional map, our optimization objective incorporates two additional regularization terms. First, we integrate a compactness regularization into the functional map matrix: + +$$ +\mathcal {L} _ {\mathrm {d i a g}} = \left(\left| \lambda_ {i} ^ {M} - \lambda_ {j} ^ {N} \right| c _ {i j}\right) ^ {2}, \tag {10} +$$ + +where $\lambda_{i}^{M}$ and $\lambda_{j}^{N}$ represent the $i$ -th and $j$ -th eigenvalues of the graph Laplacian matrices $\Delta_{M}$ and $\Delta_{N}$ , respectively. For images with similar spectral distributions of eigenvalues, minimizing $\mathcal{L}_{\mathrm{diag}}$ encourages a near-diagonal structure in $\mathbf{C}$ . This regularization is based on the principle that eigenvalues' magnitudes are indicative of the frequencies of their corresponding eigenfunctions, and eigenfunctions with similar frequencies are more likely to correspond, as suggested by Huang et al. [14]. + +Next, we introduce a bijectivity constraint to the functional map: + +$$ +\mathbf {C} ^ {M \rightarrow N} \cdot \mathbf {C} ^ {N \rightarrow M} = \mathbf {I}. \tag {11} +$$ + +This can be interpreted as a special instance of the cycle-consistency regularization for image collections as in Wang et al. [51] when the number of images is two. + +To implement this constraint, in line with Wang et al. [51], we define two sets of estimizable latent bases: $\mathbf{Z}^M = \{Z_i^M\}$ and $\mathbf{Z}^N = \{Z_i^N\}$ , corresponding to the function spaces $\mathcal{F}(M,\mathbb{R})$ and $\mathcal{F}(N,\mathbb{R})$ of both source and target images. The consistency loss is then defined as: + +$$ +\mathcal {L} _ {\text {c o n s}} = \left\| \mathbf {C Z} ^ {M} - \mathbf {Z} ^ {N} \right\| _ {2}. \tag {12} +$$ + +To prevent degenerate solutions where $\mathbf{Z}^M$ and $\mathbf{Z}^N$ could be trivially zero, we introduce an additional constraint requiring both $\mathbf{Z}^M$ and $\mathbf{Z}^N$ to satisfy $\mathbf{Z}^t\mathbf{Z} = \mathbf{I}$ . Integrating all these components, our final optimization objective is: + +$$ +\begin{array}{l} \operatorname {a r g m i n} _ {\mathbf {C}} \mathcal {L} _ {\text {f e a t}} + \lambda_ {\text {d i a g}} \mathcal {L} _ {\text {d i a g}} + \lambda_ {\text {c o n s}} \mathcal {L} _ {\text {c o n s}}, \tag {13} \\ s. t. \quad (\mathbf {Z} ^ {M}) ^ {t} \mathbf {Z} ^ {M} = \mathbf {I}, (\mathbf {Z} ^ {N}) ^ {t} \mathbf {Z} ^ {N} = \mathbf {I}. 
\\ \end{array} +$$ + +Optimization We jointly optimize the weights of the image feature refinement network $g_{\mathcal{R}}$ , the functional map $\mathbf{C}$ , and the latent basis $\mathbf{Z}^{M}$ and $\mathbf{Z}^{N}$ for the input image pair. The full loss function is formulated as: + +$$ +\begin{array}{l} \mathcal {L} = \mathcal {L} _ {\mathrm {f e a t}} + \lambda_ {\mathrm {d i a g}} \mathcal {L} _ {\mathrm {d i a g}} + \lambda_ {\mathrm {c o n s}} \mathcal {L} _ {\mathrm {c o n s}} \\ + \lambda_ {Z} \left(\operatorname {t r} \left((\mathbf {Z} ^ {M}) ^ {t} \mathbf {W} \mathbf {Z} ^ {M}\right) + \operatorname {t r} \left((\mathbf {Z} ^ {N}) ^ {t} \mathbf {W} \mathbf {Z} ^ {N}\right)\right) \tag {14} \\ + \lambda_ {\mathrm {r e g}} \left(\left\| (\mathbf {Z} ^ {M}) ^ {t} \mathbf {Z} ^ {M} - \mathbf {I} \right\| _ {2} + \left\| (\mathbf {Z} ^ {N}) ^ {t} \mathbf {Z} ^ {N} - \mathbf {I} \right\| _ {2}\right), \\ \end{array} +$$ + +where $\mathbf{W} = \mathbf{I} + \mathbf{C}^t\mathbf{C}$ . The terms $\operatorname{tr}(\mathbf{Z}^t\mathbf{W}\mathbf{Z})$ are variations of Eq. (13) with $\mathbf{Z}^M$ and $\mathbf{Z}^N$ as the primary variables rather than $\mathbf{C}$ , as discussed in Wang et al. [51]. + +# 4 Experiments + +Dataset We evaluate our method primarily on the TSS dataset [44], comprising 400 image pairs from three subsets: FG3DCAR [20], JODS [38], and PASCAL [12], all of which include dense correspondence annotations. Additionally, we perform evaluations on the SPair-71k dataset [24], which features sparse annotations of keypoint correspondences across 18 categories. For this dataset, we sample 20 pairs from each category for our analysis, following the prior work [55]. + +Baselines Our comparison primarily focuses on emergent correspondences from various visual models and feature fusion techniques. We utilize feature extraction networks such as DINOv1 (ViT-S/8), DINOv2 (ViT-S/14 and ViT-B/14), and Stable Diffusion, which are prevalent and extensively researched in a wide range of visual perception tasks. In terms of feature fusion, we benchmark against the feature concatenation approach proposed by Zhang et al. [55], testing different combinations of features. Additionally, we list other methods designed for image correspondence tasks that involve stronger supervision or task-specific designs. + +Table 1: Results for dense correspondences on TSS [44]. The baselines are classified into three categories based on their training setups: supervised, unsupervised with task-specific designs, and zero-shot methods without task- or dataset-specific designs. * indicates backbones fine-tuned on this dataset. + +
| Setting | Method | FG3DCar | JODS | Pascal | Avg. |
| --- | --- | --- | --- | --- | --- |
| Supervised | SCOT [23] | 95.3 | 81.3 | 57.7 | 78.1 |
| Supervised | CATs* [7] | 92.1 | 78.9 | 64.2 | 78.4 |
| Supervised | PWarpC-CATs* [49] | 95.5 | 85.0 | 85.5 | 88.7 |
| Unsupervised task-specific | CNNGeo [33] | 90.1 | 76.4 | 56.3 | 74.4 |
| Unsupervised task-specific | PARN [15] | 89.5 | 75.9 | 71.2 | 78.8 |
| Unsupervised task-specific | GLU-Net [46] | 93.2 | 73.3 | 71.1 | 79.2 |
| Unsupervised task-specific | Semantic-GLU-Net [48] | 95.3 | 82.2 | 78.2 | 85.2 |
| Unsupervised zero-shot | DINOv1-ViT-S/8 [1] | 68.7 | 44.7 | 36.7 | 52.7 |
| Unsupervised zero-shot | DINOv2-ViT-B | 81.2 | 68.4 | 51.5 | 69.4 |
| Unsupervised zero-shot | Stable Diffusion (SD) | 92.1 | 62.6 | 48.4 | 72.5 |
| Unsupervised zero-shot | Concat. DINOv2 + SD [55] | 92.9 | 73.8 | 59.6 | 78.7 |
| Unsupervised zero-shot | FMap DINOv2(basis) + DINOv2(loss) | 83.5 | 69.2 | 52.7 | 71.0 |
| Unsupervised zero-shot | FMap SD(basis) + SD(loss) | 80.0 | 63.4 | 51.5 | 67.8 |
| Unsupervised zero-shot | FMap DINOv2(basis) + SD(loss) (ours) | 84.8 | 70.4 | 53.5 | 72.2 |
| Unsupervised zero-shot | FMap DINOv2(loss) + SD(basis) (ours) | 93.1 | 74.0 | 59.9 | 78.9 |
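To make the "basis" versus "loss" roles compared in the FMap rows above concrete, the following PyTorch-style sketch optimizes a functional map $\mathbf{C}$ with the descriptor term of Eq. (9), the compactness term of Eq. (10), and the latent-basis consistency term of Eq. (12). It is a simplified illustration under stated assumptions: the attention-based refinement network $g_{\mathcal{R}}$ is omitted, the orthogonality constraint is enforced as a soft penalty rather than via the trace form of Eq. (14), and all weights and names are placeholders, not the authors' settings.

```python
import torch


def optimize_functional_map(phi_M, phi_N, F_M, F_N, evals_M, evals_N,
                            k_latent=50, n_iters=500,
                            w_diag=1e-2, w_cons=1e-2, w_orth=1e-2, lr=1e-2):
    """Optimize a functional map C : F(M, R) -> F(N, R) in the reduced eigenbasis.

    phi_M, phi_N : (n_pixels, k) Laplacian eigenfunctions ("basis" features).
    F_M, F_N     : (n_pixels, d) descriptor features ("loss" features).
    evals_M/N    : (k,) Laplacian eigenvalues, used by the compactness term.
    All inputs are torch tensors. Illustrative sketch: the refiner g_R and the
    exact regularizer of Eq. (14) are omitted.
    """
    k = phi_M.shape[1]
    C = torch.zeros(k, k, requires_grad=True)      # maps M-coefficients to N-coefficients
    Z_M = torch.randn(k, k_latent, requires_grad=True)
    Z_N = torch.randn(k, k_latent, requires_grad=True)
    opt = torch.optim.Adam([C, Z_M, Z_N], lr=lr)

    # Project the descriptor features onto the eigenbasis (Eq. 8 without g_R).
    A = phi_M.T @ F_M                               # (k, d) coefficients on M
    B = phi_N.T @ F_N                               # (k, d) coefficients on N
    # |lambda_i^N - lambda_j^M| weights for the near-diagonal prior (Eq. 10).
    gap = (evals_N[:, None] - evals_M[None, :]).abs()
    eye = torch.eye(k_latent)

    for _ in range(n_iters):
        opt.zero_grad()
        loss_feat = (C @ A - B).norm()              # descriptor preservation, Eq. (9)
        loss_diag = ((gap * C) ** 2).sum()          # compactness, Eq. (10)
        loss_cons = (C @ Z_M - Z_N).norm()          # latent-basis consistency, Eq. (12)
        loss_orth = ((Z_M.T @ Z_M - eye).norm() +   # soft version of Z^T Z = I
                     (Z_N.T @ Z_N - eye).norm())
        loss = loss_feat + w_diag * loss_diag + w_cons * loss_cons + w_orth * loss_orth
        loss.backward()
        opt.step()
    return C.detach()


if __name__ == "__main__":
    k, n, d = 64, 1024, 32                          # toy sizes for a quick smoke test
    C = optimize_functional_map(torch.randn(n, k), torch.randn(n, k),
                                torch.randn(n, d), torch.randn(n, d),
                                torch.rand(k), torch.rand(k), n_iters=50)
    print(C.shape)
```

Swapping which network supplies the eigenbasis (`phi`) and which supplies the descriptors (`F`) reproduces the basis/loss configurations ablated in the table.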
+ +Evaluation metrics For both dense and sparse correspondences, we adopt the Percentage of Correct Keypoints (PCK) metric [53] with a threshold of $\kappa \cdot \max(h, w)$ , where $\kappa$ is a positive integer, and $(h, w)$ represents the image dimensions in the TSS dataset or the instance bounding-box dimensions in the SPair-71k dataset. Additionally, for dense correspondences on the TSS dataset, we assess spatial coherence using a smoothness metric [55]. This involves extracting a semantic flow (i.e., a 2D motion vector field from the source to the target image) and computing its first-order difference. In the case of sparse correspondences on the Spair-71k dataset, we further calculate the Mean Squared Error (MSE) on the keypoints to quantify mapping distortions. + +# 4.1 Dense Correspondence + +Table 1 presents the results of dense correspondences on the TSS dataset. Following [55], we majorly compare to other zero-shot unsupervised methods, among which we achieve the best performances. Specifically, we outperform Zhang et al. [55] with the same pair of features by utilizing the features in a more structure-aware manner. We also list as references the performances of fully supervised methods and unsupervised methods with task-specific training. + +We also evaluate an ablated version of our framework by computing the basis functions and losses using the same set of features (the third and fourth rows from the last), which give significantly worse results compared to our full model. On the other side, it can still give better results than directly using one feature with nearest neighbor queries (for example, FMap DINOv2(basis) + DINOv2(loss) versus DINOv2-ViT-B/14). This shows that structure-awareness + +![](images/a119e8696e30fcd8daddfaa65976a80350bc418d0d8c63c586ebc3e00ccb69e1.jpg) +Fig. 3: Dense correspondences on SPair-71k [24] Image Pairs. Each example displays pixel-wise mappings from source to target images in rainbow colors (second column for source coordinates, fourth and fifth columns for computed target coordinates) and color transfers (last two columns). Specifically, we demonstrate the challenging examples including significant viewpoint changes (first and second row), shape variations (first and third row), and occlusions (third row). Our framework achieves more consistent mappings with its global structure-awareness. + +can naturally lead to better correspondences even without introducing any additional information. + +Fig. 3 shows the qualitative results of dense correspondences computed with the DINOv2-ViT-B/14 and Stable Diffusion networks. We compare side-by-side the feature fusion results using pre-normalized concatenation [55] and our method. In all these examples, our framework provides smoother and more consistent mappings with its global structure-awareness. Specifically, we highlight two challenging examples: the airplanes in the second row with large camera-view changes, and the birds in the third row with large shape variations as well as occlusions. We also visualize the matrices for the linear functional maps in Fig. 6. + +Feature fusion with different networks Tab. 2 presents the accuracy and smoothness of correspondences derived from features of various network backbones. When compared to using individual features or their concatenation [55], our functional-map-based framework demonstrates superior results in both metrics across all tested configurations. + +Feature fusion with different layers Tab. 
3 presents the results of fusing features from different layers within the same network. Our experiments involve layers 9 and 11 of DINOv2-ViT-S/14 and DINOv2-ViT-B/14. In all tested setups, our framework demonstrates superior performance compared to baseline methods. + +Additionally, a comparative analysis was performed on the choice of layers for DINOv2-ViT-B/14, specifically by fusing the features of layer 11 with those of + +Table 2: Fusing the features from different networks. + +
| Backbone(s) | Method | PCK@0.05↑ | PCK@0.1↑ | EPE↓ | Smth.↓ |
| --- | --- | --- | --- | --- | --- |
| DINOv1-ViT-S/8 | raw | 53.9 | 76.8 | 46.1 | 12.90 |
| DINOv2-ViT-S/14 | raw | 69.6 | 85.0 | 30.8 | 7.98 |
| DINOv2-ViT-B/14 | raw | 69.4 | 87.8 | 30.9 | 10.46 |
| Stable Diffusion (SD) | raw | 72.5 | 83.8 | 37.5 | 6.41 |
| DINOv1-ViT-S/8 + DINOv2-ViT-B/14 | Concat. [55] | 69.9 | 88.1 | 31.0 | 10.33 |
| DINOv1-ViT-S/8 + DINOv2-ViT-B/14 | FMap (ours) | 72.2 | 90.3 | 27.7 | 7.95 |
| DINOv2-ViT-S/14 + SD | Concat. [55] | 78.1 | 89.9 | 27.5 | 6.58 |
| DINOv2-ViT-S/14 + SD | FMap (ours) | 71.5 | 90.0 | 26.3 | 6.47 |
| DINOv2-ViT-B/14 + SD | Concat. [55] | 78.7 | 90.7 | 26.4 | 6.81 |
| DINOv2-ViT-B/14 + SD | FMap (ours) | 78.9 | 91.1 | 26.1 | 5.74 |
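For reference, the sketch below shows one plausible way to compute the PCK, end-point error (EPE), and smoothness numbers reported in these tables from a dense semantic flow, following the "Evaluation metrics" paragraph above. The exact normalizations used in the paper's evaluation (in particular for the smoothness metric of [55]) may differ; this is only an illustration.

```python
import numpy as np


def pck(pred_pts, gt_pts, h, w, kappa=0.05):
    """Percentage of Correct Keypoints: a match is correct if it lies within
    kappa * max(h, w) of the ground truth (image size on TSS, bounding-box
    size on SPair-71k)."""
    dist = np.linalg.norm(np.asarray(pred_pts) - np.asarray(gt_pts), axis=1)
    return float((dist <= kappa * max(h, w)).mean())


def epe(pred_flow, gt_flow):
    """Average end-point error between two dense flow fields of shape (h, w, 2)."""
    return float(np.linalg.norm(pred_flow - gt_flow, axis=-1).mean())


def smoothness(flow):
    """Mean first-order difference of the semantic flow; lower values indicate
    a spatially smoother correspondence field."""
    dx = np.abs(np.diff(flow, axis=1)).mean()
    dy = np.abs(np.diff(flow, axis=0)).mean()
    return float(dx + dy)
```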
+ +Table 3: Fusing the features from different layers of the same network. + +
| Backbone | Method | PCK@0.05↑ | PCK@0.1↑ | EPE↓ | Smth.↓ |
| --- | --- | --- | --- | --- | --- |
| DINOv2-ViT-S/14 | Layer 9 | 67.2 | 84.8 | 36.5 | 9.64 |
| DINOv2-ViT-S/14 | Layer 11 | 70.8 | 88.1 | 31.0 | 9.25 |
| DINOv2-ViT-S/14 | Concat. [55] | 70.5 | 88.1 | 31.0 | 9.25 |
| DINOv2-ViT-S/14 | FMap (ours) | 70.8 | 89.1 | 29.1 | 6.60 |
| DINOv2-ViT-B/14 | Layer 9 | 57.2 | 85.4 | 34.5 | 10.66 |
| DINOv2-ViT-B/14 | Layer 11 | 69.4 | 87.8 | 30.9 | 10.46 |
| DINOv2-ViT-B/14 | Concat. [55] | 70.0 | 87.9 | 30.9 | 10.24 |
| DINOv2-ViT-B/14 | FMap (ours) | 70.6 | 89.8 | 25.9 | 8.27 |
+ +layers 8, 9, 10, and layer 11 tokens. The results, as depicted in Tab. 4, indicate that our functional map approach consistently surpasses both raw and concatenated features across all layer combinations. We also observed that greater feature distinction occurs when the two layers are more distant from each other. Our framework effectively leverages this distinction, extracting better correspondences by integrating their information. As shown in Tab. 4, optimal performance in EPE is achieved using features from layers 8 and 11. + +# 4.2 More Results + +Keypoint correspondence Tab. 5 presents the results for sparse keypoint correspondences on SPair-71k [24]. Compared to feature concatenation [55], our method demonstrates comparable or higher PCK (with different thresholds) and exhibits lower MSE errors. Note that the selected keypoints are extremely sparse on the images, which could potentially introduce sampling biases compared to evaluations of dense correspondences. + +Fig. 4 showcases qualitative keypoint matching results. Our method is compared side-by-side with results obtained using feature concatenation, where our approach consistently demonstrates robustness in these challenging scenarios + +Table 4: Results on different layer choices for feature fusion. This experiment involves DINOv2-ViT-B/14, wherein its layer 11 features (values) are fused with layers 8, 9, 10, and layer 11 tokens, respectively. + +
| Method | Layer 8 EPE↓ | Layer 8 Smth.↓ | Layer 9 EPE↓ | Layer 9 Smth.↓ | Layer 10 EPE↓ | Layer 10 Smth.↓ | Layer 11 token EPE↓ | Layer 11 token Smth.↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Raw [1] | 59.1 | 16.10 | 56.8 | 16.06 | 56.8 | 15.40 | 53.3 | 13.20 |
| Concat. [55] | 53.5 | 14.80 | 55.4 | 13.90 | 56.7 | 16.70 | 55.3 | 16.10 |
| FMap (ours) | 41.8 | 11.95 | 45.2 | 9.52 | 41.9 | 12.43 | 45.3 | 10.65 |
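For the keypoint-transfer comparisons shown in Fig. 4 below, the optimized functional map must first be converted back to point-to-point correspondences using the nearest-neighbour rule described in Sec. 3.1. The sketch below illustrates one straightforward way to do this; the k-d tree search, the patch-centre keypoint lookup, and the assumption that source and target share the same image and grid size are simplifications, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree


def fmap_to_pointmap(C, phi_M, phi_N):
    """Convert a functional map C into per-patch correspondences M -> N.

    phi_M: (n_M, k) source eigenfunctions, phi_N: (n_N, k) target eigenfunctions.
    Each source patch's delta function is expressed in the source basis, pushed
    through C, and matched to the nearest row of the target basis (the
    nearest-neighbour rule of Sec. 3.1). Illustrative sketch only.
    """
    mapped = phi_M @ C.T                  # coefficients of the mapped delta functions
    _, nn = cKDTree(phi_N).query(mapped, k=1)
    return nn                             # (n_M,) indices of matched target patches


def transfer_keypoints(keypoints_xy, nn, grid_hw, image_hw):
    """Snap source keypoints to patches, look up their matches, and return the
    matched patch centres in pixel coordinates (no sub-patch refinement)."""
    gh, gw = grid_hw
    ih, iw = image_hw
    out = []
    for x, y in keypoints_xy:
        px = min(int(x * gw / iw), gw - 1)          # source patch column
        py = min(int(y * gh / ih), gh - 1)          # source patch row
        ty, tx = divmod(int(nn[py * gw + px]), gw)  # matched target patch
        out.append(((tx + 0.5) * iw / gw, (ty + 0.5) * ih / gh))
    return np.array(out)
```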
+ +(a) Image pairs with similar geometric properties. (a) The baseline method incorrectly maps (a) the right ear of the horse to the left ear, (b) the right ear of the cow to the left ear, and (c) a point corresponding to the front feet of the horse to the hind feet. + +Fig. 4: Sparse keypoint correspondences on SPair-71k [24] image pairs. Correct matches are connected with blue lines and incorrect matches with red lines. +![](images/9ce91c64156e4470f510c87c8e5f57e33a827907c0a7d23026015ea296047627.jpg) +(b) Image pairs with significant differences in shapes and viewpoints. The baseline method incorrectly maps (a) all points on the pot to the plant, (b) a point on the child's ear to the woman's cheek, and (c) a point at the seat corner to another chair's armrest. + +and effectively captures the geometric properties of the features. Fig. 4a further illustrates the effectiveness of our method in scenarios where the target image contains many similar points, like the legs of a horse. In contrast, the baseline struggles to capture the global structure, often leading to mappings of similar but incorrect points. + +Affordance transfer We further showcase an application of our method in transferring tool affordances between images from the RGB-D Part Affordance Dataset [25]. This dataset features different types of affordances annotated on each object, represented as heat maps. Fig. 5 illustrates our results in transferring these affordance heat maps. Such distributional functions across pixels pose a challenge to raw pixel-wise maps due to the potential distortion of their overall structure during interpolation. However, these functions can be naturally modeled with functional maps, as our approach demonstrates. + +Table 5: Results for sparse keypoint correspondences on SPair-7k [24]. All results in this experiment are with the DINOv2-ViT-B/14 backbone. + +
| Method | PCK@0.1↑ | PCK@0.2↑ | MSE↓ |
| --- | --- | --- | --- |
| DINOv2 | 52.3 | 68.0 | 105.0 |
| Stable Diffusion | 51.2 | 64.1 | 120.5 |
| Concat. [55] | 57.2 | 72.2 | 97.2 |
| FMap (ours) | 55.3 | 72.6 | 88.0 |
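The affordance-transfer application described above (and illustrated in Fig. 5) uses the functional map directly, as in Eq. (1): the heat map is treated as a function on the source image, projected onto the source eigenbasis, mapped with $\mathbf{C}$, and reconstructed in the target basis. A minimal sketch, assuming orthonormal eigenfunctions and reusing the quantities computed in the earlier steps:

```python
import numpy as np


def transfer_function(f_src, C, phi_M, phi_N, grid_hw_tgt):
    """Transfer a scalar function (e.g. an affordance heat map) from image M to N.

    f_src: (h_M, w_M) heat map on the source patch grid.
    phi_M: (h_M*w_M, k) and phi_N: (h_N*w_N, k) Laplacian eigenfunctions.
    C: (k, k) functional map from M's function space to N's.
    Sketch of Eq. (1) in the truncated basis, assuming orthonormal eigenfunctions
    so that projection is a transpose multiply.
    """
    a = phi_M.T @ f_src.ravel()    # spectral coefficients of the heat map on M
    b = C @ a                      # apply the functional map
    f_tgt = phi_N @ b              # reconstruct the transferred map on N
    return f_tgt.reshape(grid_hw_tgt)
```

Because the heat map is transported as a single function rather than pixel by pixel, its overall structure is preserved, which is the property the text highlights over raw pixel-wise maps.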
+ +![](images/f7405dbb95a23c27cd1304dc54112e4f0bb28731c4800b11ebf3c6283602f64f.jpg) +Fig. 5: Transferring tool affordances represented as heat maps. We treat affordance heat maps as functions defined on the source and the target image. By optimizing the functional map between the source and the target, we manage to transfer the function after applying the functional map to it directly following Eq. (1). We employ features from DINOV2-ViT-B/14 and Stable Diffusion to compute the functional maps in this experiment. + +![](images/4c7dc59a23d5c3244e219f611a09a5fcc497b29752c3fd2440d5803b07210426.jpg) + +![](images/e96b43e66ab5f38b47ef38162204ed0b775c87c9a44e4f8c07f8e3d5a4a775ef.jpg) + +![](images/1527ee56ddff30182621917b926cd5b735e57f5d3c6f215147d24eb439b75f97.jpg) + +Ablation Studies In addition to the feature ablations shown in Tab. 1 and discussed in Sec. 4.1, we also present an ablation on the regularization terms for the functional map optimization. Tab. 6 shows the results optimized with different regularization losses. The diagonality and consistency regularizations greatly improve the accuracy of the mapping. Fig. 6 visualizes the functional map matrices with and without the regularizations. The near-diagonal mappings are preferred because they match the function basis with similar frequencies. + +# 5 Discussions + +As shown in Sec. 4.1, our functional map framework effectively integrates features from different network layers. This integration, particularly from just two distinct layers, outperforms the conventional approach of using same-layer features or naively concatenating different features. This finding opens up promising avenues for enhancing the generalization capabilities of large-scale vision models without additional fine-tuning. + +Moreover, the interpretability of learned features in the functional map framework is crucial, particularly in domains like medical imaging or autonomous systems. Our approach, as shown in Fig. 3, enables impressive image editing + +![](images/a99b7b97f8593675eee2b24c2d04422142738a02bb82bf246f76123d0cc59b55.jpg) +Fig. 6: Functional map matrices with and without regularization losses. Enforcing the compactness loss (Eq. (10)) centers the non-zero matrix entries around the diagonals to match the function basis of similar frequencies. + +Table 6: Ablation on the loss terms. All results in the experiment are with DINOv2-ViT-B/14 and Stable Diffusion on the SPair-71k dataset. + +
| Loss | PCK@0.1↑ | PCK@0.2↑ | MSE↓ |
| --- | --- | --- | --- |
| $\mathcal{L}_{\text{feat}}$ (no regularization) | 44.6 | 65.5 | 95.3 |
| $\mathcal{L}_{\text{feat}} + \mathcal{L}_{\text{diag}}$ | 52.9 | 69.5 | 97.9 |
| $\mathcal{L}_{\text{feat}} + \mathcal{L}_{\text{cons}}$ | 52.8 | 69.7 | 100.3 |
| $\mathcal{L}_{\text{feat}} + \mathcal{L}_{\text{diag}} + \mathcal{L}_{\text{cons}}$ (full loss) | 55.3 | 72.6 | 88.0 |
+ +outcomes without generative models. This leads to the intriguing possibility of combining our method with generative models to enhance image quality. + +# 6 Conclusions + +The emergence of correspondences from large-scale vision models not explicitly trained for this task is noteworthy. While nearest-neighbor analyses provide a direct exploration, they overlook the structure inherent not only in the image contents but also in the model features. Our work leverages this embedded structure via functional maps, aiming to generate point-wise accurate and globally coherent correspondences. Despite its simplicity, it significantly enhances the matching results with zero-shot inference on image pairs without additional supervision or task-specific training. While the core concepts of our approach are rooted in 3D shape correspondence literature from graphics [30], our implementation using deep feature-based functional maps bridges this area with cutting-edge vision research. + +Limitations and future work The structure-awareness of functional maps relies on the manifold assumption of its underlying domain, making our current framework more suitable for object-centric images than complex scenes with diverse compositionalities. Examples of the latter include matching a horse to a herd of horses or matching two indoor scenes. However, this issue might be addressed using additional image segmentation techniques that decompose the image into objects and parts, or by exploring matches between quotient spaces. + +# References + +1. Amir, S., Gandelsman, Y., Bagon, S., Dekel, T.: Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814 2(3), 4 (2021) +2. Attaiki, S., Pai, G., Ovsjanikov, M.: Dpfm: Deep partial functional maps (2021) +3. Aubry, M., Schlickewei, U., Cremers, D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In: ICCV Workshops (2011) +4. Burghard, O., Dieckmann, A., Klein, R.: Embedding shapes with green's functions for global shape matching. Computers & Graphics 68, 1-10 (2017) +5. Cao, D., Bernard, F.: Unsupervised deep multi-shape matching. In: ECCV (2022) +6. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: ICCV (2021) +7. Cho, S., Hong, S., Jeon, S., Lee, Y., Sohn, K., Kim, S.: Cats: Cost aggregation transformers for visual correspondence. Advances in Neural Information Processing Systems 34, 9011-9023 (2021) +8. Donati, N., Corman, E., Ovsjanikov, M.: Deep orientation-aware functional maps: Tackling symmetry issues in shape matching. In: CVPR (2022) +9. Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., Sattler, T.: D2-net: A trainable cnn for joint description and detection of local features. In: CVPR (2019) +10. Gupta, K., Jampani, V., Esteves, C., Shrivastava, A., Makadia, A., Snavely, N., Kar, A.: ASIC: Aligning sparse in-the-wild image collections. arXiv preprint arXiv:2303.16201 (2023) +1. Halimi, O., Litany, O., Rodola, E., Bronstein, A.M., Kimmel, R.: Unsupervised learning of dense shape correspondence. In: CVPR (2019) +2. Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., Malik, J.: Semantic contours from inverse detectors. In: ICCV (2011) +3. Hedlin, E., Sharma, G., Mahajan, S., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: Unsupervised semantic correspondence using stable diffusion. arXiv preprint arXiv:2305.15581 (2023) +4. 
Huang, Q., Wang, F., Guibas, L.: Functional map networks for analyzing and exploring large shape collections. ACM TOG 33(4), 1-11 (2014) +5. Jeon, S., Kim, S., Min, D., Sohn, K.: Parn: Pyramidal affine regression networks for dense semantic correspondence. In: ECCV (2018) +6. Kim, S., Lin, S., Jeon, S.R., Min, D., Sohn, K.: Recurrent transformer networks for semantic correspondence (2018) +7. Kovnatsky, A., Bronstein, M.M., Bronstein, A.M., Glashoff, K., Kimmel, R.: Coupled quasi-harmonic bases. In: Comput. Graph. Forum (2013) +8. Learned-Miller, E.G.: Data driven image models through continuous joint alignment IEEE TPAMI 28(2), 236-250 (2005) +9. Li, L., Donati, N., Ovsjanikov, M.: Learning multi-resolution functional maps with spectral attention for robust shape matching (2022) +20. Lin, Y.L., Morariu, V.I., Hsu, W., Davis, L.S.: Jointly optimizing 3d model fitting and fine-grained classification. In: ECCV (2014) +21. Litany, O., Remez, T., Rodola, E., Bronstein, A., Bronstein, M.: Deep functional maps: Structured prediction for dense shape correspondence. In: ICCV (2017) +22. Liu, C., Yuen, J., Torralba, A.: Sift flow: Dense correspondence across scenes and its applications. IEEE TPAMI 33(5), 978-994 (2010) +23. Liu, Y., Zhu, L., Yamada, M., Yang, Y.: Semantic correspondence as an optimal transport problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4463-4472 (2020) + +24. Min, J., Lee, J., Ponce, J., Cho, M.: Spair-71k: A large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543 (2019) +25. Myers, A., Teo, C.L., Fermüller, C., Aloimonos, Y.: Affordance detection of tool parts from geometric features (2015) +26. Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. In: Comput. Graph. Forum (2017) +27. Ofri-Amar, D., Geyer, M., Kasten, Y., Dekel, T.: Neural congealing: Aligning images to a joint semantic atlas. In: CVPR (2023) +28. Ono, Y., Trulls, E., Fua, P., Yi, K.M.: Lf-net: Learning local features from images (2018) +29. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023) +30. Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM TOG 31(4), 1-11 (2012) +31. Peebles, W., Zhu, J.Y., Zhang, R., Torralba, A., Efros, A.A., Shechtman, E.: Gan-supervised dense visual alignment. In: CVPR (2022) +32. Revaud, J., De Souza, C., Humenberger, M., Weinzaepfel, P.: R2d2: Reliable and repeatable detector and descriptor (2019) +33. Rocco, I., Arandjelovic, R., Sivic, J.: Convolutional neural network architecture for geometric matching. In: CVPR (2017) +34. Rocco, I., Arandjelovic, R., Sivic, J.: End-to-end weakly-supervised semantic alignment. In: CVPR (2018) +35. Rodola, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Comput. Graph. Forum (2017) +36. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) +37. Roufosse, J.M., Sharma, A., Ovsjanikov, M.: Unsupervised deep learning for structured shape matching. In: ICCV (2019) +38. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: CVPR (2013) +39. 
Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: Superglue: Learning feature matching with graph neural networks. In: CVPR (2020) +40. Seo, P.H., Lee, J., Jung, D., Han, B., Cho, M.: Attentive semantic alignment with offset-aware correlation kernels. In: ECCV (2018) +41. Sharp, N., Attaiki, S., Crane, K., Ovsjanikov, M.: Diffusionnet: Discretization agnostic learning on surfaces. ACM TOG 41(3), 1-16 (2022) +42. Sun, J., Ovsjanikov, M., Guibas, L.: A concise and provably informative multi-scale signature based on heat diffusion. In: Comput. Graph. Forum (2009) +43. Tang, L., Jia, M., Wang, Q., Phoo, C.P., Hariharan, B.: Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881 (2023) +44. Taniai, T., Sinha, S.N., Sato, Y.: Joint recovery of dense correspondence and cosegmentation in two images. In: CVPR (2016) +45. Truong, P., Danelljan, M., Gool, L.V., Timofte, R.: Gocor: Bringing globally optimized correspondence volumes into your neural network (2020) +46. Truong, P., Danelljan, M., Timofte, R.: Glu-net: Global-local universal network for dense flow and correspondences. In: CVPR (2020) +47. Truong, P., Danelljan, M., Van Gool, L., Timofte, R.: Learning accurate dense correspondences and when to trust them. In: CVPR (2021) + +48. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Warp consistency for unsupervised learning of dense correspondences. In: ICCV (2021) +49. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Probabilistic warp consistency for weakly-supervised semantic correspondences. In: CVPR (2022) +50. Tyszkiiewicz, M., Fua, P., Trulls, E.: Disk: Learning local features with policy gradient (2020) +51. Wang, F., Huang, Q., Guibas, L.J.: Image co-segmentation via consistent functional maps. In: ICCV (2013) +52. Wang, F., Huang, Q., Ovsjanikov, M., Guibas, L.J.: Unsupervised multi-class joint image segmentation. In: CVPR (2014) +53. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. IEEE TPAMI 35(12), 2878-2890 (2012) +54. Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: Lift: Learned invariant feature transform In: ECCV (2016) +55. Zhang, J., Herrmann, C., Hur, J., Cabrera, L.P., Jampani, V., Sun, D., Yang, M.H.: A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. 
arXiv preprint arXiv:2305.15347 (2023) \ No newline at end of file diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/images.zip b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ae20eab1ac0ce96cda7c54f6ecf93324f67ad859 --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ebee171fd671cf6cc92f78cb0dfe208d9d62dfe1d3a930cca394f639db3df42 +size 665355 diff --git a/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/layout.json b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bb66e79aca32414d47ec3ea3c45ea0a13360d71a --- /dev/null +++ b/2024/Zero-Shot Image Feature Consensus with Deep Functional Maps/layout.json @@ -0,0 +1,10090 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 181, + 112, + 433, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 112, + 433, + 148 + ], + "spans": [ + { + "bbox": [ + 181, + 112, + 433, + 148 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "spans": [ + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "content": "Xinle Cheng" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "content": ", Congyue Deng" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "content": ", Adam W. 
Harley" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "content": ", Yixin Zhu" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "inline_equation", + "content": "^{1,3}" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "text", + "content": ", Leonidas Guibas" + }, + { + "bbox": [ + 193, + 169, + 420, + 193 + ], + "type": "inline_equation", + "content": "^{2}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 169, + 198, + 444, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 169, + 198, + 444, + 209 + ], + "spans": [ + { + "bbox": [ + 169, + 198, + 444, + 209 + ], + "type": "text", + "content": "congyue@stanford.edu, yixin.zhu@pku.edu.cn, guibas@stanford.edu" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 217, + 216, + 395, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 216, + 395, + 228 + ], + "spans": [ + { + "bbox": [ + 217, + 216, + 395, + 228 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 217, + 216, + 395, + 228 + ], + "type": "text", + "content": " Institute for AI, Peking University, China" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 181, + 228, + 432, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 228, + 432, + 239 + ], + "spans": [ + { + "bbox": [ + 181, + 228, + 432, + 239 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 181, + 228, + 432, + 239 + ], + "type": "text", + "content": " Department of Computer Science, Stanford University, USA" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 187, + 239, + 425, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 239, + 425, + 250 + ], + "spans": [ + { + "bbox": [ + 187, + 239, + 425, + 250 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 187, + 239, + 425, + 250 + ], + "type": "text", + "content": " PKU-WUHAN Institute for Artificial Intelligence, China" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 137, + 266, + 480, + 347 + ], + "blocks": [ + { + "bbox": [ + 137, + 266, + 480, + 347 + ], + "lines": [ + { + "bbox": [ + 137, + 266, + 480, + 347 + ], + "spans": [ + { + "bbox": [ + 137, + 266, + 480, + 347 + ], + "type": "image", + "image_path": "336204f585cdae64e56576c1f87b995ddb44168ce5fb70f9da29caea739d186f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "lines": [ + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "spans": [ + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "text", + "content": "Fig. 1: Overview. 
Left: Given two sets of features, " + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "inline_equation", + "content": "E^{M}, E^{N}" + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "inline_equation", + "content": "F^{M}, F^{N}" + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "text", + "content": ", we compute the Laplacian eigenfunction basis with " + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "inline_equation", + "content": "E^{M}, E^{N}" + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "text", + "content": ", and apply regularizations to the functional map optimization using " + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "inline_equation", + "content": "F^{M}, F^{N}" + }, + { + "bbox": [ + 130, + 350, + 482, + 417 + ], + "type": "text", + "content": ". This method optimizes a mapping in the spectral domain derived from one feature set to achieve a consensus with the other set. Right: With a better understanding of the global image structure, our method produces smoother and more accurate correspondences in a zero-shot manner." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 159, + 439, + 455, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 439, + 455, + 626 + ], + "spans": [ + { + "bbox": [ + 159, + 439, + 455, + 626 + ], + "type": "text", + "content": "Abstract. Correspondences emerge from large-scale vision models trained for generative and discriminative tasks. This has been revealed and benchmarked by computing correspondence maps between pairs of images, using nearest neighbors on the feature grids. Existing work has attempted to improve the quality of these correspondence maps by carefully mixing features from different sources, such as by combining the features of different layers or networks. We point out that a better correspondence strategy is available, which directly imposes structure on the correspondence field: the functional map. Wielding this simple mathematical tool, we lift the correspondence problem from the pixel space to the function space and directly optimize for mappings that are globally coherent. We demonstrate that our technique yields correspondences that are not only smoother but also more accurate, with the possibility of better reflecting the knowledge embedded in the large-scale vision models that we are studying. Our approach sets a new state-of-the-art on various dense correspondence tasks. We also demonstrate our effectiveness in keypoint correspondence and affordance map transfer." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "spans": [ + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "text", + "content": "Keywords: Functional map " + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "text", + "content": " Zero shot image matching " + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "text", + "content": " Dense correspondence " + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 159, + 638, + 455, + 660 + ], + "type": "text", + "content": " Emergent feature property" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 229, + 127 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 144, + 482, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 144, + 482, + 263 + ], + "spans": [ + { + "bbox": [ + 130, + 144, + 482, + 263 + ], + "type": "text", + "content": "Identifying image correspondence is a crucial task in mid-level computer vision. Recent advancements in large-scale vision models, trained for either generative [36] or discriminative [6,29] tasks, possess emerged capabilities for dense correspondences [1,13,43,55]. This learning is primarily facilitated by computing nearest neighbor matches between image patches with their feature similarities. Notably, the correspondences induced by these models can achieve comparable or even better performances compared to the methods explicitly designed for this purpose. However, a notable limitation arises: these models often struggle to retain the global structure of the correspondences. This can be attributed to the distortions and discontinuities in the nearest-neighbor search process." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 266, + 482, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 266, + 482, + 339 + ], + "spans": [ + { + "bbox": [ + 130, + 266, + 482, + 339 + ], + "type": "text", + "content": "While contemporary methods [55] have attempted to mitigate this problem by integrating features from different layers and networks, this approach only indirectly confronts the fundamental issue—the lack of structure in the correspondence maps. Fundamentally, point-wise correspondences are inherently susceptible to noise. Therefore, imposing a global structure on the correspondence maps is crucial for attaining high-quality correspondences without supervision" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 340, + 482, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 340, + 482, + 449 + ], + "spans": [ + { + "bbox": [ + 130, + 340, + 482, + 449 + ], + "type": "text", + "content": "In this work, we leverage functional maps [30] to tackle the above challenge. 
Originating from computer graphics, functional maps present a robust alternative to point-to-point correspondences [4,17,26]. They represent dense correspondences as linear mappings between function spaces, usually defined on 3D shapes. The key aspect of functional maps is their ability to capture deformations that align one manifold with another. Owing to their low-dimensional yet expressive nature, functional maps effectively incorporate global structures into the matching process. This approach provides a compelling solution to the challenges inherent in traditional point-wise correspondence methods." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 450, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 450, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 450, + 482, + 666 + ], + "type": "text", + "content": "Specifically, we improve zero-shot feature-based correspondence methods by transitioning from the pixel space to the function space, thereby enhancing the method's coherence and effectiveness. Traditional functional maps on manifolds rely on two geometric inputs: the Laplacian operator, which is crucial for computing the eigenfunction basis, and a local geometric descriptor, for the application of regularization losses. We adapt these components to the realm of images by employing visual features extracted from two distinct large vision models. Our approach diverges from traditional methods, which typically identify corresponding pixels between images through nearest neighbor search. Instead, we concentrate on optimizing a linear function map established on the eigenfunction basis defined by the first feature map, with the second feature map serving as a geometric regularizer. This process, notably unsupervised, marks a significant difference from conventional methods. Further augmenting our method's robustness, especially against occlusions, is the incorporation of a transformer module for tackling partial shape matching, as detailed in partial functional maps et al. [2]. Such integration of functional map concepts with feature-based methods in image analysis represents a cohesive and logical advancement in tackling the challenges of correspondence tasks." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 199 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 199 + ], + "type": "text", + "content": "We evaluate our framework on dense correspondence across various base networks, demonstrating consistent enhancements in matching accuracy and other functional properties like smoothness compared to the traditional nearest neighbor search. 
We highlight the qualitative results of our approach on the challenging cases with significant shape variations, viewpoint changes, and occlusions. We further demonstrate our effectiveness on keypoint correspondences and object affordance map transfer, showcasing its versatility in diverse scenarios." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 200, + 482, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 200, + 482, + 331 + ], + "spans": [ + { + "bbox": [ + 130, + 200, + 482, + 331 + ], + "type": "text", + "content": "In summary, our primary contribution is a novel zero-shot framework designed to derive correspondence maps from pre-trained features. Central to our approach is the concept of optimizing a functional map that establishes a relationship between the entire image contents, moving away from the conventional method of direct pixel-to-pixel correspondence searches. Our experimental results, evaluated on various standard datasets, demonstrate that our method produces correspondences that are not only smoother and more accurate but also exhibit greater global coherence compared to previous efforts. We believe that our techniques effectively uncover the underlying correspondence capabilities of the large-scale backbone networks. We hope that our work will serve as an inspiration for future research in general-purpose object correspondence." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 350, + 237, + 361 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 350, + 237, + 361 + ], + "spans": [ + { + "bbox": [ + 132, + 350, + 237, + 361 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 366, + 482, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 366, + 482, + 521 + ], + "spans": [ + { + "bbox": [ + 130, + 366, + 482, + 521 + ], + "type": "text", + "content": "Emergent correspondence from vision models Deep image networks have demonstrated remarkable robustness to geometric transformations, such as rotation, scaling, and perspective changes, leading to the emergence of dense correspondences [9, 28, 32, 39, 50, 54]. These transformations, predominantly rigid in nature, have been a focal point in previous studies. The research by Amir et al. [1] revealed that features extracted from DINOv1 [6] not only act as effective dense visual descriptors but also naturally induce semantic correspondences without direct supervision. This capability is further amplified in its successor, DINOv2 [29]. Beyond discriminative models, recent explorations have shown that generative models, such as diffusion models, also unveil emergent dense correspondences within their latent features [13, 43, 55]. Intriguingly, Zhang et al. [55] discovered that combining features from DINOv2 [29] with those from Stable Diffusion [36] significantly enhances correspondence quality." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 522, + 482, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 522, + 482, + 571 + ], + "spans": [ + { + "bbox": [ + 130, + 522, + 482, + 571 + ], + "type": "text", + "content": "Our study highlights a crucial gap: existing methods lack structural awareness when computing correspondences by nearest-neighbor queries of per-pixel features. Here, we propose representing the correspondence map within a functional space, offering a novel approach to this challenge." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 482, + 665 + ], + "type": "text", + "content": "Semantic correspondence Semantic correspondence [22] seeks to establish pixelwise matches across objects differing in poses, appearances, deformations, or even categories. Traditional approaches generally involve three stages [49]: feature extraction, cost volume construction, and displacement field [45-48] or parameterized transformation regression [15, 16, 33, 34, 40]. However, their reliance on smooth displacement fields or locally affine transformations hinders their ability to model complex object deformations or shape variations effectively." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 213 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 213 + ], + "type": "text", + "content": "Recent developments, inspired by the classical congealing method [18], focus on aligning multiple objects within the same class using learning techniques like DINOv1 features [10, 27] or GAN-synthesized data [31]. Despite their strong assumptions about data rigidity, these studies suggest that leveraging features and information from diverse tasks can enhance the quality of dense image correspondences. In our work, we further demonstrate that a structure-aware fusion of features learned from multiple tasks can significantly improve the quality of correspondence maps." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 223, + 483, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 223, + 483, + 366 + ], + "spans": [ + { + "bbox": [ + 130, + 223, + 483, + 366 + ], + "type": "text", + "content": "Functional maps Initially introduced by Ovsjanikov et al. [30] and further expanded by Aubry et al. [3], functional maps offer a method to represent shape correspondences as linear transformations between spectral embeddings. This is achieved using compact matrices based on eigenfunction basis. Enhancements in accuracy, efficiency, and robustness have been realized in subsequent studies [4, 14, 17, 26]. Moving away from traditional methods dependent on hand-crafted features [3, 42], recent developments have introduced various learning-based functional map frameworks. These utilize shape features learned via pairwise label supervision [21], geometric priors [11,37], or robust mesh features [5,8,19,41]. 
While traditionally employed for full-shape correspondence, functional maps have also been adapted to handle partial correspondences [2,35], thus aligning more closely with real-world scenarios." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 367, + 483, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 367, + 483, + 487 + ], + "spans": [ + { + "bbox": [ + 130, + 367, + 483, + 487 + ], + "type": "text", + "content": "While functional maps are extensively explored for 3D shape representations like meshes and point clouds, their application to 2D images has been limited due to the ambiguous manifold structure of RGB-value representations [51, 52]. Previous attempts at applying these maps to super-pixel image representations and utilizing their eigenfunctions as a basis [51, 52] typically result in significant information loss. This is often due to the coarse nature of pre-segmentation in images and the resultant inconsistency in super-pixel representation. In our work, we address these challenges by using the entire image as input for a large vision model, ensuring a consistent initial representation and stable global structure during transformations by functional maps." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 505, + 202, + 517 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 505, + 202, + 517 + ], + "spans": [ + { + "bbox": [ + 132, + 505, + 202, + 517 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 530, + 228, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 530, + 228, + 541 + ], + "spans": [ + { + "bbox": [ + 132, + 530, + 228, + 541 + ], + "type": "text", + "content": "3.1 Preliminaries" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "spans": [ + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": "Functional map Originally introduced in Ovsjanikov et al. [30], the functional map is a method for representing dense correspondences in the function space. This approach is based on the concept of mapping between function spaces defined on manifolds. 
Specifically, given two manifolds " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\mathcal{M}" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\mathcal{N}" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": ", we consider the spaces " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{M},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{N},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": ", each comprising all real-valued scalar functions on these manifolds, denoted as " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\varphi^{\\mathcal{M}}:\\mathcal{M}\\to \\mathbb{R}" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "\\varphi^{\\mathcal{N}}:\\mathcal{N}\\to \\mathbb{R}" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": ", respectively. We can express a bijective mapping " + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "inline_equation", + "content": "T:\\mathcal{M}\\rightarrow \\mathcal{N}" + }, + { + "bbox": [ + 130, + 550, + 482, + 646 + ], + "type": "text", + "content": " as a linear mapping between these function spaces, as follows:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 224, + 654, + 481, + 666 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 224, + 654, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 224, + 654, + 481, + 666 + ], + "type": "interline_equation", + "content": "T _ {F}: \\mathcal {F} (\\mathcal {M}, \\mathbb {R}) \\rightarrow \\mathcal {F} (\\mathcal {N}, \\mathbb {R}), \\quad f \\mapsto T _ {F} (f). \\tag {1}", + "image_path": "646f2f1c9ec61d67bc22dea568607a9474e60da3877a88b50373b3941dd9c3d6.jpg" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 147, + 119, + 462, + 264 + ], + "blocks": [ + { + "bbox": [ + 147, + 119, + 462, + 264 + ], + "lines": [ + { + "bbox": [ + 147, + 119, + 462, + 264 + ], + "spans": [ + { + "bbox": [ + 147, + 119, + 462, + 264 + ], + "type": "image", + "image_path": "dadbcc4c64ffed7c21636646208293159305c93b0a59b27453480501dde64093.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "lines": [ + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "spans": [ + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "type": "text", + "content": "Fig. 2: Eigenfunctions of the image Laplacian. We visualize the eigenfunctions of the graph Laplacian operator corresponding to the first 5 smallest eigenvalues " + }, + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "type": "inline_equation", + "content": "\\lambda_1, \\dots, \\lambda_5" + }, + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "type": "text", + "content": " (low frequency) as well as " + }, + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "type": "inline_equation", + "content": "\\lambda_{10}, \\lambda_{20}, \\lambda_{50}" + }, + { + "bbox": [ + 130, + 266, + 482, + 300 + ], + "type": "text", + "content": " (high frequency)." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": "To compute these mappings effectively, we expand the function spaces " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{M},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\mathcal{N},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " by introducing sets of basis functions, " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\Phi^{\\mathcal{M}} = \\{\\varphi_i^{\\mathcal{M}}\\}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\Phi^{\\mathcal{N}} = \\{\\varphi_i^{\\mathcal{N}}\\}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": ", for " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\mathcal{M}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "\\mathcal{N}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": ", respectively. 
Thus, any real-valued function " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "f\\in \\mathcal{F}(\\mathcal{M},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " can be represented as a linear combination of these basis functions: " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "f = \\sum_{i}a_{i}\\varphi_{i}^{\\mathcal{M}}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": ". Applying the operator " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "T_{F}" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 130, + 312, + 482, + 373 + ], + "type": "text", + "content": " leads to the equation:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 217, + 378, + 482, + 409 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 378, + 482, + 409 + ], + "spans": [ + { + "bbox": [ + 217, + 378, + 482, + 409 + ], + "type": "interline_equation", + "content": "T _ {F} (f) = T _ {F} \\left(\\sum_ {i} a _ {i} \\varphi_ {i} ^ {\\mathcal {M}}\\right) = \\sum_ {i} a _ {i} T _ {F} \\left(\\varphi_ {i} ^ {\\mathcal {M}}\\right). \\tag {2}", + "image_path": "0a926c55a513415b80ac34439e4df712d5fde85751e615da707959cd87de2149.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "spans": [ + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "text", + "content": "Each transformed function " + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "inline_equation", + "content": "T_{F}(\\varphi_{i}^{\\mathcal{M}}) \\in \\mathcal{F}(\\mathcal{N},\\mathbb{R})" + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "text", + "content": " can be further decomposed into a linear combination of " + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "inline_equation", + "content": "\\varphi_j^\\mathcal{N}" + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "text", + "content": ". Hence, we have " + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "inline_equation", + "content": "T_{F}(\\varphi_{i}^{\\mathcal{M}}) = \\sum_{j}c_{ij}\\varphi_{j}^{\\mathcal{N}}" + }, + { + "bbox": [ + 130, + 414, + 481, + 440 + ], + "type": "text", + "content": ", leading to:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 216, + 446, + 482, + 469 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 216, + 446, + 482, + 469 + ], + "spans": [ + { + "bbox": [ + 216, + 446, + 482, + 469 + ], + "type": "interline_equation", + "content": "T _ {F} (f) = \\sum_ {i} a _ {i} \\sum_ {j} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}} = \\sum_ {h} \\sum_ {i} a _ {i} c _ {i j} \\varphi_ {j} ^ {\\mathcal {N}}. 
\\tag {3}", + "image_path": "99e3a8a7458978cf90416a58de7a30697fe024693aea3b549fd3dd568b147de8.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "spans": [ + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": "For simplicity, the function " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": " is represented in a vector form with coefficients " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "\\mathbf{a} = (a_{1}, a_{2}, \\dots)^{t}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": ". Consequently, the transformation " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "T_{F}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": " on " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "\\mathbf{a}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": " is given by " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "T_{F}(\\mathbf{a}) = \\mathbf{C}\\mathbf{a}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": " is a matrix with elements " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "c_{ij}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": ", representing the " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": "-th coefficient of " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "T_{F}(\\varphi_{i}^{\\mathcal{M}})" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": " in the basis " + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "inline_equation", + "content": "\\{\\varphi_{j}^{\\mathcal{N}}\\}" + }, + { + "bbox": [ + 130, + 475, + 482, + 523 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "spans": [ + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "text", + "content": "To translate the functional map into point-to-point correspondences, we treat each point as a Dirac delta function in the function space. 
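As a concrete companion to the preliminaries above, the following is a minimal NumPy sketch (illustrative only, not the authors' code) of a functional map acting on spectral coefficients and of reading off a point-to-point map by treating each point as a delta function; the random orthonormal bases stand in for the Laplacian eigenfunctions used in the paper, and the nearest-neighbour conversion follows the rows-of-C-Phi strategy the text describes next.

import numpy as np

rng = np.random.default_rng(0)
n_M, n_N, k = 300, 280, 40                      # points per image graph, basis size

# Toy orthonormal bases standing in for the Laplacian eigenfunctions Phi^M, Phi^N
Phi_M = np.linalg.qr(rng.standard_normal((n_M, k)))[0]   # (n_M, k)
Phi_N = np.linalg.qr(rng.standard_normal((n_N, k)))[0]   # (n_N, k)

# Toy functional map C: maps spectral coefficients on M to coefficients on N
C = np.eye(k) + 0.05 * rng.standard_normal((k, k))

# Eqs. (2)-(3): a function f on M expands as f = Phi_M @ a; its image T_F(f)
# has coefficients C @ a in the basis Phi_N
f = rng.standard_normal(n_M)
a = Phi_M.T @ f                                 # coefficients of f (orthonormal basis)
f_mapped = Phi_N @ (C @ a)                      # T_F(f) evaluated on N

# Point-to-point recovery: point x on M is a delta function whose coefficient
# vector is the x-th row of Phi_M; map it with C and match the nearest row of Phi_N
mapped = Phi_M @ C.T                            # (n_M, k)
d2 = ((mapped[:, None, :] - Phi_N[None, :, :]) ** 2).sum(-1)
p2p = d2.argmin(axis=1)                         # index on N for every point on M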
Specifically, this conversion from the functional to the point-wise map is executed via a nearest neighbor search between the rows of " + }, + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "inline_equation", + "content": "\\mathbf{C}\\Phi^{\\mathcal{M}}" + }, + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "inline_equation", + "content": "\\Phi^{\\mathcal{N}}" + }, + { + "bbox": [ + 130, + 524, + 482, + 571 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 581, + 482, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 581, + 482, + 640 + ], + "spans": [ + { + "bbox": [ + 130, + 581, + 482, + 640 + ], + "type": "text", + "content": "Deep partial functional map The functional map framework, while adept at modeling perfect correspondence mappings between complete shapes [30], faces challenges when applied to real-world data that often have missing data and noise. This has led to the development of partial functional maps, as discussed in [2, 35]." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 482, + 665 + ], + "type": "text", + "content": "The primary challenge in adapting functional maps to partial shapes is the disruption of core assumptions, such as manifold completeness and bijective" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 185, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 185, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 185, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": "mappings. Atta et al. [2] address this challenge by introducing a feature refinement network, denoted as " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "g_{\\mathcal{R}}" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": ", which enhances the robustness of partial functional maps against shape partiality." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "spans": [ + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": "Consider " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": " as discretizations of the partial shapes " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "\\mathcal{M}" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "\\mathcal{N}" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": ", respectively. We construct a bipartite graph " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "(\\mathcal{V},\\mathcal{E})" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": ", with edges connecting every point " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "\\mathbf{x} \\in M" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": " to every point " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "\\mathbf{y} \\in N" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": ". The refinement module inputs per-point features " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "F^{M}" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "inline_equation", + "content": "F^{N}" + }, + { + "bbox": [ + 130, + 152, + 482, + 213 + ], + "type": "text", + "content": ", and updates these features via message passing on the bipartite graph. 
This process employs an attention mechanism, formulated as" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 230, + 219, + 482, + 244 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 219, + 482, + 244 + ], + "spans": [ + { + "bbox": [ + 230, + 219, + 482, + 244 + ], + "type": "interline_equation", + "content": "m _ {\\epsilon \\rightarrow i} = \\sum_ {j, (i, j) \\in \\mathcal {E}} \\operatorname {s o f t m a x} _ {j} \\left(q _ {i} ^ {T} k _ {j} / \\sqrt {d}\\right) v _ {j}, \\tag {4}", + "image_path": "cf2e11264e67e5012438b74c5d1cf686b24854b20a3c825986c7fa29195d11fc.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 250, + 348, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 250, + 348, + 262 + ], + "spans": [ + { + "bbox": [ + 131, + 250, + 348, + 262 + ], + "type": "text", + "content": "and the final updated value of node " + }, + { + "bbox": [ + 131, + 250, + 348, + 262 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 131, + 250, + 348, + 262 + ], + "type": "text", + "content": " is given by" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 210, + 270, + 481, + 283 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 210, + 270, + 481, + 283 + ], + "spans": [ + { + "bbox": [ + 210, + 270, + 481, + 283 + ], + "type": "interline_equation", + "content": "x _ {0} = x _ {0} + x _ {\\text {p o s}}, \\quad x _ {i + 1} = x _ {i} + \\operatorname {M L P} \\left( \\right.\\left[ \\right. x _ {i} \\left. \\right\\| m _ {\\epsilon \\rightarrow i} \\left. \\right]\\left. \\right), \\tag {5}", + "image_path": "2c7a7ffaa96c865b3d09532a022496698e7e61d6ce94a789dde9d929044a0bb4.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "inline_equation", + "content": "x_{\\mathrm{pos}}" + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "content": " represents the positional embedding, " + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "inline_equation", + "content": "[\\cdot \\| \\cdot ]" + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "content": " denotes concatenation, and MLP is a multilayer perceptron with ReLU activations and instance normalization. The refined features on the shape pair are denoted as " + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "inline_equation", + "content": "g_{\\mathcal{R}}(F^M)" + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "inline_equation", + "content": "g_{\\mathcal{R}}(F^{N})" + }, + { + "bbox": [ + 130, + 289, + 482, + 326 + ], + "type": "text", + "content": "." 
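A minimal PyTorch sketch of this kind of bipartite cross-attention refinement g_R, in the spirit of Eqs. (4)-(5): features of one image attend to features of the other, and each node is updated residually through an MLP applied to the concatenation of the node feature and its incoming message. The single attention head, the layer sizes, and the omission of positional embeddings and instance normalization are simplifications of mine, not the authors' exact architecture.

import torch
import torch.nn as nn

class CrossAttentionRefine(nn.Module):
    """One round of bipartite message passing, roughly following Eqs. (4)-(5)."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x_src: torch.Tensor, x_tgt: torch.Tensor) -> torch.Tensor:
        # x_src: (n_src, dim) nodes being updated; x_tgt: (n_tgt, dim) nodes sending messages
        q, k, v = self.q(x_src), self.k(x_tgt), self.v(x_tgt)
        attn = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # Eq. (4)
        m = attn @ v                                                  # messages to each node
        # Eq. (5)-style residual update on [x_i || m]; positional embedding and
        # instance normalization from the paper are omitted in this sketch
        return x_src + self.mlp(torch.cat([x_src, m], dim=-1))

# toy usage: refine descriptor-like features F^M against F^N and vice versa
refine = CrossAttentionRefine(dim=64)
F_M, F_N = torch.randn(300, 64), torch.randn(280, 64)
F_M_ref = refine(F_M, F_N)
F_N_ref = refine(F_N, F_M)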
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "spans": [ + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": "To understand this message passing process, consider a region " + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "inline_equation", + "content": "\\Omega" + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": " exclusive to shape " + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": " and absent in shape " + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": ". Let " + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "inline_equation", + "content": "F_{\\Omega}" + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": " denote a feature assignment function restricted to " + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "inline_equation", + "content": "\\Omega" + }, + { + "bbox": [ + 130, + 326, + 481, + 373 + ], + "type": "text", + "content": ". When projecting these features onto the function basis, the functional map equation becomes:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 253, + 380, + 481, + 394 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 253, + 380, + 481, + 394 + ], + "spans": [ + { + "bbox": [ + 253, + 380, + 481, + 394 + ], + "type": "interline_equation", + "content": "\\mathbf {C} \\varphi^ {M} F _ {\\Omega} (M) = \\varphi^ {N} F _ {\\Omega} (N). \\tag {6}", + "image_path": "dec103080ccf051abfccc42ccab89c1a4dcc8d6f8fcdcd1ea77ea7cceccb2f8b.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "spans": [ + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": "This equation holds true if and only if " + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "inline_equation", + "content": "F_{\\Omega}(\\mathbf{x}) = 0" + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": " implies " + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "inline_equation", + "content": "F_{\\Omega}(\\mathbf{y}) = 0" + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "inline_equation", + "content": "\\mathbf{x} \\in M, \\mathbf{y} \\in N" + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": ". Hence, effective communication between the regions on " + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 131, + 400, + 482, + 449 + ], + "type": "text", + "content": " is crucial, enabling feature synchronization over overlapping regions while diminishing the influence of features outside these overlaps." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 131, + 466, + 369, + 479 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 466, + 369, + 479 + ], + "spans": [ + { + "bbox": [ + 131, + 466, + 369, + 479 + ], + "type": "text", + "content": "3.2 Feature Consensus with Functional Maps" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": "An overview of our framework is depicted in Fig. 1. Given a pair of images " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": ", our setup includes two distinct pixel-wise feature extraction networks, yielding two sets of features: " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "E^{M}, E^{N}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "F^{M}, F^{N}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": ". For instance, " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "E^{M}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "E^{N}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " might be DINOv2 features, while " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "F^{M}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "inline_equation", + "content": "F^{N}" + }, + { + "bbox": [ + 130, + 486, + 482, + 534 + ], + "type": "text", + "content": " could be Stable Diffusion features." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "spans": [ + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": "The primary objective is to derive a functional map " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " between the two function spaces " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(M,\\mathbb{R})" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(N,\\mathbb{R})" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": ". 
The core of our method involves using " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "E^{M}" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "E^{N}" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " to calculate the Laplacian eigenfunction basis and apply " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "F^{M}" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "inline_equation", + "content": "F^{N}" + }, + { + "bbox": [ + 131, + 534, + 482, + 618 + ], + "type": "text", + "content": " for introducing regularizations in optimizing the functional map. In essence, our method optimizes the functional map derived from one set of features to achieve a \"consensus\" with the other set, providing a more comprehensive and robust mapping between the function spaces of the images." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "content": "Image Laplacian from visual features For an image feature of dimensions " + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "inline_equation", + "content": "(h, w)" + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "content": " is the height and " + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "inline_equation", + "content": "w" + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "content": " is the width, we view it as a grid graph comprising " + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "inline_equation", + "content": "h \\times w" + }, + { + "bbox": [ + 131, + 629, + 482, + 666 + ], + "type": "text", + "content": " nodes; each node is connected to its four adjacent neighbors. However, a" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 480, + 152 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 480, + 152 + ], + "type": "text", + "content": "graph constructed naively would lack awareness of the image content, and its Laplacian eigenspaces would correspond to the conventional Fourier frequency space." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "spans": [ + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": "Instead, we assign weights to the graph edges based on the first set of image features " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "E^{M}" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "E^{N}" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": ". For two adjacent patches " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "\\mathbf{x}" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "\\mathbf{y}" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": " in image " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": " (a similar definition applies for " + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 130, + 152, + 481, + 188 + ], + "type": "text", + "content": "), the weight of the edge between them is given by:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 242, + 195, + 482, + 224 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 195, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 242, + 195, + 482, + 224 + ], + "type": "interline_equation", + "content": "\\| e _ {\\mathbf {x y}} \\| = \\exp \\left(- \\frac {\\| E _ {\\mathbf {x}} ^ {M} - E _ {\\mathbf {y}} ^ {M} \\|}{\\sigma}\\right), \\tag {7}", + "image_path": "b4145fdd4712a1e62cab63f6413be0aaa3191f6b0863923b42eb433d9b8f58d8.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 230, + 366, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 230, + 366, + 241 + ], + "spans": [ + { + "bbox": [ + 130, + 230, + 366, + 241 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 230, + 366, + 241 + ], + "type": "inline_equation", + "content": "\\sigma" + }, + { + "bbox": [ + 130, + 230, + 366, + 241 + ], + "type": "text", + "content": " denotes the median of all the feature values." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 242, + 481, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 242, + 481, + 302 + ], + "spans": [ + { + "bbox": [ + 130, + 242, + 481, + 302 + ], + "type": "text", + "content": "Next, we compute the graph Laplacian " + }, + { + "bbox": [ + 130, + 242, + 481, + 302 + ], + "type": "inline_equation", + "content": "\\varDelta_M" + }, + { + "bbox": [ + 130, + 242, + 481, + 302 + ], + "type": "text", + "content": " and utilize its eigenfunctions as the basis. In alignment with previous research, we adopt a reduced function space spanned by the first 200 eigenfunctions. To compute the Laplacian eigen decompositions, we employ the LOBPCG algorithm, known for its efficiency. Fig. 2 presents examples of these Laplacian eigenfunctions." 
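A sketch of the feature-weighted image Laplacian and its spectral basis as described above, using SciPy sparse matrices and LOBPCG for the smallest eigenpairs. The random feature grid, the 50 eigenfunctions (the paper uses 200), and the use of the median edge difference as the scale sigma are illustrative choices of mine, not the authors' implementation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def feature_graph_laplacian(E, sigma=None):
    """E: (h, w, d) feature grid -> sparse graph Laplacian with Eq. (7) edge weights."""
    h, w, _ = E.shape
    Ef = E.reshape(h * w, -1)
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, diffs = [], [], []
    # 4-neighbour grid edges: horizontal and vertical pairs
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel())
        cols.append(b.ravel())
        diffs.append(np.linalg.norm(Ef[a.ravel()] - Ef[b.ravel()], axis=1))
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    diffs = np.concatenate(diffs)
    if sigma is None:
        sigma = np.median(diffs)        # median-based scale (illustrative stand-in)
    wgt = np.exp(-diffs / sigma)        # Eq. (7) edge weight
    W = sp.coo_matrix((wgt, (rows, cols)), shape=(h * w, h * w))
    W = W + W.T                         # undirected graph
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    return L.tocsr()

h, w, d, k = 32, 32, 64, 50
E_M = np.random.default_rng(0).standard_normal((h, w, d))
L_M = feature_graph_laplacian(E_M)
X0 = np.random.default_rng(1).standard_normal((h * w, k))
eigvals, eigvecs = lobpcg(L_M, X0, largest=False, maxiter=200)  # smallest eigenpairs
Phi_M = eigvecs                          # (h*w, k) eigenfunction basis for image M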
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": "Feature as function regularizer For the second set of features " + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "inline_equation", + "content": "F^M" + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "inline_equation", + "content": "F^N" + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": ", we employ them as descriptor functions and impose a constraint on " + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "inline_equation", + "content": "\\mathbf{C}F^M \\approx F^N" + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": ". Given the incompleteness of shape correspondences in image pairs, due for example to occlusion within the object and by other objects, we utilize the attention-based feature refinement network " + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "inline_equation", + "content": "g_{\\mathcal{R}}" + }, + { + "bbox": [ + 130, + 312, + 482, + 394 + ], + "type": "text", + "content": " from deep partial functional maps [2]. This network refines the features, which are then projected onto the function basis:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 225, + 395, + 481, + 408 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 395, + 481, + 408 + ], + "spans": [ + { + "bbox": [ + 225, + 395, + 481, + 408 + ], + "type": "interline_equation", + "content": "\\tilde {F} ^ {M} = \\varphi^ {M} g _ {\\mathcal {R}} \\left(F ^ {M}\\right), \\quad \\tilde {F} ^ {N} = \\varphi^ {N} g _ {\\mathcal {R}} \\left(F ^ {N}\\right). \\tag {8}", + "image_path": "1d2d702e416570982278b610f2cde936b6ea00e5f825781a90cc6acdf7d2edc7.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 411, + 478, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 411, + 478, + 423 + ], + "spans": [ + { + "bbox": [ + 130, + 411, + 478, + 423 + ], + "type": "text", + "content": "The descriptor-preserving loss applied to these refined features is formulated as:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 258, + 429, + 481, + 443 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 258, + 429, + 481, + 443 + ], + "spans": [ + { + "bbox": [ + 258, + 429, + 481, + 443 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {f e a t}} = \\left\\| \\mathbf {C} \\tilde {F} ^ {M} - \\tilde {F} ^ {N} \\right\\| _ {2}. \\tag {9}", + "image_path": "a931cec64c39e9873dd156a35f7c74db86d18ea57f198006861197c6e60a58c3.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 449, + 481, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 449, + 481, + 485 + ], + "spans": [ + { + "bbox": [ + 130, + 449, + 481, + 485 + ], + "type": "text", + "content": "To enhance the regularity of the functional map, our optimization objective incorporates two additional regularization terms. 
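Eqs. (8)-(9) project the (optionally refined) descriptor features of both images onto their spectral bases and penalise the mismatch after applying C. A minimal PyTorch sketch, assuming Phi_M and Phi_N are the eigenfunction bases from the previous step and refine is a g_R-style module such as the cross-attention sketch above; the Frobenius norm is used here as the matrix 2-norm.

import torch

def descriptor_loss(C, Phi_M, Phi_N, F_M, F_N, refine=None):
    """L_feat = || C (Phi_M^T F_M) - (Phi_N^T F_N) ||   (Eqs. 8-9, sketch)."""
    if refine is not None:                       # optional g_R refinement
        F_M, F_N = refine(F_M, F_N), refine(F_N, F_M)
    F_M_hat = Phi_M.t() @ F_M                    # (k, d) spectral coefficients of F^M
    F_N_hat = Phi_N.t() @ F_N                    # (k, d) spectral coefficients of F^N
    return torch.linalg.norm(C @ F_M_hat - F_N_hat)

# toy usage with random stand-ins
k, d, n_M, n_N = 50, 64, 300, 280
C = torch.eye(k, requires_grad=True)
Phi_M, Phi_N = torch.randn(n_M, k), torch.randn(n_N, k)
F_M, F_N = torch.randn(n_M, d), torch.randn(n_N, d)
loss = descriptor_loss(C, Phi_M, Phi_N, F_M, F_N)
loss.backward()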
First, we integrate a compactness regularization into the functional map matrix:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 252, + 492, + 481, + 512 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 492, + 481, + 512 + ], + "spans": [ + { + "bbox": [ + 252, + 492, + 481, + 512 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {d i a g}} = \\left(\\left| \\lambda_ {i} ^ {M} - \\lambda_ {j} ^ {N} \\right| c _ {i j}\\right) ^ {2}, \\tag {10}", + "image_path": "fec0b231e699ccb003887d55f5cc16f391dd8e1d550fa0bec1f374c5cf1fc0e4.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "spans": [ + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\lambda_{i}^{M}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\lambda_{j}^{N}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": " represent the " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": "-th and " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": "-th eigenvalues of the graph Laplacian matrices " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\Delta_{M}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\Delta_{N}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": ", respectively. For images with similar spectral distributions of eigenvalues, minimizing " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{diag}}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": " encourages a near-diagonal structure in " + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 130, + 518, + 482, + 602 + ], + "type": "text", + "content": ". This regularization is based on the principle that eigenvalues' magnitudes are indicative of the frequencies of their corresponding eigenfunctions, and eigenfunctions with similar frequencies are more likely to correspond, as suggested by Huang et al. [14]." 
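Eq. (10) can be read as a weighted penalty on every entry of C, where the weight is the gap between the two Laplacian eigenvalues, so that entries relating very different frequencies are suppressed. A small sketch; summing over all entries and the row/column orientation are my reading of the formula.

import torch

def diag_regularizer(C, evals_rows, evals_cols):
    """L_diag = sum_ij ( |lambda_i - lambda_j| * c_ij )^2   (Eq. 10, sketch)."""
    # evals_rows / evals_cols: Laplacian eigenvalues indexing C's rows and columns;
    # the orientation must match the convention used when building C
    gap = (evals_rows[:, None] - evals_cols[None, :]).abs()
    return ((gap * C) ** 2).sum()

k = 50
C = torch.eye(k, requires_grad=True)
evals_M = torch.linspace(0.0, 2.0, k)
evals_N = torch.linspace(0.0, 2.2, k)
print(diag_regularizer(C, evals_N, evals_M))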
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 146, + 603, + 436, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 603, + 436, + 615 + ], + "spans": [ + { + "bbox": [ + 146, + 603, + 436, + 615 + ], + "type": "text", + "content": "Next, we introduce a bijectivity constraint to the functional map:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 264, + 620, + 481, + 634 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 620, + 481, + 634 + ], + "spans": [ + { + "bbox": [ + 264, + 620, + 481, + 634 + ], + "type": "interline_equation", + "content": "\\mathbf {C} ^ {M \\rightarrow N} \\cdot \\mathbf {C} ^ {N \\rightarrow M} = \\mathbf {I}. \\tag {11}", + "image_path": "bfd7da14072ab7b7b8fc551a6802d64b8f0e21e3841b3c7cc832f2de9198c99c.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "content": "This can be interpreted as a special instance of the cycle-consistency regularization for image collections as in Wang et al. [51] when the number of images is two." + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 184, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 184, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": "To implement this constraint, in line with Wang et al. [51], we define two sets of estimizable latent bases: " + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^M = \\{Z_i^M\\}" + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^N = \\{Z_i^N\\}" + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": ", corresponding to the function spaces " + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(M,\\mathbb{R})" + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(N,\\mathbb{R})" + }, + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": " of both source and target images. 
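Eq. (11) is a hard bijectivity condition on the pair of maps estimated in the two directions; in an optimization setting it is natural to enforce it softly. A minimal sketch of such a soft penalty follows; the paper goes on to realize this coupling through the learnable latent bases Z^M, Z^N and a consistency loss, which is not reproduced here.

import torch

def bijectivity_loss(C_MN, C_NM):
    """Soft version of Eq. (11): C^{M->N} C^{N->M} should be the identity."""
    eye = torch.eye(C_MN.shape[0], device=C_MN.device)
    return (torch.linalg.norm(C_MN @ C_NM - eye)
            + torch.linalg.norm(C_NM @ C_MN - eye))

k = 50
C_MN = torch.eye(k, requires_grad=True)
C_NM = torch.eye(k, requires_grad=True)
print(bijectivity_loss(C_MN, C_NM))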
The consistency loss is then defined as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 257, + 175, + 482, + 194 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 175, + 482, + 194 + ], + "spans": [ + { + "bbox": [ + 257, + 175, + 482, + 194 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {c o n s}} = \\left\\| \\mathbf {C Z} ^ {M} - \\mathbf {Z} ^ {N} \\right\\| _ {2}. \\tag {12}", + "image_path": "38a5cc7b354671b58540d2f9828e810cc4967065b50374193a084ec11d8eda83.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "spans": [ + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": "To prevent degenerate solutions where " + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^M" + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^N" + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": " could be trivially zero, we introduce an additional constraint requiring both " + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^M" + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^N" + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": " to satisfy " + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^t\\mathbf{Z} = \\mathbf{I}" + }, + { + "bbox": [ + 131, + 203, + 483, + 242 + ], + "type": "text", + "content": ". Integrating all these components, our final optimization objective is:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 228, + 253, + 481, + 279 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 228, + 253, + 481, + 279 + ], + "spans": [ + { + "bbox": [ + 228, + 253, + 481, + 279 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\operatorname {a r g m i n} _ {\\mathbf {C}} \\mathcal {L} _ {\\text {f e a t}} + \\lambda_ {\\text {d i a g}} \\mathcal {L} _ {\\text {d i a g}} + \\lambda_ {\\text {c o n s}} \\mathcal {L} _ {\\text {c o n s}}, \\tag {13} \\\\ s. t. \\quad (\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {Z} ^ {M} = \\mathbf {I}, (\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {Z} ^ {N} = \\mathbf {I}. 
\\\\ \\end{array}", + "image_path": "e589b4de1948b577d775cf375254d9593734a3b099325258c2c072b7c3ae6d98.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "spans": [ + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "content": "Optimization We jointly optimize the weights of the image feature refinement network " + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "inline_equation", + "content": "g_{\\mathcal{R}}" + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "content": ", the functional map " + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "content": ", and the latent basis " + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{M}" + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{N}" + }, + { + "bbox": [ + 131, + 296, + 482, + 332 + ], + "type": "text", + "content": " for the input image pair. The full loss function is formulated as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 201, + 344, + 481, + 396 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 201, + 344, + 481, + 396 + ], + "spans": [ + { + "bbox": [ + 201, + 344, + 481, + 396 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} = \\mathcal {L} _ {\\mathrm {f e a t}} + \\lambda_ {\\mathrm {d i a g}} \\mathcal {L} _ {\\mathrm {d i a g}} + \\lambda_ {\\mathrm {c o n s}} \\mathcal {L} _ {\\mathrm {c o n s}} \\\\ + \\lambda_ {Z} \\left(\\operatorname {t r} \\left((\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {M}\\right) + \\operatorname {t r} \\left((\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {W} \\mathbf {Z} ^ {N}\\right)\\right) \\tag {14} \\\\ + \\lambda_ {\\mathrm {r e g}} \\left(\\left\\| (\\mathbf {Z} ^ {M}) ^ {t} \\mathbf {Z} ^ {M} - \\mathbf {I} \\right\\| _ {2} + \\left\\| (\\mathbf {Z} ^ {N}) ^ {t} \\mathbf {Z} ^ {N} - \\mathbf {I} \\right\\| _ {2}\\right), \\\\ \\end{array}", + "image_path": "a0f3c5d41b4424044c2d529553333cd86db1384726b968b7cc27b22af6f3cbab.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "spans": [ + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "inline_equation", + "content": "\\mathbf{W} = \\mathbf{I} + \\mathbf{C}^t\\mathbf{C}" + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": ". The terms " + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "inline_equation", + "content": "\\operatorname{tr}(\\mathbf{Z}^t\\mathbf{W}\\mathbf{Z})" + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": " are variations of Eq. 
(13) with " + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^M" + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^N" + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": " as the primary variables rather than " + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "inline_equation", + "content": "\\mathbf{C}" + }, + { + "bbox": [ + 130, + 406, + 482, + 432 + ], + "type": "text", + "content": ", as discussed in Wang et al. [51]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 455, + 230, + 468 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 455, + 230, + 468 + ], + "spans": [ + { + "bbox": [ + 131, + 455, + 230, + 468 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 480, + 482, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 480, + 482, + 554 + ], + "spans": [ + { + "bbox": [ + 130, + 480, + 482, + 554 + ], + "type": "text", + "content": "Dataset We evaluate our method primarily on the TSS dataset [44], comprising 400 image pairs from three subsets: FG3DCAR [20], JODS [38], and PASCAL [12], all of which include dense correspondence annotations. Additionally, we perform evaluations on the SPair-71k dataset [24], which features sparse annotations of keypoint correspondences across 18 categories. For this dataset, we sample 20 pairs from each category for our analysis, following the prior work [55]." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "content": "Baselines Our comparison primarily focuses on emergent correspondences from various visual models and feature fusion techniques. We utilize feature extraction networks such as DINOv1 (ViT-S/8), DINOv2 (ViT-S/14 and ViT-B/14), and Stable Diffusion, which are prevalent and extensively researched in a wide range of visual perception tasks. In terms of feature fusion, we benchmark against the feature concatenation approach proposed by Zhang et al. [55], testing different combinations of features. Additionally, we list other methods designed for image correspondence tasks that involve stronger supervision or task-specific designs." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 132, + 162, + 479, + 342 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "type": "text", + "content": "Table 1: Results for dense correspondences on TSS [44]. The baselines are classified into three categories based on their training setups: supervised, unsupervised with task-specific designs, and zero-shot methods without task- or dataset-specific designs. * indicates backbones fine-tuned on this dataset." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 132, + 162, + 479, + 342 + ], + "lines": [ + { + "bbox": [ + 132, + 162, + 479, + 342 + ], + "spans": [ + { + "bbox": [ + 132, + 162, + 479, + 342 + ], + "type": "table", + "html": "
<table><tr><td>Setting</td><td>Method</td><td>FG3DCar</td><td>JODS</td><td>Pascal</td><td>Avg.</td></tr>
<tr><td>Supervised</td><td>SCOT [23]</td><td>95.3</td><td>81.3</td><td>57.7</td><td>78.1</td></tr>
<tr><td></td><td>CATs* [7]</td><td>92.1</td><td>78.9</td><td>64.2</td><td>78.4</td></tr>
<tr><td></td><td>PWarpC-CATs* [49]</td><td>95.5</td><td>85.0</td><td>85.5</td><td>88.7</td></tr>
<tr><td>Unsupervised task-specific</td><td>CNNGeo [33]</td><td>90.1</td><td>76.4</td><td>56.3</td><td>74.4</td></tr>
<tr><td></td><td>PARN [15]</td><td>89.5</td><td>75.9</td><td>71.2</td><td>78.8</td></tr>
<tr><td></td><td>GLU-Net [46]</td><td>93.2</td><td>73.3</td><td>71.1</td><td>79.2</td></tr>
<tr><td></td><td>Semantic-GLU-Net [48]</td><td>95.3</td><td>82.2</td><td>78.2</td><td>85.2</td></tr>
<tr><td>Unsupervised zero-shot</td><td>DINOv1-ViT-S/8 [1]</td><td>68.7</td><td>44.7</td><td>36.7</td><td>52.7</td></tr>
<tr><td></td><td>DINOv2-ViT-B/14</td><td>81.2</td><td>68.4</td><td>51.5</td><td>69.4</td></tr>
<tr><td></td><td>Stable Diffusion (SD)</td><td>92.1</td><td>62.6</td><td>48.4</td><td>72.5</td></tr>
<tr><td></td><td>Concat. DINOv2 + SD [55]</td><td>92.9</td><td>73.8</td><td>59.6</td><td>78.7</td></tr>
<tr><td></td><td>FMap DINOv2(basis) + DINOv2(loss)</td><td>83.5</td><td>69.2</td><td>52.7</td><td>71.0</td></tr>
<tr><td></td><td>FMap SD(basis) + SD(loss)</td><td>80.0</td><td>63.4</td><td>51.5</td><td>67.8</td></tr>
<tr><td></td><td>FMap DINOv2(basis) + SD(loss) (ours)</td><td>84.8</td><td>70.4</td><td>53.5</td><td>72.2</td></tr>
<tr><td></td><td>FMap DINOv2(loss) + SD(basis) (ours)</td><td>93.1</td><td>74.0</td><td>59.9</td><td>78.9</td></tr></table>
", + "image_path": "8c82d250d84a0f9d2f41d837e8258024344eb49759fdce5a473693427cf7de2b.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "spans": [ + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "text", + "content": "Evaluation metrics For both dense and sparse correspondences, we adopt the Percentage of Correct Keypoints (PCK) metric [53] with a threshold of " + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "inline_equation", + "content": "\\kappa \\cdot \\max(h, w)" + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "inline_equation", + "content": "\\kappa" + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "text", + "content": " is a positive integer, and " + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "inline_equation", + "content": "(h, w)" + }, + { + "bbox": [ + 130, + 357, + 483, + 477 + ], + "type": "text", + "content": " represents the image dimensions in the TSS dataset or the instance bounding-box dimensions in the SPair-71k dataset. Additionally, for dense correspondences on the TSS dataset, we assess spatial coherence using a smoothness metric [55]. This involves extracting a semantic flow (i.e., a 2D motion vector field from the source to the target image) and computing its first-order difference. In the case of sparse correspondences on the Spair-71k dataset, we further calculate the Mean Squared Error (MSE) on the keypoints to quantify mapping distortions." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 498, + 276, + 510 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 498, + 276, + 510 + ], + "spans": [ + { + "bbox": [ + 131, + 498, + 276, + 510 + ], + "type": "text", + "content": "4.1 Dense Correspondence" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 521, + 482, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 521, + 482, + 592 + ], + "spans": [ + { + "bbox": [ + 130, + 521, + 482, + 592 + ], + "type": "text", + "content": "Table 1 presents the results of dense correspondences on the TSS dataset. Following [55], we majorly compare to other zero-shot unsupervised methods, among which we achieve the best performances. Specifically, we outperform Zhang et al. [55] with the same pair of features by utilizing the features in a more structure-aware manner. We also list as references the performances of fully supervised methods and unsupervised methods with task-specific training." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "content": "We also evaluate an ablated version of our framework by computing the basis functions and losses using the same set of features (the third and fourth rows from the last), which give significantly worse results compared to our full model. On the other side, it can still give better results than directly using one feature with nearest neighbor queries (for example, FMap DINOv2(basis) + DINOv2(loss) versus DINOv2-ViT-B/14). 
This shows that structure-awareness" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 184, + 91, + 447, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 91, + 447, + 103 + ], + "spans": [ + { + "bbox": [ + 184, + 91, + 447, + 103 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 151, + 117, + 459, + 300 + ], + "blocks": [ + { + "bbox": [ + 151, + 117, + 459, + 300 + ], + "lines": [ + { + "bbox": [ + 151, + 117, + 459, + 300 + ], + "spans": [ + { + "bbox": [ + 151, + 117, + 459, + 300 + ], + "type": "image", + "image_path": "a119e8696e30fcd8daddfaa65976a80350bc418d0d8c63c586ebc3e00ccb69e1.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 302, + 482, + 378 + ], + "lines": [ + { + "bbox": [ + 130, + 302, + 482, + 378 + ], + "spans": [ + { + "bbox": [ + 130, + 302, + 482, + 378 + ], + "type": "text", + "content": "Fig. 3: Dense correspondences on SPair-71k [24] Image Pairs. Each example displays pixel-wise mappings from source to target images in rainbow colors (second column for source coordinates, fourth and fifth columns for computed target coordinates) and color transfers (last two columns). Specifically, we demonstrate the challenging examples including significant viewpoint changes (first and second row), shape variations (first and third row), and occlusions (third row). Our framework achieves more consistent mappings with its global structure-awareness." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 389, + 480, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 389, + 480, + 411 + ], + "spans": [ + { + "bbox": [ + 130, + 389, + 480, + 411 + ], + "type": "text", + "content": "can naturally lead to better correspondences even without introducing any additional information." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 413, + 482, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 413, + 482, + 509 + ], + "spans": [ + { + "bbox": [ + 130, + 413, + 482, + 509 + ], + "type": "text", + "content": "Fig. 3 shows the qualitative results of dense correspondences computed with the DINOv2-ViT-B/14 and Stable Diffusion networks. We compare side-by-side the feature fusion results using pre-normalized concatenation [55] and our method. In all these examples, our framework provides smoother and more consistent mappings with its global structure-awareness. Specifically, we highlight two challenging examples: the airplanes in the second row with large camera-view changes, and the birds in the third row with large shape variations as well as occlusions. We also visualize the matrices for the linear functional maps in Fig. 6." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 521, + 482, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 521, + 482, + 582 + ], + "spans": [ + { + "bbox": [ + 130, + 521, + 482, + 582 + ], + "type": "text", + "content": "Feature fusion with different networks Tab. 2 presents the accuracy and smoothness of correspondences derived from features of various network backbones. When compared to using individual features or their concatenation [55], our functional-map-based framework demonstrates superior results in both metrics across all tested configurations." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 480, + 641 + ], + "type": "text", + "content": "Feature fusion with different layers Tab. 3 presents the results of fusing features from different layers within the same network. Our experiments involve layers 9 and 11 of DINOv2-ViT-S/14 and DINOv2-ViT-B/14. In all tested setups, our framework demonstrates superior performance compared to baseline methods." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "content": "Additionally, a comparative analysis was performed on the choice of layers for DINOv2-ViT-B/14, specifically by fusing the features of layer 11 with those of" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 144, + 129, + 470, + 277 + ], + "blocks": [ + { + "bbox": [ + 182, + 114, + 430, + 125 + ], + "lines": [ + { + "bbox": [ + 182, + 114, + 430, + 125 + ], + "spans": [ + { + "bbox": [ + 182, + 114, + 430, + 125 + ], + "type": "text", + "content": "Table 2: Fusing the features from different networks." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 144, + 129, + 470, + 277 + ], + "lines": [ + { + "bbox": [ + 144, + 129, + 470, + 277 + ], + "spans": [ + { + "bbox": [ + 144, + 129, + 470, + 277 + ], + "type": "table", + "html": "
<table><tr><td></td><td>Method</td><td>PCK0.05↑</td><td>PCK0.1↑</td><td>EPE↓</td><td>Smth.↓</td></tr>
<tr><td>DINOv1-ViT-S/8</td><td>raw</td><td>53.9</td><td>76.8</td><td>46.1</td><td>12.90</td></tr>
<tr><td>DINOv2-ViT-S/14</td><td>raw</td><td>69.6</td><td>85.0</td><td>30.8</td><td>7.98</td></tr>
<tr><td>DINOv2-ViT-B/14</td><td>raw</td><td>69.4</td><td>87.8</td><td>30.9</td><td>10.46</td></tr>
<tr><td>Stable Diffusion (SD)</td><td>raw</td><td>72.5</td><td>83.8</td><td>37.5</td><td>6.41</td></tr>
<tr><td>DINOv1-ViT-S/8 + DINOv2-ViT-B/14</td><td>Concat. [55]</td><td>69.9</td><td>88.1</td><td>31.0</td><td>10.33</td></tr>
<tr><td></td><td>FMap (ours)</td><td>72.2</td><td>90.3</td><td>27.7</td><td>7.95</td></tr>
<tr><td>DINOv2-ViT-S/14 + SD</td><td>Concat. [55]</td><td>78.1</td><td>89.9</td><td>27.5</td><td>6.58</td></tr>
<tr><td></td><td>FMap (ours)</td><td>71.5</td><td>90.0</td><td>26.3</td><td>6.47</td></tr>
<tr><td>DINOv2-ViT-B/14 + SD</td><td>Concat. [55]</td><td>78.7</td><td>90.7</td><td>26.4</td><td>6.81</td></tr>
<tr><td></td><td>FMap (ours)</td><td>78.9</td><td>91.1</td><td>26.1</td><td>5.74</td></tr></table>
", + "image_path": "313dc3431d594e05cb604b0fed688d2864f7d05633af8d8da46b1ec93d31a521.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 157, + 303, + 457, + 418 + ], + "blocks": [ + { + "bbox": [ + 141, + 288, + 471, + 300 + ], + "lines": [ + { + "bbox": [ + 141, + 288, + 471, + 300 + ], + "spans": [ + { + "bbox": [ + 141, + 288, + 471, + 300 + ], + "type": "text", + "content": "Table 3: Fusing the features from different layers of the same network." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 157, + 303, + 457, + 418 + ], + "lines": [ + { + "bbox": [ + 157, + 303, + 457, + 418 + ], + "spans": [ + { + "bbox": [ + 157, + 303, + 457, + 418 + ], + "type": "table", + "html": "
<table><tr><td>Backbone</td><td>Method</td><td>PCK0.05↑</td><td>PCK0.1↑</td><td>EPE↓</td><td>Smth.↓</td></tr>
<tr><td>DINOv2-ViT-S/14</td><td>Layer 9</td><td>67.2</td><td>84.8</td><td>36.5</td><td>9.64</td></tr>
<tr><td></td><td>Layer 11</td><td>70.8</td><td>88.1</td><td>31.0</td><td>9.25</td></tr>
<tr><td></td><td>Concat. [55]</td><td>70.5</td><td>88.1</td><td>31.0</td><td>9.25</td></tr>
<tr><td></td><td>FMap (ours)</td><td>70.8</td><td>89.1</td><td>29.1</td><td>6.60</td></tr>
<tr><td>DINOv2-ViT-B/14</td><td>Layer 9</td><td>57.2</td><td>85.4</td><td>34.5</td><td>10.66</td></tr>
<tr><td></td><td>Layer 11</td><td>69.4</td><td>87.8</td><td>30.9</td><td>10.46</td></tr>
<tr><td></td><td>Concat. [55]</td><td>70.0</td><td>87.9</td><td>30.9</td><td>10.24</td></tr>
<tr><td></td><td>FMap (ours)</td><td>70.6</td><td>89.8</td><td>25.9</td><td>8.27</td></tr></table>
", + "image_path": "aadd44f57c3b3437226295df1d8765a4b0285b63a5bda6ce51410c51fb59851e.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 433, + 482, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 433, + 482, + 517 + ], + "spans": [ + { + "bbox": [ + 130, + 433, + 482, + 517 + ], + "type": "text", + "content": "layers 8, 9, 10, and layer 11 tokens. The results, as depicted in Tab. 4, indicate that our functional map approach consistently surpasses both raw and concatenated features across all layer combinations. We also observed that greater feature distinction occurs when the two layers are more distant from each other. Our framework effectively leverages this distinction, extracting better correspondences by integrating their information. As shown in Tab. 4, optimal performance in EPE is achieved using features from layers 8 and 11." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 536, + 230, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 536, + 230, + 547 + ], + "spans": [ + { + "bbox": [ + 132, + 536, + 230, + 547 + ], + "type": "text", + "content": "4.2 More Results" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 557, + 482, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 557, + 482, + 629 + ], + "spans": [ + { + "bbox": [ + 130, + 557, + 482, + 629 + ], + "type": "text", + "content": "Keypoint correspondence Tab. 5 presents the results for sparse keypoint correspondences on SPair-71k [24]. Compared to feature concatenation [55], our method demonstrates comparable or higher PCK (with different thresholds) and exhibits lower MSE errors. Note that the selected keypoints are extremely sparse on the images, which could potentially introduce sampling biases compared to evaluations of dense correspondences." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 630, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 630, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 630, + 482, + 665 + ], + "type": "text", + "content": "Fig. 4 showcases qualitative keypoint matching results. Our method is compared side-by-side with results obtained using feature concatenation, where our approach consistently demonstrates robustness in these challenging scenarios" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 184, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 184, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 145, + 150, + 466, + 299 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": "Table 4: Results on different layer choices for feature fusion. 
This experiment involves DINOv2-ViT-B/14, wherein its layer 11 features (values) are fused with layers 8, 9, 10, and layer 11 tokens, respectively." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 145, + 150, + 466, + 299 + ], + "lines": [ + { + "bbox": [ + 145, + 150, + 466, + 299 + ], + "spans": [ + { + "bbox": [ + 145, + 150, + 466, + 299 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td colspan="2">Layer 8</td><td colspan="2">Layer 9</td><td colspan="2">Layer 10</td><td colspan="2">Layer 11 token</td></tr>
<tr><td></td><td>EPE↓</td><td>Smth.↓</td><td>EPE↓</td><td>Smth.↓</td><td>EPE↓</td><td>Smth.↓</td><td>EPE↓</td><td>Smth.↓</td></tr>
<tr><td>Raw [1]</td><td>59.1</td><td>16.10</td><td>56.8</td><td>16.06</td><td>56.8</td><td>15.40</td><td>53.3</td><td>13.20</td></tr>
<tr><td>Concat. [55]</td><td>53.5</td><td>14.80</td><td>55.4</td><td>13.90</td><td>56.7</td><td>16.70</td><td>55.3</td><td>16.10</td></tr>
<tr><td>FMap (ours)</td><td>41.8</td><td>11.95</td><td>45.2</td><td>9.52</td><td>41.9</td><td>12.43</td><td>45.3</td><td>10.65</td></tr></table>
", + "image_path": "10550a948f61db57b91303769278e6fbad298c41ec4b5e040f5cd0bc001d2cea.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 130, + 300, + 480, + 324 + ], + "lines": [ + { + "bbox": [ + 130, + 300, + 480, + 324 + ], + "spans": [ + { + "bbox": [ + 130, + 300, + 480, + 324 + ], + "type": "text", + "content": "(a) Image pairs with similar geometric properties. (a) The baseline method incorrectly maps (a) the right ear of the horse to the left ear, (b) the right ear of the cow to the left ear, and (c) a point corresponding to the front feet of the horse to the hind feet." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 153, + 329, + 465, + 417 + ], + "blocks": [ + { + "bbox": [ + 153, + 329, + 465, + 417 + ], + "lines": [ + { + "bbox": [ + 153, + 329, + 465, + 417 + ], + "spans": [ + { + "bbox": [ + 153, + 329, + 465, + 417 + ], + "type": "image", + "image_path": "9ce91c64156e4470f510c87c8e5f57e33a827907c0a7d23026015ea296047627.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 418, + 480, + 443 + ], + "lines": [ + { + "bbox": [ + 130, + 418, + 480, + 443 + ], + "spans": [ + { + "bbox": [ + 130, + 418, + 480, + 443 + ], + "type": "text", + "content": "(b) Image pairs with significant differences in shapes and viewpoints. The baseline method incorrectly maps (a) all points on the pot to the plant, (b) a point on the child's ear to the woman's cheek, and (c) a point at the seat corner to another chair's armrest." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 130, + 449, + 482, + 471 + ], + "lines": [ + { + "bbox": [ + 130, + 449, + 482, + 471 + ], + "spans": [ + { + "bbox": [ + 130, + 449, + 482, + 471 + ], + "type": "text", + "content": "Fig. 4: Sparse keypoint correspondences on SPair-71k [24] image pairs. Correct matches are connected with blue lines and incorrect matches with red lines." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 486, + 482, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 486, + 482, + 545 + ], + "spans": [ + { + "bbox": [ + 130, + 486, + 482, + 545 + ], + "type": "text", + "content": "and effectively captures the geometric properties of the features. Fig. 4a further illustrates the effectiveness of our method in scenarios where the target image contains many similar points, like the legs of a horse. In contrast, the baseline struggles to capture the global structure, often leading to mappings of similar but incorrect points." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "content": "Affordance transfer We further showcase an application of our method in transferring tool affordances between images from the RGB-D Part Affordance Dataset [25]. This dataset features different types of affordances annotated on each object, represented as heat maps. Fig. 5 illustrates our results in transferring these affordance heat maps. Such distributional functions across pixels pose a challenge to raw pixel-wise maps due to the potential distortion of their overall structure during interpolation. 
However, these functions can be naturally modeled with functional maps, as our approach demonstrates." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 205, + 140, + 410, + 208 + ], + "blocks": [ + { + "bbox": [ + 131, + 114, + 482, + 137 + ], + "lines": [ + { + "bbox": [ + 131, + 114, + 482, + 137 + ], + "spans": [ + { + "bbox": [ + 131, + 114, + 482, + 137 + ], + "type": "text", + "content": "Table 5: Results for sparse keypoint correspondences on SPair-7k [24]. All results in this experiment are with the DINOv2-ViT-B/14 backbone." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 205, + 140, + 410, + 208 + ], + "lines": [ + { + "bbox": [ + 205, + 140, + 410, + 208 + ], + "spans": [ + { + "bbox": [ + 205, + 140, + 410, + 208 + ], + "type": "table", + "html": "
<table><tr><td>Method</td><td>PCK@0.1↑</td><td>PCK@0.2↑</td><td>MSE↓</td></tr>
<tr><td>DINOv2</td><td>52.3</td><td>68.0</td><td>105.0</td></tr>
<tr><td>Stable Diffusion</td><td>51.2</td><td>64.1</td><td>120.5</td></tr>
<tr><td>Concat. [55]</td><td>57.2</td><td>72.2</td><td>97.2</td></tr>
<tr><td>FMap (ours)</td><td>55.3</td><td>72.6</td><td>88.0</td></tr></table>
", + "image_path": "0f574568f5ca9b5d118bf0340380701c3a17daa676825e38452a370d923592fd.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 149, + 216, + 218, + 338 + ], + "blocks": [ + { + "bbox": [ + 149, + 216, + 218, + 338 + ], + "lines": [ + { + "bbox": [ + 149, + 216, + 218, + 338 + ], + "spans": [ + { + "bbox": [ + 149, + 216, + 218, + 338 + ], + "type": "image", + "image_path": "f7405dbb95a23c27cd1304dc54112e4f0bb28731c4800b11ebf3c6283602f64f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 342, + 482, + 407 + ], + "lines": [ + { + "bbox": [ + 130, + 342, + 482, + 407 + ], + "spans": [ + { + "bbox": [ + 130, + 342, + 482, + 407 + ], + "type": "text", + "content": "Fig. 5: Transferring tool affordances represented as heat maps. We treat affordance heat maps as functions defined on the source and the target image. By optimizing the functional map between the source and the target, we manage to transfer the function after applying the functional map to it directly following Eq. (1). We employ features from DINOV2-ViT-B/14 and Stable Diffusion to compute the functional maps in this experiment." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 229, + 216, + 299, + 338 + ], + "blocks": [ + { + "bbox": [ + 229, + 216, + 299, + 338 + ], + "lines": [ + { + "bbox": [ + 229, + 216, + 299, + 338 + ], + "spans": [ + { + "bbox": [ + 229, + 216, + 299, + 338 + ], + "type": "image", + "image_path": "4c7dc59a23d5c3244e219f611a09a5fcc497b29752c3fd2440d5803b07210426.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 311, + 216, + 380, + 338 + ], + "blocks": [ + { + "bbox": [ + 311, + 216, + 380, + 338 + ], + "lines": [ + { + "bbox": [ + 311, + 216, + 380, + 338 + ], + "spans": [ + { + "bbox": [ + 311, + 216, + 380, + 338 + ], + "type": "image", + "image_path": "e96b43e66ab5f38b47ef38162204ed0b775c87c9a44e4f8c07f8e3d5a4a775ef.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 392, + 216, + 462, + 338 + ], + "blocks": [ + { + "bbox": [ + 392, + 216, + 462, + 338 + ], + "lines": [ + { + "bbox": [ + 392, + 216, + 462, + 338 + ], + "spans": [ + { + "bbox": [ + 392, + 216, + 462, + 338 + ], + "type": "image", + "image_path": "1527ee56ddff30182621917b926cd5b735e57f5d3c6f215147d24eb439b75f97.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 425, + 482, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 425, + 482, + 510 + ], + "spans": [ + { + "bbox": [ + 130, + 425, + 482, + 510 + ], + "type": "text", + "content": "Ablation Studies In addition to the feature ablations shown in Tab. 1 and discussed in Sec. 4.1, we also present an ablation on the regularization terms for the functional map optimization. Tab. 6 shows the results optimized with different regularization losses. The diagonality and consistency regularizations greatly improve the accuracy of the mapping. Fig. 6 visualizes the functional map matrices with and without the regularizations. The near-diagonal mappings are preferred because they match the function basis with similar frequencies." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 532, + 222, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 532, + 222, + 544 + ], + "spans": [ + { + "bbox": [ + 132, + 532, + 222, + 544 + ], + "type": "text", + "content": "5 Discussions" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 556, + 482, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 556, + 482, + 628 + ], + "spans": [ + { + "bbox": [ + 130, + 556, + 482, + 628 + ], + "type": "text", + "content": "As shown in Sec. 4.1, our functional map framework effectively integrates features from different network layers. This integration, particularly from just two distinct layers, outperforms the conventional approach of using same-layer features or naively concatenating different features. This finding opens up promising avenues for enhancing the generalization capabilities of large-scale vision models without additional fine-tuning." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "type": "text", + "content": "Moreover, the interpretability of learned features in the functional map framework is crucial, particularly in domains like medical imaging or autonomous systems. Our approach, as shown in Fig. 3, enables impressive image editing" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 184, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 184, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 152, + 118, + 462, + 244 + ], + "blocks": [ + { + "bbox": [ + 152, + 118, + 462, + 244 + ], + "lines": [ + { + "bbox": [ + 152, + 118, + 462, + 244 + ], + "spans": [ + { + "bbox": [ + 152, + 118, + 462, + 244 + ], + "type": "image", + "image_path": "a99b7b97f8593675eee2b24c2d04422142738a02bb82bf246f76123d0cc59b55.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 131, + 246, + 482, + 280 + ], + "lines": [ + { + "bbox": [ + 131, + 246, + 482, + 280 + ], + "spans": [ + { + "bbox": [ + 131, + 246, + 482, + 280 + ], + "type": "text", + "content": "Fig. 6: Functional map matrices with and without regularization losses. Enforcing the compactness loss (Eq. (10)) centers the non-zero matrix entries around the diagonals to match the function basis of similar frequencies." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 174, + 310, + 438, + 378 + ], + "blocks": [ + { + "bbox": [ + 131, + 285, + 483, + 308 + ], + "lines": [ + { + "bbox": [ + 131, + 285, + 483, + 308 + ], + "spans": [ + { + "bbox": [ + 131, + 285, + 483, + 308 + ], + "type": "text", + "content": "Table 6: Ablation on the loss terms. 
All results in the experiment are with DINOv2-ViT-B/14 and Stable Diffusion on the SPair-71k dataset." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 174, + 310, + 438, + 378 + ], + "lines": [ + { + "bbox": [ + 174, + 310, + 438, + 378 + ], + "spans": [ + { + "bbox": [ + 174, + 310, + 438, + 378 + ], + "type": "table", + "html": "
<table><tr><td>Loss</td><td>PCK@0.1↑</td><td>PCK@0.2↑</td><td>MSE↓</td></tr>
<tr><td>Lfeat (no regularization)</td><td>44.6</td><td>65.5</td><td>95.3</td></tr>
<tr><td>Lfeat + Ldiag</td><td>52.9</td><td>69.5</td><td>97.9</td></tr>
<tr><td>Lfeat + Lcons</td><td>52.8</td><td>69.7</td><td>100.3</td></tr>
<tr><td>Lfeat + Ldiag + Lcons (full loss)</td><td>55.3</td><td>72.6</td><td>88.0</td></tr></table>
", + "image_path": "389e9131e63c93c33c7aaadb8317ed448d7e5e8e5f6411c173d1eeb0a7e8b122.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 389, + 482, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 389, + 482, + 415 + ], + "spans": [ + { + "bbox": [ + 130, + 389, + 482, + 415 + ], + "type": "text", + "content": "outcomes without generative models. This leads to the intriguing possibility of combining our method with generative models to enhance image quality." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 131, + 425, + 227, + 437 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 425, + 227, + 437 + ], + "spans": [ + { + "bbox": [ + 131, + 425, + 227, + 437 + ], + "type": "text", + "content": "6 Conclusions" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 440, + 482, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 440, + 482, + 571 + ], + "spans": [ + { + "bbox": [ + 130, + 440, + 482, + 571 + ], + "type": "text", + "content": "The emergence of correspondences from large-scale vision models not explicitly trained for this task is noteworthy. While nearest-neighbor analyses provide a direct exploration, they overlook the structure inherent not only in the image contents but also in the model features. Our work leverages this embedded structure via functional maps, aiming to generate point-wise accurate and globally coherent correspondences. Despite its simplicity, it significantly enhances the matching results with zero-shot inference on image pairs without additional supervision or task-specific training. While the core concepts of our approach are rooted in 3D shape correspondence literature from graphics [30], our implementation using deep feature-based functional maps bridges this area with cutting-edge vision research." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "text", + "content": "Limitations and future work The structure-awareness of functional maps relies on the manifold assumption of its underlying domain, making our current framework more suitable for object-centric images than complex scenes with diverse compositionalities. Examples of the latter include matching a horse to a herd of horses or matching two indoor scenes. However, this issue might be addressed using additional image segmentation techniques that decompose the image into objects and parts, or by exploring matches between quotient spaces." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 114, + 197, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 114, + 197, + 126 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 197, + 126 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 137, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 138, + 137, + 480, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 137, + 480, + 159 + ], + "spans": [ + { + "bbox": [ + 138, + 137, + 480, + 159 + ], + "type": "text", + "content": "1. Amir, S., Gandelsman, Y., Bagon, S., Dekel, T.: Deep vit features as dense visual descriptors. arXiv preprint arXiv:2112.05814 2(3), 4 (2021)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 160, + 474, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 160, + 474, + 170 + ], + "spans": [ + { + "bbox": [ + 138, + 160, + 474, + 170 + ], + "type": "text", + "content": "2. Attaiki, S., Pai, G., Ovsjanikov, M.: Dpfm: Deep partial functional maps (2021)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 171, + 480, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 171, + 480, + 191 + ], + "spans": [ + { + "bbox": [ + 138, + 171, + 480, + 191 + ], + "type": "text", + "content": "3. Aubry, M., Schlickewei, U., Cremers, D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In: ICCV Workshops (2011)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 192, + 480, + 213 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 192, + 480, + 213 + ], + "spans": [ + { + "bbox": [ + 138, + 192, + 480, + 213 + ], + "type": "text", + "content": "4. Burghard, O., Dieckmann, A., Klein, R.: Embedding shapes with green's functions for global shape matching. Computers & Graphics 68, 1-10 (2017)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 213, + 480, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 213, + 480, + 224 + ], + "spans": [ + { + "bbox": [ + 138, + 213, + 480, + 224 + ], + "type": "text", + "content": "5. Cao, D., Bernard, F.: Unsupervised deep multi-shape matching. In: ECCV (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 224, + 481, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 224, + 481, + 245 + ], + "spans": [ + { + "bbox": [ + 138, + 224, + 481, + 245 + ], + "type": "text", + "content": "6. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: ICCV (2021)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 246, + 480, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 246, + 480, + 277 + ], + "spans": [ + { + "bbox": [ + 138, + 246, + 480, + 277 + ], + "type": "text", + "content": "7. Cho, S., Hong, S., Jeon, S., Lee, Y., Sohn, K., Kim, S.: Cats: Cost aggregation transformers for visual correspondence. 
Advances in Neural Information Processing Systems 34, 9011-9023 (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 277, + 480, + 299 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 277, + 480, + 299 + ], + "spans": [ + { + "bbox": [ + 138, + 277, + 480, + 299 + ], + "type": "text", + "content": "8. Donati, N., Corman, E., Ovsjanikov, M.: Deep orientation-aware functional maps: Tackling symmetry issues in shape matching. In: CVPR (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 300, + 480, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 300, + 480, + 331 + ], + "spans": [ + { + "bbox": [ + 138, + 300, + 480, + 331 + ], + "type": "text", + "content": "9. Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., Sattler, T.: D2-net: A trainable cnn for joint description and detection of local features. In: CVPR (2019)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 332, + 480, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 332, + 480, + 363 + ], + "spans": [ + { + "bbox": [ + 138, + 332, + 480, + 363 + ], + "type": "text", + "content": "10. Gupta, K., Jampani, V., Esteves, C., Shrivastava, A., Makadia, A., Snavely, N., Kar, A.: ASIC: Aligning sparse in-the-wild image collections. arXiv preprint arXiv:2303.16201 (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 364, + 480, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 364, + 480, + 385 + ], + "spans": [ + { + "bbox": [ + 138, + 364, + 480, + 385 + ], + "type": "text", + "content": "1. Halimi, O., Litany, O., Rodola, E., Bronstein, A.M., Kimmel, R.: Unsupervised learning of dense shape correspondence. In: CVPR (2019)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 386, + 480, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 386, + 480, + 406 + ], + "spans": [ + { + "bbox": [ + 138, + 386, + 480, + 406 + ], + "type": "text", + "content": "2. Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S., Malik, J.: Semantic contours from inverse detectors. In: ICCV (2011)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 407, + 480, + 438 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 407, + 480, + 438 + ], + "spans": [ + { + "bbox": [ + 138, + 407, + 480, + 438 + ], + "type": "text", + "content": "3. Hedlin, E., Sharma, G., Mahajan, S., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: Unsupervised semantic correspondence using stable diffusion. arXiv preprint arXiv:2305.15581 (2023)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 440, + 480, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 440, + 480, + 460 + ], + "spans": [ + { + "bbox": [ + 138, + 440, + 480, + 460 + ], + "type": "text", + "content": "4. Huang, Q., Wang, F., Guibas, L.: Functional map networks for analyzing and exploring large shape collections. ACM TOG 33(4), 1-11 (2014)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 461, + 480, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 461, + 480, + 482 + ], + "spans": [ + { + "bbox": [ + 138, + 461, + 480, + 482 + ], + "type": "text", + "content": "5. Jeon, S., Kim, S., Min, D., Sohn, K.: Parn: Pyramidal affine regression networks for dense semantic correspondence. 
In: ECCV (2018)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 483, + 480, + 503 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 483, + 480, + 503 + ], + "spans": [ + { + "bbox": [ + 138, + 483, + 480, + 503 + ], + "type": "text", + "content": "6. Kim, S., Lin, S., Jeon, S.R., Min, D., Sohn, K.: Recurrent transformer networks for semantic correspondence (2018)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 504, + 480, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 504, + 480, + 525 + ], + "spans": [ + { + "bbox": [ + 138, + 504, + 480, + 525 + ], + "type": "text", + "content": "7. Kovnatsky, A., Bronstein, M.M., Bronstein, A.M., Glashoff, K., Kimmel, R.: Coupled quasi-harmonic bases. In: Comput. Graph. Forum (2013)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 525, + 480, + 546 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 525, + 480, + 546 + ], + "spans": [ + { + "bbox": [ + 138, + 525, + 480, + 546 + ], + "type": "text", + "content": "8. Learned-Miller, E.G.: Data driven image models through continuous joint alignment IEEE TPAMI 28(2), 236-250 (2005)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 547, + 480, + 568 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 547, + 480, + 568 + ], + "spans": [ + { + "bbox": [ + 138, + 547, + 480, + 568 + ], + "type": "text", + "content": "9. Li, L., Donati, N., Ovsjanikov, M.: Learning multi-resolution functional maps with spectral attention for robust shape matching (2022)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 138, + 569, + 480, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 569, + 480, + 589 + ], + "spans": [ + { + "bbox": [ + 138, + 569, + 480, + 589 + ], + "type": "text", + "content": "20. Lin, Y.L., Morariu, V.I., Hsu, W., Davis, L.S.: Jointly optimizing 3d model fitting and fine-grained classification. In: ECCV (2014)" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 138, + 590, + 480, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 590, + 480, + 611 + ], + "spans": [ + { + "bbox": [ + 138, + 590, + 480, + 611 + ], + "type": "text", + "content": "21. Litany, O., Remez, T., Rodola, E., Bronstein, A., Bronstein, M.: Deep functional maps: Structured prediction for dense shape correspondence. In: ICCV (2017)" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 138, + 612, + 480, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 612, + 480, + 632 + ], + "spans": [ + { + "bbox": [ + 138, + 612, + 480, + 632 + ], + "type": "text", + "content": "22. Liu, C., Yuen, J., Torralba, A.: Sift flow: Dense correspondence across scenes and its applications. IEEE TPAMI 33(5), 978-994 (2010)" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 138, + 633, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 633, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 633, + 480, + 665 + ], + "type": "text", + "content": "23. Liu, Y., Zhu, L., Yamada, M., Yang, Y.: Semantic correspondence as an optimal transport problem. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 
4463-4472 (2020)" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 117, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 132, + 117, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 117, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 132, + 117, + 480, + 138 + ], + "type": "text", + "content": "24. Min, J., Lee, J., Ponce, J., Cho, M.: Spair-71k: A large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543 (2019)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "spans": [ + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "type": "text", + "content": "25. Myers, A., Teo, C.L., Fermüller, C., Aloimonos, Y.: Affordance detection of tool parts from geometric features (2015)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 162, + 480, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 162, + 480, + 182 + ], + "spans": [ + { + "bbox": [ + 133, + 162, + 480, + 182 + ], + "type": "text", + "content": "26. Nogneng, D., Ovsjanikov, M.: Informative descriptor preservation via commutativity for shape matching. In: Comput. Graph. Forum (2017)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 184, + 480, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 184, + 480, + 205 + ], + "spans": [ + { + "bbox": [ + 133, + 184, + 480, + 205 + ], + "type": "text", + "content": "27. Ofri-Amar, D., Geyer, M., Kasten, Y., Dekel, T.: Neural congealing: Aligning images to a joint semantic atlas. In: CVPR (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 206, + 480, + 225 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 206, + 480, + 225 + ], + "spans": [ + { + "bbox": [ + 133, + 206, + 480, + 225 + ], + "type": "text", + "content": "28. Ono, Y., Trulls, E., Fua, P., Yi, K.M.: Lf-net: Learning local features from images (2018)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 133, + 227, + 480, + 258 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 227, + 480, + 258 + ], + "spans": [ + { + "bbox": [ + 133, + 227, + 480, + 258 + ], + "type": "text", + "content": "29. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al.: Dinov2: Learning robust visual features without supervision. 
arXiv preprint arXiv:2304.07193 (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 133, + 260, + 480, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 260, + 480, + 292 + ], + "spans": [ + { + "bbox": [ + 133, + 260, + 480, + 292 + ], + "type": "text", + "content": "30. Ovsjanikov, M., Ben-Chen, M., Solomon, J., Butscher, A., Guibas, L.: Functional maps: a flexible representation of maps between shapes. ACM TOG 31(4), 1-11 (2012)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 133, + 293, + 480, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 293, + 480, + 314 + ], + "spans": [ + { + "bbox": [ + 133, + 293, + 480, + 314 + ], + "type": "text", + "content": "31. Peebles, W., Zhu, J.Y., Zhang, R., Torralba, A., Efros, A.A., Shechtman, E.: Gan-supervised dense visual alignment. In: CVPR (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 133, + 316, + 480, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 316, + 480, + 335 + ], + "spans": [ + { + "bbox": [ + 133, + 316, + 480, + 335 + ], + "type": "text", + "content": "32. Revaud, J., De Souza, C., Humenberger, M., Weinzaepfel, P.: R2d2: Reliable and repeatable detector and descriptor (2019)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 133, + 337, + 480, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 337, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 133, + 337, + 480, + 357 + ], + "type": "text", + "content": "33. Rocco, I., Arandjelovic, R., Sivic, J.: Convolutional neural network architecture for geometric matching. In: CVPR (2017)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 133, + 359, + 480, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 359, + 480, + 379 + ], + "spans": [ + { + "bbox": [ + 133, + 359, + 480, + 379 + ], + "type": "text", + "content": "34. Rocco, I., Arandjelovic, R., Sivic, J.: End-to-end weakly-supervised semantic alignment. In: CVPR (2018)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 133, + 380, + 480, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 380, + 480, + 402 + ], + "spans": [ + { + "bbox": [ + 133, + 380, + 480, + 402 + ], + "type": "text", + "content": "35. Rodola, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. In: Comput. Graph. Forum (2017)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 133, + 403, + 480, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 403, + 480, + 424 + ], + "spans": [ + { + "bbox": [ + 133, + 403, + 480, + 424 + ], + "type": "text", + "content": "36. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 133, + 425, + 480, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 425, + 480, + 445 + ], + "spans": [ + { + "bbox": [ + 133, + 425, + 480, + 445 + ], + "type": "text", + "content": "37. Roufosse, J.M., Sharma, A., Ovsjanikov, M.: Unsupervised deep learning for structured shape matching. 
In: ICCV (2019)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 133, + 447, + 480, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 447, + 480, + 468 + ], + "spans": [ + { + "bbox": [ + 133, + 447, + 480, + 468 + ], + "type": "text", + "content": "38. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: CVPR (2013)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 133, + 469, + 480, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 469, + 480, + 490 + ], + "spans": [ + { + "bbox": [ + 133, + 469, + 480, + 490 + ], + "type": "text", + "content": "39. Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: Superglue: Learning feature matching with graph neural networks. In: CVPR (2020)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 133, + 491, + 480, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 491, + 480, + 511 + ], + "spans": [ + { + "bbox": [ + 133, + 491, + 480, + 511 + ], + "type": "text", + "content": "40. Seo, P.H., Lee, J., Jung, D., Han, B., Cho, M.: Attentive semantic alignment with offset-aware correlation kernels. In: ECCV (2018)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 133, + 513, + 480, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 513, + 480, + 533 + ], + "spans": [ + { + "bbox": [ + 133, + 513, + 480, + 533 + ], + "type": "text", + "content": "41. Sharp, N., Attaiki, S., Crane, K., Ovsjanikov, M.: Diffusionnet: Discretization agnostic learning on surfaces. ACM TOG 41(3), 1-16 (2022)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 133, + 535, + 480, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 535, + 480, + 555 + ], + "spans": [ + { + "bbox": [ + 133, + 535, + 480, + 555 + ], + "type": "text", + "content": "42. Sun, J., Ovsjanikov, M., Guibas, L.: A concise and provably informative multi-scale signature based on heat diffusion. In: Comput. Graph. Forum (2009)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 133, + 557, + 480, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 557, + 480, + 578 + ], + "spans": [ + { + "bbox": [ + 133, + 557, + 480, + 578 + ], + "type": "text", + "content": "43. Tang, L., Jia, M., Wang, Q., Phoo, C.P., Hariharan, B.: Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881 (2023)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 133, + 578, + 480, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 578, + 480, + 599 + ], + "spans": [ + { + "bbox": [ + 133, + 578, + 480, + 599 + ], + "type": "text", + "content": "44. Taniai, T., Sinha, S.N., Sato, Y.: Joint recovery of dense correspondence and cosegmentation in two images. In: CVPR (2016)" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 133, + 601, + 480, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 601, + 480, + 621 + ], + "spans": [ + { + "bbox": [ + 133, + 601, + 480, + 621 + ], + "type": "text", + "content": "45. 
Truong, P., Danelljan, M., Gool, L.V., Timofte, R.: Gocor: Bringing globally optimized correspondence volumes into your neural network (2020)" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 133, + 623, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 623, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 133, + 623, + 480, + 643 + ], + "type": "text", + "content": "46. Truong, P., Danelljan, M., Timofte, R.: Glu-net: Global-local universal network for dense flow and correspondences. In: CVPR (2020)" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 133, + 644, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 644, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 644, + 480, + 665 + ], + "type": "text", + "content": "47. Truong, P., Danelljan, M., Van Gool, L., Timofte, R.: Learning accurate dense correspondences and when to trust them. In: CVPR (2021)" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 217, + 102 + ], + "type": "text", + "content": "Cheng et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 304 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "type": "text", + "content": "48. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Warp consistency for unsupervised learning of dense correspondences. In: ICCV (2021)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 138, + 480, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 138, + 480, + 160 + ], + "spans": [ + { + "bbox": [ + 132, + 138, + 480, + 160 + ], + "type": "text", + "content": "49. Truong, P., Danelljan, M., Yu, F., Van Gool, L.: Probabilistic warp consistency for weakly-supervised semantic correspondences. In: CVPR (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 160, + 480, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 160, + 480, + 182 + ], + "spans": [ + { + "bbox": [ + 132, + 160, + 480, + 182 + ], + "type": "text", + "content": "50. Tyszkiiewicz, M., Fua, P., Trulls, E.: Disk: Learning local features with policy gradient (2020)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 182, + 480, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 182, + 480, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 182, + 480, + 205 + ], + "type": "text", + "content": "51. Wang, F., Huang, Q., Guibas, L.J.: Image co-segmentation via consistent functional maps. 
In: ICCV (2013)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "spans": [ + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "type": "text", + "content": "52. Wang, F., Huang, Q., Ovsjanikov, M., Guibas, L.J.: Unsupervised multi-class joint image segmentation. In: CVPR (2014)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 226, + 480, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 226, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 226, + 480, + 248 + ], + "type": "text", + "content": "53. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. IEEE TPAMI 35(12), 2878-2890 (2012)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 248, + 480, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 248, + 480, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 248, + 480, + 270 + ], + "type": "text", + "content": "54. Yi, K.M., Trulls, E., Lepetit, V., Fua, P.: Lift: Learned invariant feature transform In: ECCV (2016)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 270, + 480, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 270, + 480, + 304 + ], + "spans": [ + { + "bbox": [ + 132, + 270, + 480, + 304 + ], + "type": "text", + "content": "55. Zhang, J., Herrmann, C., Hur, J., Cabrera, L.P., Jampani, V., Sun, D., Yang, M.H.: A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence. arXiv preprint arXiv:2305.15347 (2023)" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 185, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Image Feature Consensus with Deep Functional Maps" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_content_list.json b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ab864b41e4cfefb7ee6e5b4e7bdb605ad88495f5 --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_content_list.json @@ -0,0 +1,2101 @@ +[ + { + "type": "text", + "text": "Zero-Shot Multi-Object Scene Completion", + "text_level": 1, + "bbox": [ + 259, + 141, + 741, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Shun Iwase $^{1,2}$ , Katherine Liu $^{2}$ , Vitor Guizilini $^{2}$ , Adrien Gaidon $^{2}$ , Kris Kitani $^{1,\\star}$ , Rares Ambrus $^{2,\\star}$ , and Sergey Zakharov $^{2,\\star}$", + "bbox": [ + 267, + 189, + 733, + 220 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1 Carnegie Mellon University", + "$^{2}$ 
Toyota Research Institute" + ], + "bbox": [ + 401, + 232, + 599, + 260 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/94a420cf1808dd372b6b02b11ac2ae0db122c5606ced637ce65257c7c364fd75.jpg", + "image_caption": [ + "Fronr View" + ], + "image_footnote": [], + "bbox": [ + 217, + 295, + 305, + 349 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/27744617545a63b0e9081ea48a5034b506831f788138e5d5ecbbbd5303bdda21.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 323, + 296, + 413, + 352 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c6ec8197a39ff9c9034c6c3ac15898c7aebbbcfd5cab7262a1c76cf46c44e041.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 413, + 297, + 500, + 351 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b5f7ca3cdd94a16ad1d9bd92d76dd19e8baf495848a9d399e6b2b7d0934d137a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 295, + 591, + 348 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b8e9ef344ae3431f3ca52200e140a0872b2d61ad27673b321a0d0255be896c79.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 594, + 299, + 687, + 349 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/1e772d8508331de254e6ab1bcea06516f9f1a9e016958fd814dc1e11af54f086.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 692, + 297, + 779, + 349 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/cd8ec304519fbb25a27eabfdf46d78a28be15dacfdb2c29a76e895bd927ca750.jpg", + "image_caption": [ + "RGB-D Image", + "Fig. 1: Given an RGB-D image and the foreground mask of multiple objects not seen during training, our method predicts their complete 3D shapes quickly and accurately, including occluded areas. (Left) Synthetic image results. (Right) Zero-shot generalization to a real-world image of household objects with noisy depth data. Our 3D results are rotated with respect to the input to highlight completions in occluded regions." + ], + "image_footnote": [], + "bbox": [ + 217, + 354, + 305, + 407 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/53ebc4921a5773cb85a71908e98f9b4b11808f0b94329d211b00b476febebabe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 308, + 362, + 413, + 406 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/8db3e66a586be6def3c098d4d27244b7af8ea77e82d963a1ce7a617aae02824f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 413, + 361, + 500, + 407 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/90e5f82e917c60fb23766ccfe9f170c45a30748dbc62fb797b09d94bec19c1f8.jpg", + "image_caption": [ + "Bae" + ], + "image_footnote": [], + "bbox": [ + 506, + 354, + 591, + 407 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/7da9167f0d2c15e995ac8f5dc7b1e35cfbc96f18458831f2f64a9768a3608345.jpg", + "image_caption": [ + "Completed 3D Shape" + ], + "image_footnote": [], + "bbox": [ + 612, + 353, + 678, + 407 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/7bf33584df2eb6171c35fc5acc6a5c34a3690bdb079222da6ffd289860951336.jpg", + "image_caption": [ + "Ground-Truth" + ], + "image_footnote": [], + "bbox": [ + 700, + 352, + 767, + 407 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image. 
Despite notable advancements in single-object 3D shape completion, high-quality reconstructions in highly cluttered real-world multi-object scenes remains a challenge. To address this issue, we propose OctMAE, an architecture that leverages an Octree U-Net and a latent 3D MAE to achieve high-quality and near real-time multi-object scene completion through both local and global geometric reasoning. Because a naive 3D MAE can be computationally intractable and memory intensive even in the latent space, we introduce a novel occlusion masking strategy and adopt 3D rotary embeddings, which significantly improve the runtime and scene completion quality. To generalize to a wide range of objects in diverse scenes, we create a large-scale photorealistic dataset, featuring a diverse set of 12K 3D object models from the Objaverse dataset that are rendered in multi-object scenes with physics-based positioning. Our method outperforms the current state-of-the-art on both synthetic and real-world datasets and demonstrates a strong zero-shot capability. https://sh8.io/#/oct_mae", + "bbox": [ + 261, + 535, + 740, + 785 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Equal advising.", + "bbox": [ + 220, + 825, + 336, + 839 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/83a8d659290df93065f3d0a08b3edc2f16723f2b5fb98b0b9732e1bd20667dbb.jpg", + "image_caption": [ + "Fig. 2: Overview of our proposed method (OctMAE). Given an input RGB Image $\\mathbf{I}$ , depth map $\\mathbf{D}$ , and a foreground mask $\\mathbf{M}$ , the octree feature $\\mathbf{F}$ is obtained by unprojecting an image feature encoded by a pre-trained image encoder $\\mathbf{E}$ . The octree feature is then encoded by the Octree encoder and downsampled to the Level of Detail (LoD) of 5. The notation LoD- $h$ indicates that each axis of the voxel grid has resolution of $2^h$ . The latent 3D MAE takes the encoded Octree feature $\\mathbf{F}$ as input and its output feature is concatenated with the occlusion mask tokens $\\mathbf{T}$ . Next, the masked decoded feature $\\mathbf{F}_{ML}$ is computed by sparse 3D MAE decoder. Finally, the Octree decoder predicts a completed surface at LoD-9." + ], + "image_footnote": [], + "bbox": [ + 225, + 147, + 782, + 247 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 215, + 412, + 375, + 428 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Humans can instantly imagine complete shapes of multiple novel objects in a cluttered scene via advanced geometric and semantic reasoning. This ability is also essential for robots if they are to effectively perform useful tasks in the real world [26, 27, 46, 60]. In this work, we propose a method that can quickly and accurately complete a wide number of objects in diverse real-world scenes.", + "bbox": [ + 212, + 446, + 785, + 521 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Prior works [31, 34, 36, 43, 47, 71] have achieved phenomenal progress in scene and object shape completion from a single RGB-D image. Object-centric methods [17, 25] in particular can achieve very high reconstruction accuracy by relying on category-specific shape priors. However, when deployed on entire scenes such methods require bespoke instance detection/segmentation models, and often perform test-time optimization which is time consuming and would hinder real-time deployment on a robot. Moreover, existing methods are typically limited to a small set of categories. 
Thus, zero-shot multi-object scene completion remains a challenging and open problem that has seen little success to date. This is in stark contrast to the sudden increase in powerful algorithms for 2D computer vision tasks such as object detection [33, 75] and image segmentation [35, 70]. We attribute this progress to a great extent to the availability of large-scale datasets [8, 54] coupled with neural architectures and learning objectives [22, 50, 53, 57] that can effectively exploit the highly structured data occurring in the natural world [20].", + "bbox": [ + 212, + 522, + 787, + 750 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Taking inspiration from the latest developments in the 2D domain, we propose a scene completion algorithm at the scene level that generalizes across a large number of shapes and that only supposes an RGB-D image and foreground mask as input. Our method consists of Octree masked autoencoders (OctMAE) — a hybrid architecture of Octree U-Net and a latent 3D MAE (Figure 2). Although a recent work, VoxFormer [34], also extends MAE architecture to 3D", + "bbox": [ + 212, + 750, + 787, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "using deformable 3D attention and shows great improvement in semantic scene completion tasks, its memory utilization is still prohibitive to handle a higher resolution voxel grid. We address this issue by integrating 3D MAE into the latent space of Octree U-Net. Our experiments show that the latent 3D MAE is the key to global structure understanding and leads to strong performance and generalization across all datasets. Moreover, we find that the choice of a masking strategy and 3D positional embeddings is crucial to achieve better performance. We provide extensive ablations to verify that our 3D latent MAE design is effective.", + "bbox": [ + 212, + 146, + 787, + 282 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our second contribution consists of the creation of a novel synthetic dataset to counteract the lack of large-scale and diverse 3D datasets. The dataset contains 12K 3D models of hand-held objects from Objaverse [12] and GSO [16] datasets (Figure 3). We utilize the dataset to conduct a comprehensive evaluation of our method as well as other baselines and show that our method scales and achieves better results. Finally, we perform zero-shot evaluations on synthetic as well as real datasets and show that a combination of 3D diversity coupled with an appropriate architecture is key to generalizable scene completion in the wild.", + "bbox": [ + 212, + 282, + 787, + 402 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our contributions can be summarized as follows:", + "bbox": [ + 238, + 402, + 591, + 417 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We present a novel network architecture, Octree Masked Autoencoders (OctMAE), a hybrid architecture of Octree U-Net and latent 3D MAE, which achieves state-of-the-art results on all the benchmarks. 
Further, we introduce a simple occlusion masking strategy with full attention, which boosts the performance of a latent 3D MAE.", + "bbox": [ + 225, + 426, + 785, + 501 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We create the first large-scale and diverse synthetic dataset using Objaverse [12] dataset for zero-shot multi-object scene completion, and provide a wide range of benchmark and analysis.", + "bbox": [ + 225, + 502, + 785, + 547 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 568, + 387, + 584 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3D reconstruction and completion. Reconstructing indoor scenes and objects from a noisy point cloud has been widely explored [1, 2, 4, 6, 9, 10, 23, 24, 34, 40, 42, 47, 48, 56, 65, 66]. Several works [4, 5, 43, 44, 47, 58, 60, 63, 71, 72, 74, 76] tackle more challenging shape completion tasks where large parts of a target is missing. While these methods achieve impressive results, they do not explicitly consider semantic information, which may limit their capability for accurate shape completion. Recent methods [31, 32, 34, 76] in Semantic Scene Completion (SSC) leverage semantic information via an RGB image. Nevertheless, the number of target categories is quite limited, restricting its utility for a broad range of applications in the real world. In addition, many methods adopt occupancy or SDF as an output representation, which necessitates post-processing such as the marching cubes [41] and sphere tracing to extract an explicit surface. As another direction, GeNVS [3], Zero-1-to-3 [39], and 3DiM [64] explore single-view 3D reconstruction via novel view synthesis. However, expensive test-time optimization is required. Recently, One-2-3-45 [38] and MCC [66] attempt to improve the generation speed, however, their runtime for multi-object scenes is still far from near", + "bbox": [ + 212, + 598, + 787, + 843 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "real-time. Further, since these methods are object-centric, multiple objects in a single scene are not handled well due to the complicated geometric reasoning especially caused by occlusions by other objects. In this paper, we propose a general and near real-time framework for multi-object 3D scene completion in the wild using only an RGB-D image and foreground mask without expensive test-time optimization.", + "bbox": [ + 217, + 146, + 785, + 234 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Implicit 3D representations. Recently, various types of implicit 3D representation have become popular in 3D reconstruction and completion tasks. Early works [18,42,47] use a one-dimensional latent feature to represent a 3D shape as occupancy and SDF fields. Several works [31,48,58] employ voxels, groundplanes, and triplanes, demonstrating that the retention of geometric information using 3D CNNs enhances performance. Although the voxel representation typically performs well among these three, its cubic memory and computational costs make increasing resolution challenging. 
To mitigate this issue, sparse voxels [6,21,37,55,62] treat a 3D representation as a sparse set of structured points using the octree and hash table and perform convolutions only on non-empty voxels and its neighbors. Further, the high-resolution sparse voxel enables a direct prediction of a target surface. As another direction, [1,67,77] leverage point cloud. Nonetheless, an unstructured set of points can be non-uniformly distributed in the 3D space and requires running the k-NN algorithm at every operation. This aspect often renders point-based methods less appealing compared to the sparse voxel representation. Therefore, our method adopts an octree-based representation used in [62] for efficient training and direct surface prediction.", + "bbox": [ + 217, + 253, + 785, + 523 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Masked Autoencoders (MAE). Inspired by the success of ViTs [15, 73] and masked language modeling [14, 51], [22] demonstrates that masked autoencoders (MAE) with ViTs can learn powerful image representation by reconstructing masked images. To improve the efficiency and performance of MAE, ConvMAE [19] proposes a hybrid approach that performs masked autoencoding at the latent space of 2D CNN-based autoencoder network. Recently, VoxFormer [34] extends the MAE design to 3D for semantic scene completion using 3D deformable attention, and shows great improvement over previous works. However, it is not trivial to scale up the MAE architecture to a higher resolution voxel due to memory constraints. Motivated by ConvMAE [19] and OCNN [62], we propose an efficient OctMAE architecture using sparse 3D operations.", + "bbox": [ + 217, + 540, + 785, + 705 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Proposed Method", + "text_level": 1, + "bbox": [ + 217, + 731, + 424, + 748 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given an RGB image $\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}$ , depth map $\\mathbf{D} \\in \\mathbb{R}^{H \\times W}$ , and foreground mask $\\mathbf{M} \\in \\mathbb{R}^{H \\times W}$ containing all objects of interest, we aim to predict their complete 3D shapes quickly and accurately. Our framework first encodes an RGB image $\\mathbf{I}$ with a pre-trained image encoder $E$ such as ResNeXt [69] and then lifts the resulting features up to 3D space using a depth map $\\mathbf{D}$ and foreground mask", + "bbox": [ + 217, + 762, + 785, + 839 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "$\\mathbf{M}$ to acquire 3D point cloud features $\\mathbf{F} \\in \\mathbb{R}^{N \\times D}$ and its locations $\\mathbf{P} \\in \\mathbb{R}^{N \\times 3}$ (Section 3.1). Second, we convert the 3D features into an octree using the same algorithm used in [63] and pass it to OctMAE to predict a surface at each LoD (Section 3.2). The diagram of our method is visualized in Figure 2.", + "bbox": [ + 215, + 145, + 785, + 205 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Octree Feature Aggregation", + "text_level": 1, + "bbox": [ + 215, + 227, + 491, + 242 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We adopt ResNeXt-50 [69] as an image encoder to obtain dense and robust image features $\\mathbf{W} = E(\\mathbf{I}) \\in \\mathbb{R}^{H \\times W \\times D}$ from an RGB image. 
The image features are unprojected into the 3D space using a depth image with $(\\mathbf{F}, \\mathbf{P}) = \\pi^{-1}(\\mathbf{W}, \\mathbf{D}, \\mathbf{M}, \\mathbf{K})$ where a point cloud feature and its corresponding coordinates are represented as $\\mathbf{F}$ and $\\mathbf{P}$ . $\\pi^{-1}$ unprojects the image features $\\mathbf{W}$ to the camera coordinate system using a depth map $\\mathbf{D}$ , foreground mask $\\mathbf{M}$ , and an intrinsic matrix $\\mathbf{K}$ . Next, we define an octree at the level of detail (LoD) of 9 $(512^3)$ with the grid and cell size being $1.28\\mathrm{m}$ and $2.5\\mathrm{mm}$ respectively, and use the point features to populate the voxel grid, averaging features when multiple points fall into the same voxel. Here, LoD- $h$ simply represents resolution of an octree. For instance, the voxel grid of LoD-9 has the maximum dimension of $2^9 = 512$ for each axis. An octree is represented as a set of 8 octants with features at non-empty regions; therefore, it is more memory-efficient than a dense voxel grid. The octree is centered around the z-axis in the camera coordinate system, and its front plane is aligned with the nearest point to the camera along with the z-axis.", + "bbox": [ + 215, + 252, + 787, + 492 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 OctMAE: Octree Masked Autoencoders", + "text_level": 1, + "bbox": [ + 215, + 515, + 591, + 529 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We design OctMAE which leverages Octree U-Net [62] and latent 3D MAE to achieve accurate and efficient zero-shot multi-object scene completion. Octree U-Net consists of multiple sparse 3D convolutional layers. While the Octree U-Net architecture can efficiently encode octree features to low resolution, only local regions are considered at each operation. On the contrary, 3D MAE can capture global object information which helps predict globally consistent 3D shapes. However, unlike an image, a dense voxel grid contains a prohibitive number of tokens even in the latent space, which makes it challenging to adopt an MAE architecture directly for 3D tasks. Recently, ConvMAE [19] proposed to leverage the advantages of both CNNs and MAE in 2D for efficient training. Nevertheless, a naïve extension of ConvMAE [19] to 3D also leads to prohibitive computational and memory costs. To address this issue, we propose a novel occlusion masking strategy and adopt 3D rotary embeddings, enabling efficient masked autoencoding in the latent space.", + "bbox": [ + 215, + 537, + 787, + 750 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Encoder architecture. The encoder of Octree U-Net [63] takes the octree feature at LoD-9 and computes a latent octree feature $\\mathbf{F}_L\\in \\mathbb{R}^{N'\\times D'}$ at LoD-5 where $N^{\\prime}$ is the number of non-empty voxels and $D^{\\prime}$ is the latent feature dimension. To incorporate global symmetric and object scale information which gives more cues about completed shapes, we use $S$ layers of the full self-attention", + "bbox": [ + 215, + 763, + 785, + 839 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Transformer blocks in the latent 3D MAE encoder. 
Since $N'$ is typically the order of the hundreds to thousands, we resort to memory-efficient attention algorithms [11, 49]. Ideally, learnable relative positional encodings [77] are used to deal with the different alignments of point cloud features inside an octree. However, it requires computing the one-to-one relative positional encoding $N' \\times N'$ times, which largely slows down the training and makes it computationally impractical. Therefore, we use RoPE [59] to encode 3D axial information between voxels. Concretely, we embed position information with RoPE at every multi-head attention layer as", + "bbox": [ + 212, + 146, + 787, + 282 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {R} _ {i} = \\operatorname {d i a g} \\left(R (p _ {i} ^ {x}), R (p _ {i} ^ {y}), R (p _ {i} ^ {z}), \\mathbf {I}\\right) \\in \\mathbb {R} ^ {D ^ {\\prime} \\times D ^ {\\prime}}, \\quad \\mathbf {f} _ {i} ^ {\\prime} = \\mathbf {R} _ {i} \\mathbf {f} _ {i}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 297, + 295, + 785, + 314 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\\mathbf{f}_i\\in \\mathbb{R}^{D'}$ , and $\\mathbf{p}_i\\in \\mathbb{R}^3$ is $i$ -th octree feature and coordinates. $R:\\mathbb{R}\\to \\mathbb{R}^{\\left[D' / 3\\right]\\times \\left[D' / 3\\right]}$ is a function to generate a rotation matrix given normalized 1D axial coordinate. The detailed derivation of $\\mathbf{R}$ can be found in the supplemental.", + "bbox": [ + 212, + 316, + 785, + 368 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Occlusion masking. Next, we concatenate mask tokens $\\mathbf{T} \\in \\mathbb{R}^{M \\times D'}$ to the encoded latent octree feature where $M$ is the number of the mask tokens. Note that each of the mask tokens has identical learnable parameters. The key question is how to place them in 3D space. Although previous methods [34] put mask tokens inside all the empty cells of a dense voxel grid, it is unlikely that visible regions extending from the camera to the input depth are occupied unless the error of a depth map is enormous. Further, this dense masking strategy forces us to use a local attention mechanism such as deformable 3D attention used in VoxFormer [34], due to the highly expensive memory and computational cost. To address this issue, we introduce an occlusion masking strategy in which the mask tokens $\\mathbf{T}$ are placed only into occluded voxels. Concretely, we perform depth testing on every voxel within a voxel grid to determine if they are positioned behind objects. Mask tokens are assigned to their respective locations only after passing this test. The proposed occlusion masking strategy and efficient positional encoding enable our latent 3D MAE (Figure 4) to leverage full attention instead of local attention.", + "bbox": [ + 212, + 377, + 787, + 619 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Decoder architecture. The masked octree feature is given to the latent 3D MAE decoder which consists of $S$ layers of the full cross-attention Transformer blocks with RoPE [59] to learn global reasoning including occluded regions. Finally, the decoder of Octree U-Net takes the mixed latent octree feature of the Transformer decoder $\\mathbf{F}_{ML} \\in \\mathbb{R}^{(N' + M) \\times D'}$ as input and upsamples features with skip connections. The decoded feature is passed to a two-layer MLP which estimates an occupancy at LoD- $h$ . 
In addition, normals and SDF values are predicted only at the final LoD. To avoid unnecessary computation, we prune grid cells predicted as empty with a threshold of 0.5 at every LoD, following [63].", + "bbox": [ + 212, + 630, + 787, + 772 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 Training Details and Loss Functions", + "text_level": 1, + "bbox": [ + 214, + 787, + 558, + 803 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We use all surface points extracted through OpenVDB [45] during training. The loss function is defined as", + "bbox": [ + 212, + 809, + 785, + 839 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 127 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/254af92f7fe9ab95a825ffe3eb45f3b6340a6ccf883620be06f5bcf4aa03be21.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 148, + 330, + 214 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f1c799d803e65d3317e1324084f35b6c34721ff2fce61c499e62e183cb85b7ee.jpg", + "image_caption": [ + "Fig. 3: Example images of our synthetic dataset. We use BlenderProc [13] to acquire high-quality images under various and realistic illumination conditions." + ], + "image_footnote": [], + "bbox": [ + 218, + 214, + 330, + 279 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5b52f458e72f4c1db3b0873d06af7516389fcc807343932638f2300d8ef5194a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 331, + 148, + 442, + 214 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5d896ae26d44b8cf5da77e0d79f588c56a38f3659384b04219491c65b2024994.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 331, + 214, + 442, + 279 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e558342093bcec274aa64a6636e511ec08d0df4361c422088eb1976edc65f090.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 444, + 148, + 558, + 214 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/564af605784a04fef25391624c35a2e6c9c1b5c5e50f40bee122a3548ba40320.jpg", + "image_caption": [ + "Fig.4: Overall architecture of Latent 3D MAE." + ], + "image_footnote": [], + "bbox": [ + 444, + 214, + 557, + 279 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fa58f1958eafc4f6616d405d511fb81f1b2cfe13ed3cbe664681f7a35857559a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 581, + 146, + 772, + 295 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/197d974c5037848ab4a87c0c874da7b9f5e35a060286f54322466b4af72ed71f.jpg", + "table_caption": [ + "Table 1: Dataset comparisons. We create the first large-scale and diverse 3D scene completion dataset for novel multiple objects using a subset of 3D models from Objverse dataset [12]. The number of categories is reported by using the LVIS categories, and $R^{\\mathrm{LVIS}}(\\%)$ represents a ratio of the number of the categories covered by the dataset. $\\dagger$ denotes the number of objects with actual size." + ], + "table_footnote": [], + "table_body": "
Dataset | Type | 3D Models | # Frames | # Objs | # Cats | R^LVIS (%)
YCB-V [68] | Real | 133K | 21 | 5 | 0.4
HB [28] | Real | 17K | 33 | 13 | 1.0
HOPE [36] | Real | 2K | 28 | 3 | 0.3
CO3D V2 [52] | Real | 6M | 40K | 50 | 4.2
MegaPose [30] | Synthetic | 1M | 1K† | 17 | 0.9
Ours | Synthetic | 1M | 12K | 601 | 50.0
", + "bbox": [ + 250, + 436, + 753, + 541 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = \\mathcal {L} _ {n r m} + \\mathcal {L} _ {S D F} + \\sum_ {h \\in \\{5, 6, 7, 8, 9 \\}} \\mathcal {L} _ {o c c} ^ {h}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 362, + 584, + 784, + 619 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\mathcal{L}_{nrm}$ and $\\mathcal{L}_{SDF}$ measure the averaged L2 norm of normals and SDF values. $\\mathcal{L}_{occ}^{h}$ computes a mean of binary cross entropy function of each LoD-h.", + "bbox": [ + 214, + 626, + 784, + 657 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4 Dataset", + "text_level": 1, + "bbox": [ + 215, + 683, + 328, + 699 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As shown in Table 1, existing datasets are limited in the diversity of object categories. Although the CO3D V2 dataset [52] contains data for $40\\mathrm{k}$ objects, because the provided ground-truth 3D shapes are reconstructed from unposed multi-view images, they tend to be highly noisy and parts of the object missing due to lack of visibility. To tackle this problem, we leverage Objaverse [12], a large-scale 1M 3D object dataset containing 46k objects with LVIS category annotations. To focus on completion of hand-held objects, we select 601 categories and ensure that the largest dimension of the objects in each category", + "bbox": [ + 212, + 718, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "falls approximately within the range of $4\\mathrm{cm}$ to $40~\\mathrm{cm}$ . In addition, for high-quality rendering, we omit objects that lack textures, contain more than 10,000 vertices, or are articulated. To increase the number of objects, we add objects from Google Scanned Objects (GSO) [16], which results in 12,655 objects in total. We render 1M images of 25,000 scenes using physics-based rendering and positioning via BlenderProc [13] to simulate realistic scenes (Figure 3). For each image, we randomly choose a camera view such that at least one object is within the camera frame. We also generate 1,000 images using 250 withheld objects for evaluation.", + "bbox": [ + 218, + 146, + 785, + 280 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5 Experimental Results", + "text_level": 1, + "bbox": [ + 218, + 303, + 459, + 320 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Implementation details. We train all the models for 2 epochs using the Adam [29] optimizer with a learning rate of 0.002 and batch size of 16 on NVIDIA A100. Note that the models are only trained on the synthetic dataset introduced in Section 4. In addition, the number of Transformer blocks $K$ , the feature dimension $D$ , and $D'$ are set to 3, 32, and 192 respectively. We use a pretrained model of ResNeXt-50 [69] as an image encoder for all the experiments. The ground-truth occupancy, SDF and normals are computed from meshes with OpenVDB [45]. During training, we dilate ground-truth masks using the radius randomly selected from 1, 3 and 5 pixels to deal with the segmentation error around the object edges. 
During evaluation, we use ground-truth masks provided by the datasets.", + "bbox": [ + 218, + 332, + 785, + 497 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluation metrics. We report Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) to evaluate the quality of a completed surface. For surface-based methods, we use a predicted surface directly for evaluation. For the methods that predict occupancy, the marching cubes algorithm [41] is used to extract a surface and uniformly sample 100,000 points from its surface such that the number of points are roughly equal to the surface prediction methods. We use mm as a unit for all the reported metrics.", + "bbox": [ + 218, + 510, + 785, + 614 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluation datasets. We evaluate the baselines and our model on one synthetic and three real-world datasets. For the synthetic dataset, we render 1,000 images using textured 3D scans from Objaverse [12], following the same procedure described in Section 4. We randomly choose 3 to 5 objects per image from the withheld objects for Objavese dataset. Since these 3D scans are relatively more complex than the objects seen in the real-world datasets we use, they can provide a good scene completion quality estimate for complex objects. For the real-world dataset, we use the YCB-Video [68], HOPE [36] and HomebrewedDB (HB) [28] datasets. YCB-Video consists of 21 everyday objects with diverse shapes. HOPE contains 28 simple household objects with mostly rectangular and cylindrical everyday shapes, and the images are captured in various lighting conditions in indoor scenes using a RealSense D415 RGBD camera. HB includes 33 objects (e.g., toy, household, and industrial objects). Their images are taken by PrimeSense Carmine in lab-like environments.", + "bbox": [ + 218, + 628, + 785, + 838 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/252febbb465c6223c806dd2fc46f299b5270278fe2d940eee2f0f66705d79776.jpg", + "table_caption": [ + "Table 2: Quantitative evaluation of multi-object scene completion on Ours, YCB-Video [68], HOPE [36], and HomebrewedDB [28] datasets. Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) are reported. Chamfer distance is reported in the unit of mm." + ], + "table_footnote": [], + "table_body": "
Method | 3D Rep. | Synthetic | Real
Ours | YCB-Video [68] | HB [28] | HOPE [36]
CD↓ | F1↑ | NC↑ | CD↓ | F1↑ | NC↑ | CD↓ | F1↑ | NC↑ | CD↓ | F1↑
VoxFormer [34] | Dense | 44.54 | 0.382 | 0.653 | 30.32 | 0.438 | 0.641 | 34.84 | 0.366 | 0.608 | 47.75 | 0.323
ShapeFormer [71] | Dense | 39.50 | 0.401 | 0.593 | 38.21 | 0.385 | 0.588 | 40.93 | 0.328 | 0.594 | 39.54 | 0.306
MCC [66] | Implicit | 43.37 | 0.459 | 0.700 | 35.85 | 0.289 | 0.608 | 19.59 | 0.371 | 0.655 | 17.53 | 0.357
ConvONet [48] | Dense | 23.68 | 0.541 | 0.710 | 32.87 | 0.458 | 0.649 | 26.71 | 0.504 | 0.643 | 20.95 | 0.581
POCO [1] | Implicit | 21.11 | 0.634 | 0.753 | 15.45 | 0.587 | 0.699 | 13.17 | 0.624 | 0.709 | 13.20 | 0.602
AICNet [31] | Dense | 15.64 | 0.573 | 0.741 | 12.26 | 0.545 | 0.702 | 11.87 | 0.557 | 0.674 | 11.40 | 0.564
Minkowski [6] | Sparse | 11.47 | 0.746 | 0.802 | 8.04 | 0.761 | 0.717 | 8.81 | 0.728 | 0.719 | 8.56 | 0.734
OCNN [63] | Sparse | 9.05 | 0.782 | 0.828 | 7.10 | 0.778 | 0.771 | 7.02 | 0.792 | 0.736 | 8.05 | 0.742
Ours | Sparse | 6.48 | 0.839 | 0.848 | 6.40 | 0.800 | 0.785 | 6.14 | 0.819 | 0.770 | 6.97 | 0.803
", + "bbox": [ + 217, + 213, + 779, + 354 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Baselines. As discussed in Secs. 1 and 2, multi-object scene completion from a single RGB-D image is relatively not explored due to the lack of large-scale and diverse multi-object scene completion datasets. We carefully choose baseline architectures that can support this task with simple or no adaptation. We focus on three primary method types from related fields. Firstly, we select Semantic Scene Completion (SSC) methods [6,31,34,63] that do not heavily rely on domain or categorical knowledge of indoor or outdoor scenes. Secondly, we opt for object shape completion methods [6,63,66,71] that can be extended to multi-object scene completion without an architectural modification and prohibitive memory utilization. Thirdly, we consider voxel or octree-based 3D reconstruction methods [1,6,48,63] that predict a complete and plausible shape using noisy and sparse point cloud data. For dense voxel-based (e.g., AICNet [31], ConvONet [48] and VoxFormer [34]) and sparse voxel-based methods (e.g., MinkowskiNet [6], OCNN [63], and our method), we use LoD-6 and LoD-9 as an input resolution respectively. All the experiments are conducted using the original implementation provided by the authors, with few simple modifications to adapt for multi-object scene completion and a fair comparison. For instance, we extend the baselines that take the point cloud as input by concatenating the image features to the point cloud features. For occupancy-based methods, though their output voxel grid resolution is LoD-6, we use trilinear interpolation to predict occupancy at LoD-7 [48]. For MinkowskiNet [6] and OCNN [62,63], we use the U-Net architecture with the depth of 5 (LoD-9 to LoD-4). We discuss further details about the baseline architectures, their modifications, and hyperparameters in the supplemental.", + "bbox": [ + 215, + 383, + 785, + 744 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "5.1 Quantitative Results", + "text_level": 1, + "bbox": [ + 215, + 768, + 431, + 782 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 2 shows that our method outperforms the baselines on all the metrics and datasets. Although our model is only trained on synthetic data, it demonstrates strong generalizability to real-world datasets. We also remark that our", + "bbox": [ + 215, + 794, + 784, + 839 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 449, + 114, + 730, + 128 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/887145c701b281f92ee2305fb5d24e69271eeddd8ed79bbc38f3be5c3c9d2950.jpg", + "table_caption": [ + "Table 3: Ablation Study of positional encoding on our synthetic dataset. We compare w/o positional encoding, conditional positional encoding (CPE) [7], absolute positional encoding (APE) used in [34], and RoPE [59]." + ], + "table_footnote": [], + "table_body": "
Type | CD↓ | F1↑ | NC↑
w/o | 11.32 | 0.778 | 0.808
CPE [7] | 9.91 | 0.785 | 0.811
APE [34] | 8.61 | 0.782 | 0.825
RPE [61] | 7.81 | 0.804 | 0.830
RoPE [59] | 6.48 | 0.839 | 0.848
", + "bbox": [ + 223, + 253, + 447, + 334 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/426131afeeb9886244a184844b6acdb7fef1b0da8e26ce2e6e065cc40c501059.jpg", + "table_caption": [ + "Table 4: Ablation study on 3D attention algorithms. The scores are reported on the HOPE dataset [36]." + ], + "table_footnote": [], + "table_body": "
Method | Occ. Masking | CD↓ | F1↑ | Runtime↓
3D DSA [34] | 12.14 | 0.703 | 93.3
Neighbor. Attn. [77] | 9.26 | 0.727 | 130.8
Octree Attn. [61] | 7.99 | 0.752 | 116.4
Neighbor. Attn. [77] | 8.81 | 0.759 | 111.9
Octree Attn. [61] | 7.54 | 0.772 | 105.3
Full + Self Attn. | 7.21 | 0.785 | 86.2
Full + Cross Attn. | 6.97 | 0.803 | 85.1
", + "bbox": [ + 467, + 208, + 787, + 327 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "method exhibits robustness to the noise characteristics present in depth data captured by typical RGB-D cameras despite being trained on noise-free depth data in simulation. The comparisons show that hierarchical structures and the latent 3D MAE are key to predicting 3D shapes of unseen objects more accurately than the baselines. Unlike our method, VoxFormer [34] uses an MAE with 3D deformable attention where only 8 neighbors of the reference points at the finest resolution are considered. Figure 8 also demonstrates that methods using a dense voxel grid or implicit representation fail to generalize to novel shapes. This implies that capturing a right choice of a network architecture is crucial to learn generalizable shape priors for zero-shot multi-object scene completion. Our method has the similar U-Net architecture used in MinkowskiNet [6] and OCNN [62] except we use the latent 3D MAE at LoD-5 instead of making the network deeper. This indicates that the latent 3D MAE can better approximate the shape distribution of the training dataset by leveraging an attention mechanism to capture global 3D contexts. Table 7 also confirms that our method achieves the best scene completion quality by measuring Chamfer distance in visible and occluded regions separately.", + "bbox": [ + 212, + 366, + 787, + 625 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Positional encoding. As shown in Table 3, we explore the effect of RoPE [59] on the validation set of our synthetic dataset. The first row shows that all the metrics significantly drop if positional encoding is not used. In addition, we test CPE [7], APE [34], and RPE [61] and obtain slightly better scores. CPE [7] is typically more effective than APE in tasks such as 3D instance/semantic segmentation and object detection where a complete 3D point cloud is given. However, this result highlights the challenge of capturing position information from mask tokens which initially have the identical parameters. Our method employs RoPE [59] for relative positional embedding. One of the important aspect of RoPE [59] is that it does not have any learnable parameters. Despite this, it demonstrates superior performance compared to other approaches. Although RoPE was originally proposed in the domain of natural language processing, our experiment reveals its effectiveness in multi-object 3D scene completion.", + "bbox": [ + 212, + 643, + 787, + 840 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/29e0228df3d27f9ad1e8916a51ec60fe8c86c4ecea0e0e620f5065a74a09ed47.jpg", + "table_caption": [ + "Table 5: Ablation study of the number of MAE layers on our synthetic dataset." + ], + "table_footnote": [], + "table_body": "
# Layers | CD↓ | F1↑ | NC↑ | Runtime↓
1 | 9.01 | 0.784 | 0.828 | 76.4
3 | 6.48 | 0.839 | 0.848 | 85.1
5 | 5.75 | 0.850 | 0.855 | 96.2
", + "bbox": [ + 222, + 198, + 450, + 253 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/a30d7ee977575cf7389c310d0b82d1abab46d061538fb0724b8bcaf9e3a4513a.jpg", + "table_caption": [ + "Table 6: Ablation study of U-Net architectures on HomebrewedDB dataset [28]." + ], + "table_footnote": [], + "table_body": "
Architecture | CD↓ | F1↑ | NC↑ | Runtime↓
Mink. U-Net [6] | 7.26 | 0.788 | 0.743 | 83.8
OctFormer [61] | 7.45 | 0.756 | 0.728 | 114.4
Octree U-Net [62] | 6.14 | 0.819 | 0.770 | 85.1
", + "bbox": [ + 478, + 191, + 764, + 247 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/60dae0055bd50bd5f9122d56f5d75c4038ddb261b544602d008fbfa795232636.jpg", + "table_caption": [ + "Table 7: Comparisons of the runtime (ms). For reference, we also show Chamfer distance of visible $\\mathrm{CD}_{vis}$ and occluded $\\mathrm{CD}_{occ}$ regions on our synthetic dataset." + ], + "table_footnote": [], + "table_body": "
Method | 3D Rep. | Resolution | CD_vis↓ | CD_occ↓ | CD↓ | Runtime↓
VoxFormer [34] | Dense | 128³ | 18.25 | 66.32 | 44.54 | 79.5
ShapeFormer [71] | Dense | 128³ | 14.61 | 63.33 | 39.50 | 1.8 × 10⁴
MCC [66] | Implicit | 128³ | 15.39 | 63.41 | 44.37 | 9.1 × 10³
ConvONet [48] | Dense | 128³ | 17.09 | 34.09 | 23.68 | 48.4
POCO [1] | Implicit | 128³ | 10.37 | 31.55 | 21.11 | 758.8
AICNet [31] | Dense | 128³ | 9.98 | 21.43 | 15.64 | 24.2
Minkowski [6] | Sparse | 512³ | 7.12 | 15.44 | 11.47 | 78.5
OCNN [63] | Sparse | 512³ | 3.87 | 12.16 | 9.05 | 80.1
Ours | Sparse | 512³ | 3.29 | 9.40 | 6.48 | 85.1
", + "bbox": [ + 222, + 311, + 779, + 460 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "3D Attention algorithms. Table 4 reveals that occlusion masking yields better runtime and metrics than dense masking. Furthermore, our experiments suggest that full attention and Octree attention, both characterized by their wider receptive fields, are more effective compared to local attention algorithms such as 3D deformable self-attention (3D DSA) [34] and neighborhood attention [77].", + "bbox": [ + 215, + 492, + 782, + 568 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Number of layers in 3D latent MAE. We further explore the design of 3D latent MAE in Table 5. Increasing the number of layers in 3D latent MAE improves the scene completion quality while making the runtime slower. Consequently, we select 3 layers for a good trade-off between the accuracy and runtime.", + "bbox": [ + 215, + 590, + 784, + 650 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "U-Net architectures. In Table 6, we investigate U-Net architectures. The key difference of Minkowski U-Net [6] is the use of a sparse tensor as an underlying data structure instead of an octree, which gives a slightly better performance than Octree U-Net [62]. OctFormer [61] proposes an octree-based window attention mechanism using the 3D Z-order curve to support a much larger kernel size than Octree U-Net. In general, a wider range of an effective receptive field helps achieve better performance. Nonetheless, OctFormer achieves a chamfer distance and F-1 score of 7.45 and 0.756, which is worse than Octree U-Net by 1.31 and 0.063 respectively. This indicates that the OctFormer's attention mechanism is less effective compared to an Octree U-Net architecture especially in the presence of latent 3D MAE, playing the similar role in the latent space.", + "bbox": [ + 215, + 672, + 785, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 116, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/0ea8058eb04e3267fa43da6898c2601022d7752e72059663961b789bb480b805.jpg", + "image_caption": [ + "Fig.5: Scaling of the metrics with the number of objects in a training dataset. We conduct the experiments by changing the ratio of the number of objects to $1\\%$ , $5\\%$ , $10\\%$ , $20\\%$ , $40\\%$ , $60\\%$ , $80\\%$ , and $100\\%$ ." 
+ ], + "image_footnote": [], + "bbox": [ + 220, + 142, + 488, + 277 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/2474a71bdc02fc05ba02541364e6fc70303c573314fefffa795613c970d1b654.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 524, + 154, + 602, + 220 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/5dc8a3b8237d0a94607e2e369779e89117c75037db80efafaf8eae870110fd99.jpg", + "image_caption": [ + "Ground-Truth" + ], + "image_footnote": [], + "bbox": [ + 526, + 220, + 607, + 276 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/34f195f4418fedd1ea815c7123ea9b562466bae0fa5d8af4243f0a7b47d5751f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 604, + 156, + 679, + 220 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/2f7e43ce1d961af861f0345d12344dca8a5858d6d94221da8fb1c74fa5252874.jpg", + "image_caption": [ + "OCNN" + ], + "image_footnote": [], + "bbox": [ + 609, + 220, + 691, + 277 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/e507df4a2516c9cd1bd320da647d1ffef8ac43b5b3c53450392434553f7b50fb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 692, + 156, + 764, + 219 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/dd06103c1c8d8a83d0c8f8614d04606ed9db886a1cab32487d72f0d5f67cd520.jpg", + "image_caption": [ + "Ours", + "Fig.6: Qualitative comparison of OCNN [62] and our method. Our proposed latent 3D MAE helps predict globally consistent scene completion." + ], + "image_footnote": [], + "bbox": [ + 697, + 220, + 776, + 275 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Runtime analysis. Table 7 shows the runtime performance of the baselines and our method. For a fair comparison, we run inference over the 50 samples of the HOPE dataset and report the average time. For occupancy-based methods, we predict occupancy on object surfaces and occluded regions. Due to the memory-intensive nature of MCC [1]'s Transformer architecture, we run inference multiple times with the maximum chunk size of 10,000 points. Our experiments demonstrate that implicit 3D representations used in POCO [1] and MCC [66] become slower when the voxel grid resolution is higher. Further, an autoregressive Transformer adopted in ShapeFormer [71] greatly increases the runtime. Conversely, the methods which leverage sparse voxel grids (e.g., MinkowskiNet [6], OCNN [63], and Ours) achieve much faster runtime thanks to efficient sparse 3D convolutions, and hierarchical pruning on predicted surfaces. Our method offers runtimes comparable to the fastest method, while implementing attention operations over the scene via latent 3D MAE, and achieving superior reconstruction.", + "bbox": [ + 214, + 402, + 787, + 616 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Dataset scale analysis. To assess the importance of the large-scale 3D scene completion datasets, we train our model on splits of increasing sizes which contain $1\\%$ , $5\\%$ , $10\\%$ , $20\\%$ , $40\\%$ , $60\\%$ , $80\\%$ , and $100\\%$ of the total number of the objects in our dataset. We report metrics on the test split of our dataset. Section 5.1 shows that all the metrics have a strong correlation with respect to the number of objects. This could imply that the model benefits significantly from increased data diversity and volume, enhancing its ability to understand and complete 3D shapes. 
We believe that this analysis is crucial for understanding the relationship between data quantity and model performance.", + "bbox": [ + 214, + 628, + 787, + 765 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "5.2 Qualitative Results", + "text_level": 1, + "bbox": [ + 215, + 785, + 421, + 801 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Figure 7 shows the qualitative results of our method on both of the synthetic and real-world datasets from three different views. Unlike the synthetic dataset,", + "bbox": [ + 214, + 809, + 785, + 839 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/68a83c039992abb04eb3d78f674a28b9fccb0af667d968a9aa73bdb28b91f872.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 148, + 285, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/01ce50157c218acb1301490615ef0f915e231d06642dd389ebbd82c82e0c256c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 218, + 190, + 284, + 229 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/036481e9cd8effdea48a8e68f7cfce44b696d491db46f578024ae6a3a4d5d2f1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 217, + 234, + 285, + 275 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/d858976c0c5a57048b024d4b7768de082890337c20df17bba6ad4ae2752a03e7.jpg", + "image_caption": [ + "RGB-D Image" + ], + "image_footnote": [], + "bbox": [ + 217, + 275, + 282, + 313 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/c7200a6ea34c51c535371f63cda2879f9c517d2d04c2230d90062b23965c2403.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 289, + 148, + 361, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/ca9869829b6773ae55b535704b2635b1ebf296b0fdec90f60156f634afaddbc0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 289, + 190, + 361, + 229 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/518b9993aa8ebe75527c8f9494e8a80d0eafcd45d0fe6003bb987e57380f3e04.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 290, + 237, + 359, + 273 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/6c51f99cb97c2014c23d02232645bb2e6ffb539c68c5be0b49b497bdd331d377.jpg", + "image_caption": [ + "View 1" + ], + "image_footnote": [], + "bbox": [ + 290, + 277, + 359, + 314 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/3d72b948acf1587fd75a76a692b9b25db82fd06749d399a8b7e427e1af9e7c19.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 366, + 148, + 441, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/a3b4ca23998ce6ef68a44ab08bd540b72255d22d63213855003866575149a511.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 366, + 190, + 441, + 227 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/2223b06631d7580a1c5de299cd65002bc095b84453dbf4f6e404955be2dce6d0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 374, + 238, + 434, + 273 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/d55eee2806625fa3fbeb381cd4bb873b4824c3ae7e186fc9ffb5db988a8fff80.jpg", + "image_caption": [ + "View 2" + ], + "image_footnote": [], + 
"bbox": [ + 372, + 277, + 433, + 314 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/51e65fde5c3cbb19d75241cfaec188b9f0d6c894f3bb7c9cd91bd36b2e84d9b4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 444, + 151, + 508, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/03f4fdf2e8f710addeeb3cc5c4924f80f2e3f08d89f0017e6554d39dcde3990c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 444, + 190, + 508, + 227 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/97116a6ba9463180012fefc9680cae23c72aefcb257436054407d5fb3f49e5f7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 444, + 238, + 504, + 273 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/5e8d8d94e6bee9c77129d0820f2ef01e17968d3179342ace79692f4f3c0cdd02.jpg", + "image_caption": [ + "View 3" + ], + "image_footnote": [], + "bbox": [ + 444, + 277, + 503, + 314 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/4b7f2cead40e9f68e4f060e5ff915c70b00a747aec0c0927cb960e492057ace5.jpg", + "image_caption": [], + "image_footnote": [ + "#" + ], + "bbox": [ + 519, + 152, + 584, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/b692ea033b9a7e91ee6399906c496aadf61fd1eb036753aef081ffcb48f04493.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 190, + 584, + 229 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/aa803c1b8322aa9397dc510957594aea7516abd3055ba4f28eed97fb8089efb6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 237, + 584, + 273 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/5896208645b37073aab3efe314c3f400ac8e6b036116d03a0fdbb97651b7af0b.jpg", + "image_caption": [ + "RGB-D Image" + ], + "image_footnote": [], + "bbox": [ + 519, + 277, + 583, + 314 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/fad6cc8d5f1c5700fd622d588e2b614926cbe42cb1b623c8d13174cb42d62cbd.jpg", + "image_caption": [], + "image_footnote": [ + "." 
+ ], + "bbox": [ + 602, + 155, + 651, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/0b0ad484f0577e985976a8e235b65620392b5c6d7eacfd0440e964eddc7a4a7e.jpg", + "image_caption": [], + "image_footnote": [ + "" + ], + "bbox": [ + 602, + 191, + 653, + 229 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/8953cd4824f494189fcacdb318a2ff28443f067bcb257672b7e705f311e2e278.jpg", + "image_caption": [], + "image_footnote": [ + "" + ], + "bbox": [ + 593, + 247, + 660, + 266 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/6ed528fcf3154bfea2afaaf73f86329db5ee40bba03c583f6908170f3eee21d8.jpg", + "image_caption": [ + "View 1" + ], + "image_footnote": [], + "bbox": [ + 593, + 287, + 661, + 309 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/9deb22b8382a47f89f61fc48da8a6635bc4ccfbf8b0fc1884869c86b6bfb9d1a.jpg", + "image_caption": [], + "image_footnote": [ + "" + ], + "bbox": [ + 668, + 156, + 710, + 189 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/0aeb28b1ba28af13db394d47e3a4dfc0205173c011559e05bbdf63953a621248.jpg", + "image_caption": [], + "image_footnote": [ + "" + ], + "bbox": [ + 671, + 191, + 710, + 227 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/86a2388cacd2e8b790626a9c2f4067cd7010e98bb4b617ccea2dc1eaa1bb8da2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 666, + 234, + 774, + 275 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/0333414c5c0ff42decbae0b3ed611615caa88e20cf5585a7bde0a9db8ae22618.jpg", + "image_caption": [ + "View 2" + ], + "image_footnote": [], + "bbox": [ + 666, + 277, + 748, + 313 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "", + "bbox": [ + 777, + 165, + 789, + 166 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "", + "bbox": [ + 777, + 165, + 789, + 166 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "tation methods to obtain instance-level completed shapes. Third, our method does not handle uncertainty of surface prediction explicitly. In future work, we plan to extend our method to model uncertainty to improve the scene completion quality and diversity.", + "bbox": [ + 212, + 146, + 787, + 207 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/46bb813236a96528212b701e87d023be165550cc2ab3ec2f57f5f3c7ac365784.jpg", + "image_caption": [ + "Fig. 8: Comparisons on HomebrewedDB dataset (Top), and HOPE (Bottom) datasets. For better visibility, we show the generated and ground truth shapes. The top and bottom rows show an image from near camera and back views respectively. Compared to the other methods, our method predicts accurate and consistent shapes on a challenging scene completion task for novel objects." + ], + "image_footnote": [], + "bbox": [ + 225, + 253, + 787, + 712 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "S. 
Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgment", + "text_level": 1, + "bbox": [ + 217, + 143, + 382, + 162 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We thank Zubair Irshad and Jenny Nan for valuable feedback and comments.", + "bbox": [ + 215, + 176, + 782, + 191 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This research is supported by Toyota Research Institute.", + "bbox": [ + 215, + 193, + 624, + 208 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 233, + 321, + 250 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Boulch, A., Marlet, R.: POCO: Point Convolution for Surface Reconstruction. In: CVPR (2022)", + "2. Bozic, A., Palafox, P., Thies, J., Dai, A., Nießner, M.: TransformerFusion: Monocular rgb scene reconstruction using transformers. In: NeurIPS (2021)", + "3. Chan, E.R., Nagano, K., Chan, M.A., Bergman, A.W., Park, J.J., Levy, A., Aittala, M., Mello, S.D., Karras, T., Wetzstein, G.: GeNVS: Generative novel view synthesis with 3D-aware diffusion models. In: CoRR (2023)", + "4. Chen, H.X., Huang, J., Mu, T.J., Hu, S.M.: CIRCLE: Convolutional Implicit Reconstruction And Completion For Large-Scale Indoor Scene. In: ECCV (2022)", + "5. Cheng, Y.C., Lee, H.Y., Tulyakov, S., Schwing, A.G., Gui, L.Y.: SDFusion: Multimodal 3d shape completion, reconstruction, and generation. In: CVPR (2023)", + "6. Choy, C., Gwak, J., Savarese, S.: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In: CVPR (2019)", + "7. Chu, X., Tian, Z., Zhang, B., Wang, X., Shen, C.: Conditional Positional Encodings for Vision Transformers. In: ICLR (2023)", + "8. Computer, T.: RedPajama: an Open Dataset for Training Large Language Models (2023)", + "9. Dai, A., Diller, C., Nießner, M.: SG-NN: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. In: CVPR (2020)", + "10. Dai, A., Ritchie, D., Bokeloh, M., Reed, S., Sturm, J., Nießner, M.: ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In: CVPR (2018)", + "1. Dao, T.: FlashAttention-2: Faster attention with better parallelism and work partitioning (2023)", + "2. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A Universe of Annotated 3D Objects. CVPR (2022)", + "3. Denninger, M., Winkelbauer, D., Sundermeyer, M., Boerdijk, W., Knauer, M., Strobl, K.H., Humt, M., Triebel, R.: BlenderProc2: A Procedural Pipeline for Photorealistic Rendering. Journal of Open Source Software (2023)", + "4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: NAACL (2019)", + "5. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR (2021)", + "6. Downs, L., Francis, A., Koenig, N., Kinman, B., Hickman, R., Reymann, K., McHugh, T.B., Vanhoucke, V.: Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items. In: ICRA (2022)", + "7. Duan, Y., Zhu, H., Wang, H., Yi, L., Nevatia, R., Guibas, L.J.: Curriculum deepsdf. 
In: ECCV (2020)" + ], + "bbox": [ + 225, + 266, + 784, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "18. Dupont, E., Kim, H., Eslami, S.M.A., Rezende, D.J., Rosenbaum, D.: From data to functa: Your data point is a function and you can treat it like one. In: ICML (2022)", + "19. Gao, P., Ma, T., Li, H., Dai, J., Qiao, Y.: ConvMAE: Masked Convolution Meets Masked Autoencoders. NeurIPS (2022)", + "20. Goldblum, M., Finzi, M., Rowan, K., Wilson, A.G.: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. CoRR (2023)", + "21. Graham, B., Engelcke, M., van der Maaten, L.: 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. CVPR (2018)", + "22. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: CVPR (2022)", + "23. Hou, J., Dai, A., Nießner, M.: RevealNet: Seeing Behind Objects in RGB-D Scans. In: CVPR (2020)", + "24. Huang, J., Gojcic, Z., Atzmon, M., Litany, O., Fidler, S., Williams, F.: Neural Kernel Surface Reconstruction. In: CVPR (2023)", + "25. Irshad, M.Z., Zakharov, S., Ambrus, R., Kollar, T., Kira, Z., Gaidon, A.: Shapo: Implicit representations for multi-object shape, appearance, and pose optimization. In: ECCV (2022)", + "26. Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.: Real-time Perception meets Reactive Motion Generation. RA-L (2018)", + "27. Karaman, S., Frazzoli, E.: Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Rob. Res. (2011)", + "28. Kaskman, R., Zakharov, S., Shugurov, I., Ilic, S.: HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects. ICCVW (2019)", + "29. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015)", + "30. Labbé, Y., Manuelli, L., Mousavian, A., Tyree, S., Birchfield, S., Tremblay, J., Carpentier, J., Aubry, M., Fox, D., Sivic, J.: MegaPose: 6d pose estimation of novel objects via render & compare. In: CoRL (2022)", + "31. Li, J., Han, K., Wang, P., Liu, Y., Yuan, X.: Anisotropic Convolutional Networks for 3D Semantic Scene Completion. In: CVPR (2020)", + "32. Li, J., Liu, Y., Gong, D., Shi, Q., Yuan, X., Zhao, C., Reid, I.: RGBD Based Dimensional Decomposition Residual Network for 3D Semantic Scene Completion. In: CVPR. pp. 7693-7702 (June 2019)", + "33. Li*, L.H., Zhang*, P., Zhang*, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.N., Chang, K.W., Gao, J.: Grounded language-image pretraining. In: CVPR (2022)", + "34. Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J.M., Fidler, S., Feng, C., Anandkumar, A.: VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. In: CVPR (2023)", + "35. Liang, F., Wu, B., Dai, X., Li, K., Zhao, Y., Zhang, H., Zhang, P., Vajda, P., Marculescu, D.: Open-vocabulary semantic segmentation with mask-adapted clip. In: CVPR (2023)", + "36. Lin, Y., Tremblay, J., Tyree, S., Vela, P.A., Birchfield, S.: Multi-view Fusion for Multi-level Robotic Scene Understanding. In: IROS (2021)", + "37. Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural Sparse Voxel Fields. NeurIPS (2020)", + "38. 
Liu, M., Xu, C., Jin, H., Chen, L., Xu, Z., Su, H., et al.: One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. NeurIPS (2023)" + ], + "bbox": [ + 215, + 146, + 785, + 840 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "S. Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "39. Liu, R., Wu, R., Hoorick, B.V., Tokmakov, P., Zakharov, S., Vondrick, C.: Zero-1-to-3: Zero-shot One Image to 3D Object. In: CVPR (2023)", + "40. Liu, Z., Feng, Y., Black, M.J., Nowrouzezahrai, D., Paull, L., Liu, W.: MeshDiffusion: Score-based Generative 3D Mesh Modeling. In: ICLR (2023)", + "41. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. SIGGRAPH (1987)", + "42. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy Networks: Learning 3D Reconstruction in Function Space. In: CVPR (2019)", + "43. Mittal, P., Cheng, Y.C., Singh, M., Tulsiani, S.: AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. In: CVPR (2022)", + "44. Mohammadi, S.S., Duarte, N.F., Dimou, D., Wang, Y., Taiana, M., Morerio, P., Dehban, A., Moreno, P., Bernardino, A., Del Bue, A., Santos-Victor, J.: 3DSGrasp: 3D Shape-Completion for Robotic Grasp. In: ICRA (2023)", + "45. Museth, K.: VDB: High-resolution sparse volumes with dynamic topology (2013)", + "46. Okumura, K., Défago, X.: Quick Multi-Robot Motion Planning by Combining Sampling and Search. In: IJCAI (2023)", + "47. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In: CVPR (2019)", + "48. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional Occupancy Networks. In: ECCV (2020)", + "49. Rabe, M.N., Staats, C.: Self-attention Does Not Need $O(n^{2})$ Memory (2021)", + "50. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021)", + "51. Radford, A., Narasimhan, K.: Improving Language Understanding by Generative Pre-Training (2018)", + "52. Reizenstein, J., Shapovalov, R., Henzler, P., Sbordone, L., Labatut, P., Novotny, D.: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction. In: ICCV (2021)", + "53. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-Resolution Image Synthesis with Latent Diffusion Models (2021)", + "54. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortzman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS (2022)", + "55. Shao, T., Yang, Y., Weng, Y., Hou, Q., Zhou, K.: H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis. TVCG (2020)", + "56. Shen, T., Gao, J., Yin, K., Liu, M.Y., Fidler, S.: Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. In: NeurIPS (2021)", + "57. Shi, Z., Zhou, X., Qiu, X., Zhu, X.: Improving image captioning with better use of captions. CoRR (2020)", + "58. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic Scene Completion from a Single Depth Image. CVPR (2017)", + "59. 
Su, J., Lu, Y., Pan, S., Wen, B., Liu, Y.: RoFormer: Enhanced Transformer with Rotary Position Embedding. In: ICLR (2020)", + "60. Varley, J., DeChant, C., Richardson, A., Ruales, J., Allen, P.: Shape completion enabled robotic grasping. In: IROS (2017)", + "61. Wang, P.S.: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH (2023)", + "62. Wang, P.S., Liu, Y., Guo, Y.X., Sun, C.Y., Tong, X.: O-CNN: Octree-Based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH (2017)" + ], + "bbox": [ + 212, + 146, + 784, + 839 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-Shot Multi-Object Scene Completion", + "bbox": [ + 447, + 114, + 730, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "63. Wang, P.S., Liu, Y., Tong, X.: Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion. In: CVPRW (2020)", + "64. Watson, D., Chan, W., Martin-Brualla, R., Ho, J., Tagliasacchi, A., Norouzi, M.: Novel View Synthesis with Diffusion Models. CoRR (2022)", + "65. Williams, F., Gojcic, Z., Khamis, S., Zorin, D., Bruna, J., Fidler, S., Litany, O.: Neural Fields as Learnable Kernels for 3D Reconstruction. In: CVPR (2022)", + "66. Wu, C.Y., Johnson, J., Malik, J., Feichtenhofer, C., Gkioxari, G.: Multiview Compressive Coding for 3D Reconstruction. In: CVPR (2023)", + "67. Wu, X., Lao, Y., Jiang, L., Liu, X., Zhao, H.: Point transformer V2: Grouped Vector Attention and Partition-based Pooling. In: NeurIPS (2022)", + "68. Xiang, Y., Schmidt, T., Narayanan, V., Fox, D.: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018)", + "69. Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K.: Aggregated Residual Transformations for Deep Neural Networks. CVPR (2017)", + "70. Xu, J., Liu, S., Vahdat, A., Byeon, W., Wang, X., De Mello, S.: ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models. CVPR (2023)", + "71. Yan, X., Lin, L., Mitra, N.J., Lischinski, D., Cohen-Or, D., Huang, H.: Shape-Former: Transformer-based Shape Completion via Sparse Representation. In: CVPR (2022)", + "72. Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., Zhou, J.: PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers. In: ICCV (2021)", + "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. CVPR (2022)", + "74. Zhang, D., Choi, C., Park, I., Kim, Y.M.: Probabilistic Implicit Scene Completion. In: ICLR (2022)", + "75. Zhang, H., Zhang, P., Hu, X., Chen, Y.C., Li, L.H., Dai, X., Wang, L., Yuan, L., Hwang, J.N., Gao, J.: GLIPv2: Unifying Localization and Vision-Language Understanding. CoRR (2022)", + "76. Zhang, P., Liu, W., Lei, Y., Lu, H., Yang, X.: Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion. In: ICCV (2019)", + "77. Zhao, H., Jiang, L., Jia, J., Torr, P.H., Koltun, V.: Point transformer. In: ICCV (2021)", + "78. Zhu, Y., Tian, Y., Mexatas, D., Dollar, P.: Semantic Amodal Segmentation. In: CVPR (2017)" + ], + "bbox": [ + 215, + 146, + 784, + 632 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "S. 
Iwase et al.", + "bbox": [ + 271, + 114, + 364, + 127 + ], + "page_idx": 17 + } +] \ No newline at end of file diff --git a/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_model.json b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7e1f018b433461ce0683a58fa7c371a38f3d8a58 --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_model.json @@ -0,0 +1,3096 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.26, + 0.142, + 0.743, + 0.164 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "text", + "bbox": [ + 0.268, + 0.19, + 0.735, + 0.222 + ], + "angle": 0, + "content": "Shun Iwase\\(^{1,2}\\), Katherine Liu\\(^{2}\\), Vitor Guizilini\\(^{2}\\), Adrien Gaidon\\(^{2}\\), Kris Kitani\\(^{1,\\star}\\), Rares Ambrus\\(^{2,\\star}\\), and Sergey Zakharov\\(^{2,\\star}\\)" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.233, + 0.601, + 0.248 + ], + "angle": 0, + "content": "1 Carnegie Mellon University" + }, + { + "type": "text", + "bbox": [ + 0.407, + 0.248, + 0.595, + 0.261 + ], + "angle": 0, + "content": "\\(^{2}\\) Toyota Research Institute" + }, + { + "type": "list", + "bbox": [ + 0.402, + 0.233, + 0.601, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.296, + 0.306, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.308, + 0.308, + 0.321, + 0.341 + ], + "angle": 0, + "content": "Fronr View" + }, + { + "type": "image", + "bbox": [ + 0.324, + 0.297, + 0.414, + 0.353 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.298, + 0.501, + 0.352 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.296, + 0.593, + 0.349 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.595, + 0.3, + 0.688, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.694, + 0.298, + 0.78, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.355, + 0.306, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.228, + 0.409, + 0.297, + 0.42 + ], + "angle": 0, + "content": "RGB-D Image" + }, + { + "type": "image", + "bbox": [ + 0.31, + 0.363, + 0.414, + 0.407 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.362, + 0.501, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.355, + 0.593, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.595, + 0.365, + 0.605, + 0.396 + ], + "angle": 0, + "content": "Bae" + }, + { + "type": "image", + "bbox": [ + 0.614, + 0.354, + 0.679, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.6, + 0.41, + 0.696, + 0.42 + ], + "angle": 0, + "content": "Completed 3D Shape" + }, + { + "type": "image", + "bbox": [ + 0.702, + 0.353, + 0.768, + 0.408 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.712, + 0.409, + 0.776, + 0.42 + ], + "angle": 0, + "content": "Ground-Truth" + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.437, + 0.788, + 0.507 + ], + "angle": 0, + "content": "Fig. 
1: Given an RGB-D image and the foreground mask of multiple objects not seen during training, our method predicts their complete 3D shapes quickly and accurately, including occluded areas. (Left) Synthetic image results. (Right) Zero-shot generalization to a real-world image of household objects with noisy depth data. Our 3D results are rotated with respect to the input to highlight completions in occluded regions." + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.536, + 0.741, + 0.786 + ], + "angle": 0, + "content": "Abstract. We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image. Despite notable advancements in single-object 3D shape completion, high-quality reconstructions in highly cluttered real-world multi-object scenes remains a challenge. To address this issue, we propose OctMAE, an architecture that leverages an Octree U-Net and a latent 3D MAE to achieve high-quality and near real-time multi-object scene completion through both local and global geometric reasoning. Because a naive 3D MAE can be computationally intractable and memory intensive even in the latent space, we introduce a novel occlusion masking strategy and adopt 3D rotary embeddings, which significantly improve the runtime and scene completion quality. To generalize to a wide range of objects in diverse scenes, we create a large-scale photorealistic dataset, featuring a diverse set of 12K 3D object models from the Objaverse dataset that are rendered in multi-object scenes with physics-based positioning. Our method outperforms the current state-of-the-art on both synthetic and real-world datasets and demonstrates a strong zero-shot capability. https://sh8.io/#/oct_mae" + }, + { + "type": "page_footnote", + "bbox": [ + 0.221, + 0.826, + 0.338, + 0.84 + ], + "angle": 0, + "content": "* Equal advising." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.148, + 0.784, + 0.248 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.258, + 0.788, + 0.385 + ], + "angle": 0, + "content": "Fig. 2: Overview of our proposed method (OctMAE). Given an input RGB Image \\(\\mathbf{I}\\), depth map \\(\\mathbf{D}\\), and a foreground mask \\(\\mathbf{M}\\), the octree feature \\(\\mathbf{F}\\) is obtained by unprojecting an image feature encoded by a pre-trained image encoder \\(\\mathbf{E}\\). The octree feature is then encoded by the Octree encoder and downsampled to the Level of Detail (LoD) of 5. The notation LoD-\\(h\\) indicates that each axis of the voxel grid has resolution of \\(2^h\\). The latent 3D MAE takes the encoded Octree feature \\(\\mathbf{F}\\) as input and its output feature is concatenated with the occlusion mask tokens \\(\\mathbf{T}\\). Next, the masked decoded feature \\(\\mathbf{F}_{ML}\\) is computed by sparse 3D MAE decoder. Finally, the Octree decoder predicts a completed surface at LoD-9." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.413, + 0.376, + 0.429 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.447, + 0.787, + 0.522 + ], + "angle": 0, + "content": "Humans can instantly imagine complete shapes of multiple novel objects in a cluttered scene via advanced geometric and semantic reasoning. This ability is also essential for robots if they are to effectively perform useful tasks in the real world [26, 27, 46, 60]. In this work, we propose a method that can quickly and accurately complete a wide number of objects in diverse real-world scenes." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.523, + 0.788, + 0.75 + ], + "angle": 0, + "content": "Prior works [31, 34, 36, 43, 47, 71] have achieved phenomenal progress in scene and object shape completion from a single RGB-D image. Object-centric methods [17, 25] in particular can achieve very high reconstruction accuracy by relying on category-specific shape priors. However, when deployed on entire scenes such methods require bespoke instance detection/segmentation models, and often perform test-time optimization which is time consuming and would hinder real-time deployment on a robot. Moreover, existing methods are typically limited to a small set of categories. Thus, zero-shot multi-object scene completion remains a challenging and open problem that has seen little success to date. This is in stark contrast to the sudden increase in powerful algorithms for 2D computer vision tasks such as object detection [33, 75] and image segmentation [35, 70]. We attribute this progress to a great extent to the availability of large-scale datasets [8, 54] coupled with neural architectures and learning objectives [22, 50, 53, 57] that can effectively exploit the highly structured data occurring in the natural world [20]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Taking inspiration from the latest developments in the 2D domain, we propose a scene completion algorithm at the scene level that generalizes across a large number of shapes and that only supposes an RGB-D image and foreground mask as input. Our method consists of Octree masked autoencoders (OctMAE) — a hybrid architecture of Octree U-Net and a latent 3D MAE (Figure 2). Although a recent work, VoxFormer [34], also extends MAE architecture to 3D" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.448, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.283 + ], + "angle": 0, + "content": "using deformable 3D attention and shows great improvement in semantic scene completion tasks, its memory utilization is still prohibitive to handle a higher resolution voxel grid. We address this issue by integrating 3D MAE into the latent space of Octree U-Net. Our experiments show that the latent 3D MAE is the key to global structure understanding and leads to strong performance and generalization across all datasets. Moreover, we find that the choice of a masking strategy and 3D positional embeddings is crucial to achieve better performance. We provide extensive ablations to verify that our 3D latent MAE design is effective." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.283, + 0.788, + 0.404 + ], + "angle": 0, + "content": "Our second contribution consists of the creation of a novel synthetic dataset to counteract the lack of large-scale and diverse 3D datasets. The dataset contains 12K 3D models of hand-held objects from Objaverse [12] and GSO [16] datasets (Figure 3). We utilize the dataset to conduct a comprehensive evaluation of our method as well as other baselines and show that our method scales and achieves better results. Finally, we perform zero-shot evaluations on synthetic as well as real datasets and show that a combination of 3D diversity coupled with an appropriate architecture is key to generalizable scene completion in the wild." + }, + { + "type": "text", + "bbox": [ + 0.24, + 0.404, + 0.593, + 0.418 + ], + "angle": 0, + "content": "Our contributions can be summarized as follows:" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.428, + 0.787, + 0.502 + ], + "angle": 0, + "content": "- We present a novel network architecture, Octree Masked Autoencoders (OctMAE), a hybrid architecture of Octree U-Net and latent 3D MAE, which achieves state-of-the-art results on all the benchmarks. Further, we introduce a simple occlusion masking strategy with full attention, which boosts the performance of a latent 3D MAE." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.503, + 0.787, + 0.548 + ], + "angle": 0, + "content": "- We create the first large-scale and diverse synthetic dataset using Objaverse [12] dataset for zero-shot multi-object scene completion, and provide a wide range of benchmark and analysis." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.569, + 0.388, + 0.585 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.599, + 0.789, + 0.844 + ], + "angle": 0, + "content": "3D reconstruction and completion. Reconstructing indoor scenes and objects from a noisy point cloud has been widely explored [1, 2, 4, 6, 9, 10, 23, 24, 34, 40, 42, 47, 48, 56, 65, 66]. Several works [4, 5, 43, 44, 47, 58, 60, 63, 71, 72, 74, 76] tackle more challenging shape completion tasks where large parts of a target is missing. While these methods achieve impressive results, they do not explicitly consider semantic information, which may limit their capability for accurate shape completion. Recent methods [31, 32, 34, 76] in Semantic Scene Completion (SSC) leverage semantic information via an RGB image. Nevertheless, the number of target categories is quite limited, restricting its utility for a broad range of applications in the real world. In addition, many methods adopt occupancy or SDF as an output representation, which necessitates post-processing such as the marching cubes [41] and sphere tracing to extract an explicit surface. As another direction, GeNVS [3], Zero-1-to-3 [39], and 3DiM [64] explore single-view 3D reconstruction via novel view synthesis. However, expensive test-time optimization is required. Recently, One-2-3-45 [38] and MCC [66] attempt to improve the generation speed, however, their runtime for multi-object scenes is still far from near" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.116, + 0.365, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.147, + 0.786, + 0.236 + ], + "angle": 0, + "content": "real-time. 
Further, since these methods are object-centric, multiple objects in a single scene are not handled well due to the complicated geometric reasoning especially caused by occlusions by other objects. In this paper, we propose a general and near real-time framework for multi-object 3D scene completion in the wild using only an RGB-D image and foreground mask without expensive test-time optimization." + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.254, + 0.786, + 0.525 + ], + "angle": 0, + "content": "Implicit 3D representations. Recently, various types of implicit 3D representation have become popular in 3D reconstruction and completion tasks. Early works [18,42,47] use a one-dimensional latent feature to represent a 3D shape as occupancy and SDF fields. Several works [31,48,58] employ voxels, groundplanes, and triplanes, demonstrating that the retention of geometric information using 3D CNNs enhances performance. Although the voxel representation typically performs well among these three, its cubic memory and computational costs make increasing resolution challenging. To mitigate this issue, sparse voxels [6,21,37,55,62] treat a 3D representation as a sparse set of structured points using the octree and hash table and perform convolutions only on non-empty voxels and its neighbors. Further, the high-resolution sparse voxel enables a direct prediction of a target surface. As another direction, [1,67,77] leverage point cloud. Nonetheless, an unstructured set of points can be non-uniformly distributed in the 3D space and requires running the k-NN algorithm at every operation. This aspect often renders point-based methods less appealing compared to the sparse voxel representation. Therefore, our method adopts an octree-based representation used in [62] for efficient training and direct surface prediction." + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.541, + 0.786, + 0.707 + ], + "angle": 0, + "content": "Masked Autoencoders (MAE). Inspired by the success of ViTs [15, 73] and masked language modeling [14, 51], [22] demonstrates that masked autoencoders (MAE) with ViTs can learn powerful image representation by reconstructing masked images. To improve the efficiency and performance of MAE, ConvMAE [19] proposes a hybrid approach that performs masked autoencoding at the latent space of 2D CNN-based autoencoder network. Recently, VoxFormer [34] extends the MAE design to 3D for semantic scene completion using 3D deformable attention, and shows great improvement over previous works. However, it is not trivial to scale up the MAE architecture to a higher resolution voxel due to memory constraints. Motivated by ConvMAE [19] and OCNN [62], we propose an efficient OctMAE architecture using sparse 3D operations." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.732, + 0.425, + 0.749 + ], + "angle": 0, + "content": "3 Proposed Method" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.763, + 0.786, + 0.84 + ], + "angle": 0, + "content": "Given an RGB image \\(\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}\\), depth map \\(\\mathbf{D} \\in \\mathbb{R}^{H \\times W}\\), and foreground mask \\(\\mathbf{M} \\in \\mathbb{R}^{H \\times W}\\) containing all objects of interest, we aim to predict their complete 3D shapes quickly and accurately. 
Our framework first encodes an RGB image \\(\\mathbf{I}\\) with a pre-trained image encoder \\(E\\) such as ResNeXt [69] and then lifts the resulting features up to 3D space using a depth map \\(\\mathbf{D}\\) and foreground mask" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.448, + 0.115, + 0.732, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.146, + 0.787, + 0.207 + ], + "angle": 0, + "content": "\\(\\mathbf{M}\\) to acquire 3D point cloud features \\(\\mathbf{F} \\in \\mathbb{R}^{N \\times D}\\) and its locations \\(\\mathbf{P} \\in \\mathbb{R}^{N \\times 3}\\) (Section 3.1). Second, we convert the 3D features into an octree using the same algorithm used in [63] and pass it to OctMAE to predict a surface at each LoD (Section 3.2). The diagram of our method is visualized in Figure 2." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.228, + 0.492, + 0.243 + ], + "angle": 0, + "content": "3.1 Octree Feature Aggregation" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.253, + 0.788, + 0.493 + ], + "angle": 0, + "content": "We adopt ResNeXt-50 [69] as an image encoder to obtain dense and robust image features \\(\\mathbf{W} = E(\\mathbf{I}) \\in \\mathbb{R}^{H \\times W \\times D}\\) from an RGB image. The image features are unprojected into the 3D space using a depth image with \\((\\mathbf{F}, \\mathbf{P}) = \\pi^{-1}(\\mathbf{W}, \\mathbf{D}, \\mathbf{M}, \\mathbf{K})\\) where a point cloud feature and its corresponding coordinates are represented as \\(\\mathbf{F}\\) and \\(\\mathbf{P}\\). \\(\\pi^{-1}\\) unprojects the image features \\(\\mathbf{W}\\) to the camera coordinate system using a depth map \\(\\mathbf{D}\\), foreground mask \\(\\mathbf{M}\\), and an intrinsic matrix \\(\\mathbf{K}\\). Next, we define an octree at the level of detail (LoD) of 9 \\((512^3)\\) with the grid and cell size being \\(1.28\\mathrm{m}\\) and \\(2.5\\mathrm{mm}\\) respectively, and use the point features to populate the voxel grid, averaging features when multiple points fall into the same voxel. Here, LoD-\\(h\\) simply represents resolution of an octree. For instance, the voxel grid of LoD-9 has the maximum dimension of \\(2^9 = 512\\) for each axis. An octree is represented as a set of 8 octants with features at non-empty regions; therefore, it is more memory-efficient than a dense voxel grid. The octree is centered around the z-axis in the camera coordinate system, and its front plane is aligned with the nearest point to the camera along with the z-axis." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.516, + 0.592, + 0.53 + ], + "angle": 0, + "content": "3.2 OctMAE: Octree Masked Autoencoders" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.539, + 0.788, + 0.751 + ], + "angle": 0, + "content": "We design OctMAE which leverages Octree U-Net [62] and latent 3D MAE to achieve accurate and efficient zero-shot multi-object scene completion. Octree U-Net consists of multiple sparse 3D convolutional layers. While the Octree U-Net architecture can efficiently encode octree features to low resolution, only local regions are considered at each operation. On the contrary, 3D MAE can capture global object information which helps predict globally consistent 3D shapes. 
However, unlike an image, a dense voxel grid contains a prohibitive number of tokens even in the latent space, which makes it challenging to adopt an MAE architecture directly for 3D tasks. Recently, ConvMAE [19] proposed to leverage the advantages of both CNNs and MAE in 2D for efficient training. Nevertheless, a naïve extension of ConvMAE [19] to 3D also leads to prohibitive computational and memory costs. To address this issue, we propose a novel occlusion masking strategy and adopt 3D rotary embeddings, enabling efficient masked autoencoding in the latent space." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.765, + 0.787, + 0.84 + ], + "angle": 0, + "content": "Encoder architecture. The encoder of Octree U-Net [63] takes the octree feature at LoD-9 and computes a latent octree feature \\(\\mathbf{F}_L\\in \\mathbb{R}^{N'\\times D'}\\) at LoD-5 where \\(N^{\\prime}\\) is the number of non-empty voxels and \\(D^{\\prime}\\) is the latent feature dimension. To incorporate global symmetric and object scale information which gives more cues about completed shapes, we use \\(S\\) layers of the full self-attention" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.128 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.284 + ], + "angle": 0, + "content": "Transformer blocks in the latent 3D MAE encoder. Since \\( N' \\) is typically the order of the hundreds to thousands, we resort to memory-efficient attention algorithms [11, 49]. Ideally, learnable relative positional encodings [77] are used to deal with the different alignments of point cloud features inside an octree. However, it requires computing the one-to-one relative positional encoding \\( N' \\times N' \\) times, which largely slows down the training and makes it computationally impractical. Therefore, we use RoPE [59] to encode 3D axial information between voxels. Concretely, we embed position information with RoPE at every multi-head attention layer as" + }, + { + "type": "equation", + "bbox": [ + 0.299, + 0.296, + 0.786, + 0.315 + ], + "angle": 0, + "content": "\\[\n\\mathbf {R} _ {i} = \\operatorname {d i a g} \\left(R (p _ {i} ^ {x}), R (p _ {i} ^ {y}), R (p _ {i} ^ {z}), \\mathbf {I}\\right) \\in \\mathbb {R} ^ {D ^ {\\prime} \\times D ^ {\\prime}}, \\quad \\mathbf {f} _ {i} ^ {\\prime} = \\mathbf {R} _ {i} \\mathbf {f} _ {i}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.318, + 0.787, + 0.369 + ], + "angle": 0, + "content": "where \\(\\mathbf{f}_i\\in \\mathbb{R}^{D'}\\), and \\(\\mathbf{p}_i\\in \\mathbb{R}^3\\) is \\(i\\)-th octree feature and coordinates. \\(R:\\mathbb{R}\\to \\mathbb{R}^{\\left[D' / 3\\right]\\times \\left[D' / 3\\right]}\\) is a function to generate a rotation matrix given normalized 1D axial coordinate. The detailed derivation of \\(\\mathbf{R}\\) can be found in the supplemental." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.378, + 0.789, + 0.621 + ], + "angle": 0, + "content": "Occlusion masking. Next, we concatenate mask tokens \\(\\mathbf{T} \\in \\mathbb{R}^{M \\times D'}\\) to the encoded latent octree feature where \\(M\\) is the number of the mask tokens. Note that each of the mask tokens has identical learnable parameters. The key question is how to place them in 3D space. 
Although previous methods [34] put mask tokens inside all the empty cells of a dense voxel grid, it is unlikely that visible regions extending from the camera to the input depth are occupied unless the error of a depth map is enormous. Further, this dense masking strategy forces us to use a local attention mechanism such as deformable 3D attention used in VoxFormer [34], due to the highly expensive memory and computational cost. To address this issue, we introduce an occlusion masking strategy in which the mask tokens \\(\\mathbf{T}\\) are placed only into occluded voxels. Concretely, we perform depth testing on every voxel within a voxel grid to determine if they are positioned behind objects. Mask tokens are assigned to their respective locations only after passing this test. The proposed occlusion masking strategy and efficient positional encoding enable our latent 3D MAE (Figure 4) to leverage full attention instead of local attention." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.631, + 0.789, + 0.773 + ], + "angle": 0, + "content": "Decoder architecture. The masked octree feature is given to the latent 3D MAE decoder which consists of \\(S\\) layers of the full cross-attention Transformer blocks with RoPE [59] to learn global reasoning including occluded regions. Finally, the decoder of Octree U-Net takes the mixed latent octree feature of the Transformer decoder \\(\\mathbf{F}_{ML} \\in \\mathbb{R}^{(N' + M) \\times D'}\\) as input and upsamples features with skip connections. The decoded feature is passed to a two-layer MLP which estimates an occupancy at LoD-\\(h\\). In addition, normals and SDF values are predicted only at the final LoD. To avoid unnecessary computation, we prune grid cells predicted as empty with a threshold of 0.5 at every LoD, following [63]." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.789, + 0.56, + 0.804 + ], + "angle": 0, + "content": "3.3 Training Details and Loss Functions" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.84 + ], + "angle": 0, + "content": "We use all surface points extracted through OpenVDB [45] during training. The loss function is defined as" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.449, + 0.115, + 0.731, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.15, + 0.331, + 0.215 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.215, + 0.331, + 0.28 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.332, + 0.15, + 0.444, + 0.215 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.332, + 0.215, + 0.444, + 0.28 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.446, + 0.15, + 0.559, + 0.215 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.446, + 0.215, + 0.558, + 0.28 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.291, + 0.56, + 0.332 + ], + "angle": 0, + "content": "Fig. 3: Example images of our synthetic dataset. We use BlenderProc [13] to acquire high-quality images under various and realistic illumination conditions." 
+ }, + { + "type": "image", + "bbox": [ + 0.582, + 0.147, + 0.774, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.31, + 0.78, + 0.338 + ], + "angle": 0, + "content": "Fig.4: Overall architecture of Latent 3D MAE." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.355, + 0.788, + 0.425 + ], + "angle": 0, + "content": "Table 1: Dataset comparisons. We create the first large-scale and diverse 3D scene completion dataset for novel multiple objects using a subset of 3D models from the Objaverse dataset [12]. The number of categories is reported using the LVIS categories, and \( R^{\mathrm{LVIS}}(\%) \) represents the ratio of LVIS categories covered by the dataset. \( \dagger \) denotes the number of objects with actual size." + }, + { + "type": "table", + "bbox": [ + 0.25, + 0.437, + 0.754, + 0.542 + ], + "angle": 0, + "content": "
Dataset | Type | # Frames | # Objs (3D Models) | # Cats | R^LVIS (%)
YCB-V [68] | Real | 133K | 21 | 5 | 0.4
HB [28] | Real | 17K | 33 | 13 | 1.0
HOPE [36] | Real | 2K | 28 | 3 | 0.3
CO3D V2 [52] | Real | 6M | 40K | 50 | 4.2
MegaPose [30] | Synthetic | 1M | 1K† | 17 | 0.9
Ours | Synthetic | 1M | 12K | 601 | 50.0
" + }, + { + "type": "equation", + "bbox": [ + 0.364, + 0.585, + 0.785, + 0.62 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\mathcal {L} _ {n r m} + \\mathcal {L} _ {S D F} + \\sum_ {h \\in \\{5, 6, 7, 8, 9 \\}} \\mathcal {L} _ {o c c} ^ {h}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.627, + 0.785, + 0.658 + ], + "angle": 0, + "content": "where \\(\\mathcal{L}_{nrm}\\) and \\(\\mathcal{L}_{SDF}\\) measure the averaged L2 norm of normals and SDF values. \\(\\mathcal{L}_{occ}^{h}\\) computes a mean of binary cross entropy function of each LoD-h." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.684, + 0.329, + 0.7 + ], + "angle": 0, + "content": "4 Dataset" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.719, + 0.788, + 0.842 + ], + "angle": 0, + "content": "As shown in Table 1, existing datasets are limited in the diversity of object categories. Although the CO3D V2 dataset [52] contains data for \\(40\\mathrm{k}\\) objects, because the provided ground-truth 3D shapes are reconstructed from unposed multi-view images, they tend to be highly noisy and parts of the object missing due to lack of visibility. To tackle this problem, we leverage Objaverse [12], a large-scale 1M 3D object dataset containing 46k objects with LVIS category annotations. To focus on completion of hand-held objects, we select 601 categories and ensure that the largest dimension of the objects in each category" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.116, + 0.365, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "text", + "bbox": [ + 0.219, + 0.147, + 0.786, + 0.281 + ], + "angle": 0, + "content": "falls approximately within the range of \\(4\\mathrm{cm}\\) to \\(40~\\mathrm{cm}\\). In addition, for high-quality rendering, we omit objects that lack textures, contain more than 10,000 vertices, or are articulated. To increase the number of objects, we add objects from Google Scanned Objects (GSO) [16], which results in 12,655 objects in total. We render 1M images of 25,000 scenes using physics-based rendering and positioning via BlenderProc [13] to simulate realistic scenes (Figure 3). For each image, we randomly choose a camera view such that at least one object is within the camera frame. We also generate 1,000 images using 250 withheld objects for evaluation." + }, + { + "type": "title", + "bbox": [ + 0.219, + 0.304, + 0.46, + 0.321 + ], + "angle": 0, + "content": "5 Experimental Results" + }, + { + "type": "text", + "bbox": [ + 0.219, + 0.333, + 0.786, + 0.498 + ], + "angle": 0, + "content": "Implementation details. We train all the models for 2 epochs using the Adam [29] optimizer with a learning rate of 0.002 and batch size of 16 on NVIDIA A100. Note that the models are only trained on the synthetic dataset introduced in Section 4. In addition, the number of Transformer blocks \\( K \\), the feature dimension \\( D \\), and \\( D' \\) are set to 3, 32, and 192 respectively. We use a pretrained model of ResNeXt-50 [69] as an image encoder for all the experiments. The ground-truth occupancy, SDF and normals are computed from meshes with OpenVDB [45]. During training, we dilate ground-truth masks using the radius randomly selected from 1, 3 and 5 pixels to deal with the segmentation error around the object edges. During evaluation, we use ground-truth masks provided by the datasets." 
+ }, + { + "type": "text", + "bbox": [ + 0.219, + 0.511, + 0.786, + 0.616 + ], + "angle": 0, + "content": "Evaluation metrics. We report Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) to evaluate the quality of a completed surface. For surface-based methods, we use a predicted surface directly for evaluation. For the methods that predict occupancy, the marching cubes algorithm [41] is used to extract a surface and uniformly sample 100,000 points from its surface such that the number of points are roughly equal to the surface prediction methods. We use mm as a unit for all the reported metrics." + }, + { + "type": "text", + "bbox": [ + 0.219, + 0.629, + 0.786, + 0.839 + ], + "angle": 0, + "content": "Evaluation datasets. We evaluate the baselines and our model on one synthetic and three real-world datasets. For the synthetic dataset, we render 1,000 images using textured 3D scans from Objaverse [12], following the same procedure described in Section 4. We randomly choose 3 to 5 objects per image from the withheld objects for Objavese dataset. Since these 3D scans are relatively more complex than the objects seen in the real-world datasets we use, they can provide a good scene completion quality estimate for complex objects. For the real-world dataset, we use the YCB-Video [68], HOPE [36] and HomebrewedDB (HB) [28] datasets. YCB-Video consists of 21 everyday objects with diverse shapes. HOPE contains 28 simple household objects with mostly rectangular and cylindrical everyday shapes, and the images are captured in various lighting conditions in indoor scenes using a RealSense D415 RGBD camera. HB includes 33 objects (e.g., toy, household, and industrial objects). Their images are taken by PrimeSense Carmine in lab-like environments." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.45, + 0.115, + 0.732, + 0.13 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.785, + 0.2 + ], + "angle": 0, + "content": "Table 2: Quantitative evaluation of multi-object scene completion on Ours, YCB-Video [68], HOPE [36], and HomebrewedDB [28] datasets. Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) are reported. Chamfer distance is reported in the unit of mm." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.214, + 0.78, + 0.356 + ], + "angle": 0, + "content": "
| Method | 3D Rep. | Synthetic (Ours) CD↓ | F1↑ | NC↑ | Real (YCB-Video [68]) CD↓ | F1↑ | NC↑ | Real (HB [28]) CD↓ | F1↑ | NC↑ | Real (HOPE [36]) CD↓ | F1↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VoxFormer [34] | Dense | 44.54 | 0.382 | 0.653 | 30.32 | 0.438 | 0.641 | 34.84 | 0.366 | 0.608 | 47.75 | 0.323 |
| ShapeFormer [71] | Dense | 39.50 | 0.401 | 0.593 | 38.21 | 0.385 | 0.588 | 40.93 | 0.328 | 0.594 | 39.54 | 0.306 |
| MCC [66] | Implicit | 43.37 | 0.459 | 0.700 | 35.85 | 0.289 | 0.608 | 19.59 | 0.371 | 0.655 | 17.53 | 0.357 |
| ConvONet [48] | Dense | 23.68 | 0.541 | 0.710 | 32.87 | 0.458 | 0.649 | 26.71 | 0.504 | 0.643 | 20.95 | 0.581 |
| POCO [1] | Implicit | 21.11 | 0.634 | 0.753 | 15.45 | 0.587 | 0.699 | 13.17 | 0.624 | 0.709 | 13.20 | 0.602 |
| AICNet [31] | Dense | 15.64 | 0.573 | 0.741 | 12.26 | 0.545 | 0.702 | 11.87 | 0.557 | 0.674 | 11.40 | 0.564 |
| Minkowski [6] | Sparse | 11.47 | 0.746 | 0.802 | 8.04 | 0.761 | 0.717 | 8.81 | 0.728 | 0.719 | 8.56 | 0.734 |
| OCNN [63] | Sparse | 9.05 | 0.782 | 0.828 | 7.10 | 0.778 | 0.771 | 7.02 | 0.792 | 0.736 | 8.05 | 0.742 |
| Ours | Sparse | 6.48 | 0.839 | 0.848 | 6.40 | 0.800 | 0.785 | 6.14 | 0.819 | 0.770 | 6.97 | 0.803 |
" + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.384, + 0.787, + 0.745 + ], + "angle": 0, + "content": "Baselines. As discussed in Secs. 1 and 2, multi-object scene completion from a single RGB-D image is relatively not explored due to the lack of large-scale and diverse multi-object scene completion datasets. We carefully choose baseline architectures that can support this task with simple or no adaptation. We focus on three primary method types from related fields. Firstly, we select Semantic Scene Completion (SSC) methods [6,31,34,63] that do not heavily rely on domain or categorical knowledge of indoor or outdoor scenes. Secondly, we opt for object shape completion methods [6,63,66,71] that can be extended to multi-object scene completion without an architectural modification and prohibitive memory utilization. Thirdly, we consider voxel or octree-based 3D reconstruction methods [1,6,48,63] that predict a complete and plausible shape using noisy and sparse point cloud data. For dense voxel-based (e.g., AICNet [31], ConvONet [48] and VoxFormer [34]) and sparse voxel-based methods (e.g., MinkowskiNet [6], OCNN [63], and our method), we use LoD-6 and LoD-9 as an input resolution respectively. All the experiments are conducted using the original implementation provided by the authors, with few simple modifications to adapt for multi-object scene completion and a fair comparison. For instance, we extend the baselines that take the point cloud as input by concatenating the image features to the point cloud features. For occupancy-based methods, though their output voxel grid resolution is LoD-6, we use trilinear interpolation to predict occupancy at LoD-7 [48]. For MinkowskiNet [6] and OCNN [62,63], we use the U-Net architecture with the depth of 5 (LoD-9 to LoD-4). We discuss further details about the baseline architectures, their modifications, and hyperparameters in the supplemental." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.769, + 0.433, + 0.784 + ], + "angle": 0, + "content": "5.1 Quantitative Results" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.795, + 0.785, + 0.84 + ], + "angle": 0, + "content": "Table 2 shows that our method outperforms the baselines on all the metrics and datasets. Although our model is only trained on synthetic data, it demonstrates strong generalizability to real-world datasets. We also remark that our" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.449, + 0.244 + ], + "angle": 0, + "content": "Table 3: Ablation Study of positional encoding on our synthetic dataset. We compare w/o positional encoding, conditional positional encoding (CPE) [7], absolute positional encoding (APE) used in [34], and RoPE [59]." + }, + { + "type": "table", + "bbox": [ + 0.224, + 0.254, + 0.449, + 0.335 + ], + "angle": 0, + "content": "
| Type | CD↓ | F1↑ | NC↑ |
|---|---|---|---|
| w/o | 11.32 | 0.778 | 0.808 |
| CPE [7] | 9.91 | 0.785 | 0.811 |
| APE [34] | 8.61 | 0.782 | 0.825 |
| RPE [61] | 7.81 | 0.804 | 0.830 |
| RoPE [59] | 6.48 | 0.839 | 0.848 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.463, + 0.153, + 0.785, + 0.196 + ], + "angle": 0, + "content": "Table 4: Ablation study on 3D attention algorithms. The scores are reported on the HOPE dataset [36]." + }, + { + "type": "table", + "bbox": [ + 0.468, + 0.209, + 0.789, + 0.328 + ], + "angle": 0, + "content": "
| Method | Occ. Masking | CD↓ | F1↑ | Runtime↓ |
|---|---|---|---|---|
| 3D DSA [34] | | 12.14 | 0.703 | 93.3 |
| Neighbor. Attn. [77] | | 9.26 | 0.727 | 130.8 |
| Octree Attn. [61] | | 7.99 | 0.752 | 116.4 |
| Neighbor. Attn. [77] | | 8.81 | 0.759 | 111.9 |
| Octree Attn. [61] | | 7.54 | 0.772 | 105.3 |
| Full + Self Attn. | | 7.21 | 0.785 | 86.2 |
| Full + Cross Attn. | | 6.97 | 0.803 | 85.1 |
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.367, + 0.788, + 0.625 + ], + "angle": 0, + "content": "method exhibits robustness to the noise characteristics present in depth data captured by typical RGB-D cameras despite being trained on noise-free depth data in simulation. The comparisons show that hierarchical structures and the latent 3D MAE are key to predicting 3D shapes of unseen objects more accurately than the baselines. Unlike our method, VoxFormer [34] uses an MAE with 3D deformable attention where only 8 neighbors of the reference points at the finest resolution are considered. Figure 8 also demonstrates that methods using a dense voxel grid or implicit representation fail to generalize to novel shapes. This implies that capturing a right choice of a network architecture is crucial to learn generalizable shape priors for zero-shot multi-object scene completion. Our method has the similar U-Net architecture used in MinkowskiNet [6] and OCNN [62] except we use the latent 3D MAE at LoD-5 instead of making the network deeper. This indicates that the latent 3D MAE can better approximate the shape distribution of the training dataset by leveraging an attention mechanism to capture global 3D contexts. Table 7 also confirms that our method achieves the best scene completion quality by measuring Chamfer distance in visible and occluded regions separately." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Positional encoding. As shown in Table 3, we explore the effect of RoPE [59] on the validation set of our synthetic dataset. The first row shows that all the metrics significantly drop if positional encoding is not used. In addition, we test CPE [7], APE [34], and RPE [61] and obtain slightly better scores. CPE [7] is typically more effective than APE in tasks such as 3D instance/semantic segmentation and object detection where a complete 3D point cloud is given. However, this result highlights the challenge of capturing position information from mask tokens which initially have the identical parameters. Our method employs RoPE [59] for relative positional embedding. One of the important aspect of RoPE [59] is that it does not have any learnable parameters. Despite this, it demonstrates superior performance compared to other approaches. Although RoPE was originally proposed in the domain of natural language processing, our experiment reveals its effectiveness in multi-object 3D scene completion." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.449, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.145, + 0.449, + 0.186 + ], + "angle": 0, + "content": "Table 5: Ablation study of the number of MAE layers on our synthetic dataset." + }, + { + "type": "table", + "bbox": [ + 0.223, + 0.199, + 0.452, + 0.254 + ], + "angle": 0, + "content": "
| #Layers | CD↓ | F1↑ | NC↑ | Runtime↓ |
|---|---|---|---|---|
| 1 | 9.01 | 0.784 | 0.828 | 76.4 |
| 3 | 6.48 | 0.839 | 0.848 | 85.1 |
| 5 | 5.75 | 0.850 | 0.855 | 96.2 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.462, + 0.152, + 0.784, + 0.181 + ], + "angle": 0, + "content": "Table 6: Ablation study of U-Net architectures on HomebrewedDB dataset [28]." + }, + { + "type": "table", + "bbox": [ + 0.48, + 0.193, + 0.766, + 0.248 + ], + "angle": 0, + "content": "
| Architecture | CD↓ | F1↑ | NC↑ | Runtime↓ |
|---|---|---|---|---|
| Mink. U-Net [6] | 7.26 | 0.788 | 0.743 | 83.8 |
| OctFormer [61] | 7.45 | 0.756 | 0.728 | 114.4 |
| Octree U-Net [62] | 6.14 | 0.819 | 0.770 | 85.1 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.272, + 0.784, + 0.3 + ], + "angle": 0, + "content": "Table 7: Comparisons of the runtime (ms). For reference, we also show Chamfer distance of visible \\(\\mathrm{CD}_{vis}\\) and occluded \\(\\mathrm{CD}_{occ}\\) regions on our synthetic dataset." + }, + { + "type": "table", + "bbox": [ + 0.223, + 0.313, + 0.78, + 0.461 + ], + "angle": 0, + "content": "
| Method | 3D Rep. | Resolution | CDvis↓ | CDocc↓ | CD↓ | Runtime↓ |
|---|---|---|---|---|---|---|
| VoxFormer [34] | Dense | 128³ | 18.25 | 66.32 | 44.54 | 79.5 |
| ShapeFormer [71] | Dense | 128³ | 14.61 | 63.33 | 39.50 | 1.8 × 10⁴ |
| MCC [66] | Implicit | 128³ | 15.39 | 63.41 | 44.37 | 9.1 × 10³ |
| ConvONet [48] | Dense | 128³ | 17.09 | 34.09 | 23.68 | 48.4 |
| POCO [1] | Implicit | 128³ | 10.37 | 31.55 | 21.11 | 758.8 |
| AICNet [31] | Dense | 128³ | 9.98 | 21.43 | 15.64 | 24.2 |
| Minkowski [6] | Sparse | 512³ | 7.12 | 15.44 | 11.47 | 78.5 |
| OCNN [63] | Sparse | 512³ | 3.87 | 12.16 | 9.05 | 80.1 |
| Ours | Sparse | 512³ | 3.29 | 9.40 | 6.48 | 85.1 |
" + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.493, + 0.784, + 0.569 + ], + "angle": 0, + "content": "3D Attention algorithms. Table 4 reveals that occlusion masking yields better runtime and metrics than dense masking. Furthermore, our experiments suggest that full attention and Octree attention, both characterized by their wider receptive fields, are more effective compared to local attention algorithms such as 3D deformable self-attention (3D DSA) [34] and neighborhood attention [77]." + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.591, + 0.785, + 0.651 + ], + "angle": 0, + "content": "Number of layers in 3D latent MAE. We further explore the design of 3D latent MAE in Table 5. Increasing the number of layers in 3D latent MAE improves the scene completion quality while making the runtime slower. Consequently, we select 3 layers for a good trade-off between the accuracy and runtime." + }, + { + "type": "text", + "bbox": [ + 0.216, + 0.674, + 0.786, + 0.84 + ], + "angle": 0, + "content": "U-Net architectures. In Table 6, we investigate U-Net architectures. The key difference of Minkowski U-Net [6] is the use of a sparse tensor as an underlying data structure instead of an octree, which gives a slightly better performance than Octree U-Net [62]. OctFormer [61] proposes an octree-based window attention mechanism using the 3D Z-order curve to support a much larger kernel size than Octree U-Net. In general, a wider range of an effective receptive field helps achieve better performance. Nonetheless, OctFormer achieves a chamfer distance and F-1 score of 7.45 and 0.756, which is worse than Octree U-Net by 1.31 and 0.063 respectively. This indicates that the OctFormer's attention mechanism is less effective compared to an Octree U-Net architecture especially in the presence of latent 3D MAE, playing the similar role in the latent space." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "image", + "bbox": [ + 0.221, + 0.143, + 0.49, + 0.279 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.305, + 0.496, + 0.376 + ], + "angle": 0, + "content": "Fig.5: Scaling of the metrics with the number of objects in a training dataset. We conduct the experiments by changing the ratio of the number of objects to \\(1\\%\\), \\(5\\%\\), \\(10\\%\\), \\(20\\%\\), \\(40\\%\\), \\(60\\%\\), \\(80\\%\\), and \\(100\\%\\)." 
+ }, + { + "type": "image", + "bbox": [ + 0.525, + 0.155, + 0.603, + 0.221 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.222, + 0.609, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.536, + 0.284, + 0.592, + 0.294 + ], + "angle": 0, + "content": "Ground-Truth" + }, + { + "type": "image", + "bbox": [ + 0.606, + 0.157, + 0.681, + 0.221 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.611, + 0.222, + 0.692, + 0.278 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.639, + 0.285, + 0.668, + 0.294 + ], + "angle": 0, + "content": "OCNN" + }, + { + "type": "image", + "bbox": [ + 0.693, + 0.157, + 0.766, + 0.22 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.699, + 0.222, + 0.777, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.725, + 0.285, + 0.745, + 0.294 + ], + "angle": 0, + "content": "Ours" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.309, + 0.788, + 0.366 + ], + "angle": 0, + "content": "Fig.6: Qualitative comparison of OCNN [62] and our method. Our proposed latent 3D MAE helps predict globally consistent scene completion." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.404, + 0.789, + 0.617 + ], + "angle": 0, + "content": "Runtime analysis. Table 7 shows the runtime performance of the baselines and our method. For a fair comparison, we run inference over the 50 samples of the HOPE dataset and report the average time. For occupancy-based methods, we predict occupancy on object surfaces and occluded regions. Due to the memory-intensive nature of MCC [1]'s Transformer architecture, we run inference multiple times with the maximum chunk size of 10,000 points. Our experiments demonstrate that implicit 3D representations used in POCO [1] and MCC [66] become slower when the voxel grid resolution is higher. Further, an autoregressive Transformer adopted in ShapeFormer [71] greatly increases the runtime. Conversely, the methods which leverage sparse voxel grids (e.g., MinkowskiNet [6], OCNN [63], and Ours) achieve much faster runtime thanks to efficient sparse 3D convolutions, and hierarchical pruning on predicted surfaces. Our method offers runtimes comparable to the fastest method, while implementing attention operations over the scene via latent 3D MAE, and achieving superior reconstruction." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.629, + 0.789, + 0.766 + ], + "angle": 0, + "content": "Dataset scale analysis. To assess the importance of the large-scale 3D scene completion datasets, we train our model on splits of increasing sizes which contain \\(1\\%\\), \\(5\\%\\), \\(10\\%\\), \\(20\\%\\), \\(40\\%\\), \\(60\\%\\), \\(80\\%\\), and \\(100\\%\\) of the total number of the objects in our dataset. We report metrics on the test split of our dataset. Section 5.1 shows that all the metrics have a strong correlation with respect to the number of objects. This could imply that the model benefits significantly from increased data diversity and volume, enhancing its ability to understand and complete 3D shapes. We believe that this analysis is crucial for understanding the relationship between data quantity and model performance." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.786, + 0.422, + 0.802 + ], + "angle": 0, + "content": "5.2 Qualitative Results" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.81, + 0.787, + 0.84 + ], + "angle": 0, + "content": "Figure 7 shows the qualitative results of our method on both of the synthetic and real-world datasets from three different views. Unlike the synthetic dataset," + } + ], + [ + { + "type": "header", + "bbox": [ + 0.449, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.15, + 0.287, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.191, + 0.285, + 0.23 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.235, + 0.286, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.218, + 0.276, + 0.284, + 0.314 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.228, + 0.316, + 0.275, + 0.324 + ], + "angle": 0, + "content": "RGB-D Image" + }, + { + "type": "image", + "bbox": [ + 0.29, + 0.15, + 0.362, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.29, + 0.191, + 0.362, + 0.23 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.291, + 0.238, + 0.361, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.291, + 0.279, + 0.361, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.321, + 0.317, + 0.345, + 0.324 + ], + "angle": 0, + "content": "View 1" + }, + { + "type": "image", + "bbox": [ + 0.367, + 0.15, + 0.442, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.367, + 0.191, + 0.442, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.375, + 0.239, + 0.435, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.373, + 0.279, + 0.434, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.391, + 0.317, + 0.417, + 0.324 + ], + "angle": 0, + "content": "View 2" + }, + { + "type": "image", + "bbox": [ + 0.445, + 0.152, + 0.509, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.445, + 0.191, + 0.509, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.445, + 0.239, + 0.506, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.445, + 0.279, + 0.504, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.465, + 0.317, + 0.487, + 0.324 + ], + "angle": 0, + "content": "View 3" + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.153, + 0.585, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.191, + 0.585, + 0.23 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.238, + 0.585, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.279, + 0.584, + 0.315 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.53, + 0.317, + 0.576, + 0.324 + ], + "angle": 0, + "content": "RGB-D Image" + }, + { + "type": 
"image_footnote", + "bbox": [ + 0.592, + 0.164, + 0.601, + 0.174 + ], + "angle": 0, + "content": "#" + }, + { + "type": "image", + "bbox": [ + 0.604, + 0.156, + 0.653, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.604, + 0.193, + 0.654, + 0.231 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.594, + 0.248, + 0.661, + 0.267 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.594, + 0.289, + 0.662, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.625, + 0.317, + 0.648, + 0.324 + ], + "angle": 0, + "content": "View 1" + }, + { + "type": "image", + "bbox": [ + 0.669, + 0.157, + 0.712, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.673, + 0.193, + 0.712, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.668, + 0.236, + 0.775, + 0.276 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.668, + 0.279, + 0.749, + 0.314 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.684, + 0.316, + 0.706, + 0.323 + ], + "angle": 0, + "content": "View 2" + }, + { + "type": "image_footnote", + "bbox": [ + 0.756, + 0.156, + 0.768, + 0.166 + ], + "angle": 0, + "content": "." + }, + { + "type": "image_footnote", + "bbox": [ + 0.769, + 0.157, + 0.779, + 0.166 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_footnote", + "bbox": [ + 0.779, + 0.157, + 0.791, + 0.166 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_footnote", + "bbox": [ + 0.779, + 0.166, + 0.79, + 0.167 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_footnote", + "bbox": [ + 0.779, + 0.166, + 0.79, + 0.167 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_footnote", + "bbox": [ + 0.769, + 0.166, + 0.779, + 0.167 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_footnote", + "bbox": [ + 0.769, + 0.167, + 0.779, + 0.168 + ], + "angle": 0, + "content": "" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.208 + ], + "angle": 0, + "content": "tation methods to obtain instance-level completed shapes. Third, our method does not handle uncertainty of surface prediction explicitly. In future work, we plan to extend our method to model uncertainty to improve the scene completion quality and diversity." + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.254, + 0.789, + 0.713 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.725, + 0.788, + 0.797 + ], + "angle": 0, + "content": "Fig. 8: Comparisons on HomebrewedDB dataset (Top), and HOPE (Bottom) datasets. For better visibility, we show the generated and ground truth shapes. The top and bottom rows show an image from near camera and back views respectively. Compared to the other methods, our method predicts accurate and consistent shapes on a challenging scene completion task for novel objects." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.449, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.383, + 0.163 + ], + "angle": 0, + "content": "Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.178, + 0.784, + 0.193 + ], + "angle": 0, + "content": "We thank Zubair Irshad and Jenny Nan for valuable feedback and comments." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.194, + 0.625, + 0.209 + ], + "angle": 0, + "content": "This research is supported by Toyota Research Institute." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.234, + 0.323, + 0.25 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.267, + 0.785, + 0.295 + ], + "angle": 0, + "content": "1. Boulch, A., Marlet, R.: POCO: Point Convolution for Surface Reconstruction. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.296, + 0.785, + 0.323 + ], + "angle": 0, + "content": "2. Bozic, A., Palafox, P., Thies, J., Dai, A., Nießner, M.: TransformerFusion: Monocular rgb scene reconstruction using transformers. In: NeurIPS (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.324, + 0.785, + 0.365 + ], + "angle": 0, + "content": "3. Chan, E.R., Nagano, K., Chan, M.A., Bergman, A.W., Park, J.J., Levy, A., Aittala, M., Mello, S.D., Karras, T., Wetzstein, G.: GeNVS: Generative novel view synthesis with 3D-aware diffusion models. In: CoRR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.365, + 0.785, + 0.393 + ], + "angle": 0, + "content": "4. Chen, H.X., Huang, J., Mu, T.J., Hu, S.M.: CIRCLE: Convolutional Implicit Reconstruction And Completion For Large-Scale Indoor Scene. In: ECCV (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.394, + 0.785, + 0.421 + ], + "angle": 0, + "content": "5. Cheng, Y.C., Lee, H.Y., Tulyakov, S., Schwing, A.G., Gui, L.Y.: SDFusion: Multimodal 3d shape completion, reconstruction, and generation. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.422, + 0.785, + 0.449 + ], + "angle": 0, + "content": "6. Choy, C., Gwak, J., Savarese, S.: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In: CVPR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.449, + 0.785, + 0.477 + ], + "angle": 0, + "content": "7. Chu, X., Tian, Z., Zhang, B., Wang, X., Shen, C.: Conditional Positional Encodings for Vision Transformers. In: ICLR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.478, + 0.785, + 0.504 + ], + "angle": 0, + "content": "8. Computer, T.: RedPajama: an Open Dataset for Training Large Language Models (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.505, + 0.785, + 0.533 + ], + "angle": 0, + "content": "9. Dai, A., Diller, C., Nießner, M.: SG-NN: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. In: CVPR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.534, + 0.785, + 0.575 + ], + "angle": 0, + "content": "10. Dai, A., Ritchie, D., Bokeloh, M., Reed, S., Sturm, J., Nießner, M.: ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In: CVPR (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.576, + 0.785, + 0.603 + ], + "angle": 0, + "content": "1. 
Dao, T.: FlashAttention-2: Faster attention with better parallelism and work partitioning (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.604, + 0.785, + 0.644 + ], + "angle": 0, + "content": "2. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A Universe of Annotated 3D Objects. CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.645, + 0.785, + 0.687 + ], + "angle": 0, + "content": "3. Denninger, M., Winkelbauer, D., Sundermeyer, M., Boerdijk, W., Knauer, M., Strobl, K.H., Humt, M., Triebel, R.: BlenderProc2: A Procedural Pipeline for Photorealistic Rendering. Journal of Open Source Software (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.688, + 0.785, + 0.714 + ], + "angle": 0, + "content": "4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: NAACL (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.715, + 0.785, + 0.77 + ], + "angle": 0, + "content": "5. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.771, + 0.785, + 0.812 + ], + "angle": 0, + "content": "6. Downs, L., Francis, A., Koenig, N., Kinman, B., Hickman, R., Reymann, K., McHugh, T.B., Vanhoucke, V.: Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items. In: ICRA (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "7. Duan, Y., Zhu, H., Wang, H., Yi, L., Nevatia, R., Guibas, L.J.: Curriculum deepsdf. In: ECCV (2020)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.267, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.127 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.787, + 0.189 + ], + "angle": 0, + "content": "18. Dupont, E., Kim, H., Eslami, S.M.A., Rezende, D.J., Rosenbaum, D.: From data to functa: Your data point is a function and you can treat it like one. In: ICML (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.189, + 0.786, + 0.217 + ], + "angle": 0, + "content": "19. Gao, P., Ma, T., Li, H., Dai, J., Qiao, Y.: ConvMAE: Masked Convolution Meets Masked Autoencoders. NeurIPS (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.217, + 0.786, + 0.257 + ], + "angle": 0, + "content": "20. Goldblum, M., Finzi, M., Rowan, K., Wilson, A.G.: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. CoRR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.257, + 0.786, + 0.284 + ], + "angle": 0, + "content": "21. Graham, B., Engelcke, M., van der Maaten, L.: 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. CVPR (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.284, + 0.786, + 0.311 + ], + "angle": 0, + "content": "22. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. 
In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.311, + 0.786, + 0.338 + ], + "angle": 0, + "content": "23. Hou, J., Dai, A., Nießner, M.: RevealNet: Seeing Behind Objects in RGB-D Scans. In: CVPR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.338, + 0.786, + 0.365 + ], + "angle": 0, + "content": "24. Huang, J., Gojcic, Z., Atzmon, M., Litany, O., Fidler, S., Williams, F.: Neural Kernel Surface Reconstruction. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.365, + 0.786, + 0.405 + ], + "angle": 0, + "content": "25. Irshad, M.Z., Zakharov, S., Ambrus, R., Kollar, T., Kira, Z., Gaidon, A.: Shapo: Implicit representations for multi-object shape, appearance, and pose optimization. In: ECCV (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.405, + 0.786, + 0.447 + ], + "angle": 0, + "content": "26. Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.: Real-time Perception meets Reactive Motion Generation. RA-L (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.447, + 0.786, + 0.473 + ], + "angle": 0, + "content": "27. Karaman, S., Frazzoli, E.: Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Rob. Res. (2011)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.473, + 0.786, + 0.501 + ], + "angle": 0, + "content": "28. Kaskman, R., Zakharov, S., Shugurov, I., Ilic, S.: HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects. ICCVW (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.501, + 0.786, + 0.528 + ], + "angle": 0, + "content": "29. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.528, + 0.786, + 0.569 + ], + "angle": 0, + "content": "30. Labbé, Y., Manuelli, L., Mousavian, A., Tyree, S., Birchfield, S., Tremblay, J., Carpentier, J., Aubry, M., Fox, D., Sivic, J.: MegaPose: 6d pose estimation of novel objects via render & compare. In: CoRL (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.569, + 0.786, + 0.596 + ], + "angle": 0, + "content": "31. Li, J., Han, K., Wang, P., Liu, Y., Yuan, X.: Anisotropic Convolutional Networks for 3D Semantic Scene Completion. In: CVPR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.596, + 0.786, + 0.637 + ], + "angle": 0, + "content": "32. Li, J., Liu, Y., Gong, D., Shi, Q., Yuan, X., Zhao, C., Reid, I.: RGBD Based Dimensional Decomposition Residual Network for 3D Semantic Scene Completion. In: CVPR. pp. 7693-7702 (June 2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.637, + 0.786, + 0.677 + ], + "angle": 0, + "content": "33. Li*, L.H., Zhang*, P., Zhang*, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.N., Chang, K.W., Gao, J.: Grounded language-image pretraining. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.677, + 0.786, + 0.717 + ], + "angle": 0, + "content": "34. Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J.M., Fidler, S., Feng, C., Anandkumar, A.: VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.718, + 0.786, + 0.759 + ], + "angle": 0, + "content": "35. Liang, F., Wu, B., Dai, X., Li, K., Zhao, Y., Zhang, H., Zhang, P., Vajda, P., Marculescu, D.: Open-vocabulary semantic segmentation with mask-adapted clip. 
In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.759, + 0.786, + 0.786 + ], + "angle": 0, + "content": "36. Lin, Y., Tremblay, J., Tyree, S., Vela, P.A., Birchfield, S.: Multi-view Fusion for Multi-level Robotic Scene Understanding. In: IROS (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.786, + 0.786, + 0.813 + ], + "angle": 0, + "content": "37. Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural Sparse Voxel Fields. NeurIPS (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.813, + 0.786, + 0.841 + ], + "angle": 0, + "content": "38. Liu, M., Xu, C., Jin, H., Chen, L., Xu, Z., Su, H., et al.: One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. NeurIPS (2023)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.787, + 0.841 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.449, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-Shot Multi-Object Scene Completion" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "39. Liu, R., Wu, R., Hoorick, B.V., Tokmakov, P., Zakharov, S., Vondrick, C.: Zero-1-to-3: Zero-shot One Image to 3D Object. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.175, + 0.785, + 0.202 + ], + "angle": 0, + "content": "40. Liu, Z., Feng, Y., Black, M.J., Nowrouzezahrai, D., Paull, L., Liu, W.: MeshDiffusion: Score-based Generative 3D Mesh Modeling. In: ICLR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.203, + 0.785, + 0.229 + ], + "angle": 0, + "content": "41. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. SIGGRAPH (1987)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.23, + 0.785, + 0.257 + ], + "angle": 0, + "content": "42. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy Networks: Learning 3D Reconstruction in Function Space. In: CVPR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.257, + 0.785, + 0.283 + ], + "angle": 0, + "content": "43. Mittal, P., Cheng, Y.C., Singh, M., Tulsiani, S.: AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.284, + 0.785, + 0.324 + ], + "angle": 0, + "content": "44. Mohammadi, S.S., Duarte, N.F., Dimou, D., Wang, Y., Taiana, M., Morerio, P., Dehban, A., Moreno, P., Bernardino, A., Del Bue, A., Santos-Victor, J.: 3DSGrasp: 3D Shape-Completion for Robotic Grasp. In: ICRA (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.325, + 0.785, + 0.338 + ], + "angle": 0, + "content": "45. Museth, K.: VDB: High-resolution sparse volumes with dynamic topology (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.338, + 0.785, + 0.365 + ], + "angle": 0, + "content": "46. Okumura, K., Défago, X.: Quick Multi-Robot Motion Planning by Combining Sampling and Search. In: IJCAI (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.365, + 0.785, + 0.405 + ], + "angle": 0, + "content": "47. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. 
In: CVPR (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.406, + 0.785, + 0.433 + ], + "angle": 0, + "content": "48. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional Occupancy Networks. In: ECCV (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.433, + 0.785, + 0.447 + ], + "angle": 0, + "content": "49. Rabe, M.N., Staats, C.: Self-attention Does Not Need \\( O(n^{2}) \\) Memory (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.447, + 0.785, + 0.487 + ], + "angle": 0, + "content": "50. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.488, + 0.785, + 0.514 + ], + "angle": 0, + "content": "51. Radford, A., Narasimhan, K.: Improving Language Understanding by Generative Pre-Training (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.514, + 0.785, + 0.555 + ], + "angle": 0, + "content": "52. Reizenstein, J., Shapovalov, R., Henzler, P., Sbordone, L., Labatut, P., Novotny, D.: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction. In: ICCV (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.555, + 0.785, + 0.582 + ], + "angle": 0, + "content": "53. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-Resolution Image Synthesis with Latent Diffusion Models (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.583, + 0.785, + 0.623 + ], + "angle": 0, + "content": "54. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortzman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.623, + 0.785, + 0.651 + ], + "angle": 0, + "content": "55. Shao, T., Yang, Y., Weng, Y., Hou, Q., Zhou, K.: H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis. TVCG (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.651, + 0.785, + 0.677 + ], + "angle": 0, + "content": "56. Shen, T., Gao, J., Yin, K., Liu, M.Y., Fidler, S.: Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. In: NeurIPS (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.677, + 0.785, + 0.704 + ], + "angle": 0, + "content": "57. Shi, Z., Zhou, X., Qiu, X., Zhu, X.: Improving image captioning with better use of captions. CoRR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.705, + 0.785, + 0.732 + ], + "angle": 0, + "content": "58. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic Scene Completion from a Single Depth Image. CVPR (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.732, + 0.785, + 0.759 + ], + "angle": 0, + "content": "59. Su, J., Lu, Y., Pan, S., Wen, B., Liu, Y.: RoFormer: Enhanced Transformer with Rotary Position Embedding. In: ICLR (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.759, + 0.785, + 0.786 + ], + "angle": 0, + "content": "60. Varley, J., DeChant, C., Richardson, A., Ruales, J., Allen, P.: Shape completion enabled robotic grasping. In: IROS (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.786, + 0.785, + 0.813 + ], + "angle": 0, + "content": "61. Wang, P.S.: OctFormer: Octree-based Transformers for 3D Point Clouds. 
SIGGRAPH (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "62. Wang, P.S., Liu, Y., Guo, Y.X., Sun, C.Y., Tong, X.: O-CNN: Octree-Based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH (2017)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "18" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.366, + 0.128 + ], + "angle": 0, + "content": "S. Iwase et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "63. Wang, P.S., Liu, Y., Tong, X.: Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion. In: CVPRW (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.177, + 0.785, + 0.203 + ], + "angle": 0, + "content": "64. Watson, D., Chan, W., Martin-Brualla, R., Ho, J., Tagliasacchi, A., Norouzi, M.: Novel View Synthesis with Diffusion Models. CoRR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.204, + 0.785, + 0.231 + ], + "angle": 0, + "content": "65. Williams, F., Gojcic, Z., Khamis, S., Zorin, D., Bruna, J., Fidler, S., Litany, O.: Neural Fields as Learnable Kernels for 3D Reconstruction. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.232, + 0.785, + 0.259 + ], + "angle": 0, + "content": "66. Wu, C.Y., Johnson, J., Malik, J., Feichtenhofer, C., Gkioxari, G.: Multiview Compressive Coding for 3D Reconstruction. In: CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.26, + 0.785, + 0.286 + ], + "angle": 0, + "content": "67. Wu, X., Lao, Y., Jiang, L., Liu, X., Zhao, H.: Point transformer V2: Grouped Vector Attention and Partition-based Pooling. In: NeurIPS (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.287, + 0.785, + 0.314 + ], + "angle": 0, + "content": "68. Xiang, Y., Schmidt, T., Narayanan, V., Fox, D.: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.785, + 0.342 + ], + "angle": 0, + "content": "69. Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K.: Aggregated Residual Transformations for Deep Neural Networks. CVPR (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.343, + 0.785, + 0.383 + ], + "angle": 0, + "content": "70. Xu, J., Liu, S., Vahdat, A., Byeon, W., Wang, X., De Mello, S.: ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models. CVPR (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.384, + 0.785, + 0.424 + ], + "angle": 0, + "content": "71. Yan, X., Lin, L., Mitra, N.J., Lischinski, D., Cohen-Or, D., Huang, H.: Shape-Former: Transformer-based Shape Completion via Sparse Representation. In: CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.425, + 0.785, + 0.452 + ], + "angle": 0, + "content": "72. Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., Zhou, J.: PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers. In: ICCV (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.453, + 0.785, + 0.479 + ], + "angle": 0, + "content": "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. 
CVPR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.48, + 0.785, + 0.507 + ], + "angle": 0, + "content": "74. Zhang, D., Choi, C., Park, I., Kim, Y.M.: Probabilistic Implicit Scene Completion. In: ICLR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.508, + 0.785, + 0.55 + ], + "angle": 0, + "content": "75. Zhang, H., Zhang, P., Hu, X., Chen, Y.C., Li, L.H., Dai, X., Wang, L., Yuan, L., Hwang, J.N., Gao, J.: GLIPv2: Unifying Localization and Vision-Language Understanding. CoRR (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.55, + 0.785, + 0.577 + ], + "angle": 0, + "content": "76. Zhang, P., Liu, W., Lei, Y., Lu, H., Yang, X.: Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion. In: ICCV (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.578, + 0.785, + 0.604 + ], + "angle": 0, + "content": "77. Zhao, H., Jiang, L., Jia, J., Torr, P.H., Koltun, V.: Point transformer. In: ICCV (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.605, + 0.785, + 0.633 + ], + "angle": 0, + "content": "78. Zhu, Y., Tian, Y., Mexatas, D., Dollar, P.: Semantic Amodal Segmentation. In: CVPR (2017)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.785, + 0.633 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_origin.pdf b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..273cac5fcd07cc2a7287b4b180718c021a399d6b --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/72685078-1b9b-4a60-bb08-b29f03303447_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a0d00ba88df93bd239aaa94c0a7a38e6a8220c2449642b811326ee6ba216b4e +size 23616814 diff --git a/2024/Zero-Shot Multi-Object Scene Completion/full.md b/2024/Zero-Shot Multi-Object Scene Completion/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3d44b17c61d75fc4843767cdefc888406dc88213 --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/full.md @@ -0,0 +1,372 @@ +# Zero-Shot Multi-Object Scene Completion + +Shun Iwase $^{1,2}$ , Katherine Liu $^{2}$ , Vitor Guizilini $^{2}$ , Adrien Gaidon $^{2}$ , Kris Kitani $^{1,\star}$ , Rares Ambrus $^{2,\star}$ , and Sergey Zakharov $^{2,\star}$ + +1 Carnegie Mellon University +$^{2}$ Toyota Research Institute + +![](images/94a420cf1808dd372b6b02b11ac2ae0db122c5606ced637ce65257c7c364fd75.jpg) +Fronr View + +![](images/27744617545a63b0e9081ea48a5034b506831f788138e5d5ecbbbd5303bdda21.jpg) + +![](images/c6ec8197a39ff9c9034c6c3ac15898c7aebbbcfd5cab7262a1c76cf46c44e041.jpg) + +![](images/b5f7ca3cdd94a16ad1d9bd92d76dd19e8baf495848a9d399e6b2b7d0934d137a.jpg) + +![](images/b8e9ef344ae3431f3ca52200e140a0872b2d61ad27673b321a0d0255be896c79.jpg) + +![](images/1e772d8508331de254e6ab1bcea06516f9f1a9e016958fd814dc1e11af54f086.jpg) + +![](images/cd8ec304519fbb25a27eabfdf46d78a28be15dacfdb2c29a76e895bd927ca750.jpg) +RGB-D Image +Fig. 1: Given an RGB-D image and the foreground mask of multiple objects not seen during training, our method predicts their complete 3D shapes quickly and accurately, including occluded areas. (Left) Synthetic image results. (Right) Zero-shot generalization to a real-world image of household objects with noisy depth data. 
Our 3D results are rotated with respect to the input to highlight completions in occluded regions. + +![](images/53ebc4921a5773cb85a71908e98f9b4b11808f0b94329d211b00b476febebabe.jpg) + +![](images/8db3e66a586be6def3c098d4d27244b7af8ea77e82d963a1ce7a617aae02824f.jpg) + +![](images/90e5f82e917c60fb23766ccfe9f170c45a30748dbc62fb797b09d94bec19c1f8.jpg) +Bae + +![](images/7da9167f0d2c15e995ac8f5dc7b1e35cfbc96f18458831f2f64a9768a3608345.jpg) +Completed 3D Shape + +![](images/7bf33584df2eb6171c35fc5acc6a5c34a3690bdb079222da6ffd289860951336.jpg) +Ground-Truth + +Abstract. We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image. Despite notable advancements in single-object 3D shape completion, high-quality reconstructions in highly cluttered real-world multi-object scenes remains a challenge. To address this issue, we propose OctMAE, an architecture that leverages an Octree U-Net and a latent 3D MAE to achieve high-quality and near real-time multi-object scene completion through both local and global geometric reasoning. Because a naive 3D MAE can be computationally intractable and memory intensive even in the latent space, we introduce a novel occlusion masking strategy and adopt 3D rotary embeddings, which significantly improve the runtime and scene completion quality. To generalize to a wide range of objects in diverse scenes, we create a large-scale photorealistic dataset, featuring a diverse set of 12K 3D object models from the Objaverse dataset that are rendered in multi-object scenes with physics-based positioning. Our method outperforms the current state-of-the-art on both synthetic and real-world datasets and demonstrates a strong zero-shot capability. https://sh8.io/#/oct_mae + +![](images/83a8d659290df93065f3d0a08b3edc2f16723f2b5fb98b0b9732e1bd20667dbb.jpg) +Fig. 2: Overview of our proposed method (OctMAE). Given an input RGB Image $\mathbf{I}$ , depth map $\mathbf{D}$ , and a foreground mask $\mathbf{M}$ , the octree feature $\mathbf{F}$ is obtained by unprojecting an image feature encoded by a pre-trained image encoder $\mathbf{E}$ . The octree feature is then encoded by the Octree encoder and downsampled to the Level of Detail (LoD) of 5. The notation LoD- $h$ indicates that each axis of the voxel grid has resolution of $2^h$ . The latent 3D MAE takes the encoded Octree feature $\mathbf{F}$ as input and its output feature is concatenated with the occlusion mask tokens $\mathbf{T}$ . Next, the masked decoded feature $\mathbf{F}_{ML}$ is computed by sparse 3D MAE decoder. Finally, the Octree decoder predicts a completed surface at LoD-9. + +# 1 Introduction + +Humans can instantly imagine complete shapes of multiple novel objects in a cluttered scene via advanced geometric and semantic reasoning. This ability is also essential for robots if they are to effectively perform useful tasks in the real world [26, 27, 46, 60]. In this work, we propose a method that can quickly and accurately complete a wide number of objects in diverse real-world scenes. + +Prior works [31, 34, 36, 43, 47, 71] have achieved phenomenal progress in scene and object shape completion from a single RGB-D image. Object-centric methods [17, 25] in particular can achieve very high reconstruction accuracy by relying on category-specific shape priors. 
However, when deployed on entire scenes such methods require bespoke instance detection/segmentation models, and often perform test-time optimization which is time consuming and would hinder real-time deployment on a robot. Moreover, existing methods are typically limited to a small set of categories. Thus, zero-shot multi-object scene completion remains a challenging and open problem that has seen little success to date. This is in stark contrast to the sudden increase in powerful algorithms for 2D computer vision tasks such as object detection [33, 75] and image segmentation [35, 70]. We attribute this progress to a great extent to the availability of large-scale datasets [8, 54] coupled with neural architectures and learning objectives [22, 50, 53, 57] that can effectively exploit the highly structured data occurring in the natural world [20]. + +Taking inspiration from the latest developments in the 2D domain, we propose a scene completion algorithm at the scene level that generalizes across a large number of shapes and that only supposes an RGB-D image and foreground mask as input. Our method consists of Octree masked autoencoders (OctMAE) — a hybrid architecture of Octree U-Net and a latent 3D MAE (Figure 2). Although a recent work, VoxFormer [34], also extends MAE architecture to 3D + +using deformable 3D attention and shows great improvement in semantic scene completion tasks, its memory utilization is still prohibitive to handle a higher resolution voxel grid. We address this issue by integrating 3D MAE into the latent space of Octree U-Net. Our experiments show that the latent 3D MAE is the key to global structure understanding and leads to strong performance and generalization across all datasets. Moreover, we find that the choice of a masking strategy and 3D positional embeddings is crucial to achieve better performance. We provide extensive ablations to verify that our 3D latent MAE design is effective. + +Our second contribution consists of the creation of a novel synthetic dataset to counteract the lack of large-scale and diverse 3D datasets. The dataset contains 12K 3D models of hand-held objects from Objaverse [12] and GSO [16] datasets (Figure 3). We utilize the dataset to conduct a comprehensive evaluation of our method as well as other baselines and show that our method scales and achieves better results. Finally, we perform zero-shot evaluations on synthetic as well as real datasets and show that a combination of 3D diversity coupled with an appropriate architecture is key to generalizable scene completion in the wild. + +Our contributions can be summarized as follows: + +- We present a novel network architecture, Octree Masked Autoencoders (OctMAE), a hybrid architecture of Octree U-Net and latent 3D MAE, which achieves state-of-the-art results on all the benchmarks. Further, we introduce a simple occlusion masking strategy with full attention, which boosts the performance of a latent 3D MAE. + +- We create the first large-scale and diverse synthetic dataset using Objaverse [12] dataset for zero-shot multi-object scene completion, and provide a wide range of benchmark and analysis. + +# 2 Related Work + +3D reconstruction and completion. Reconstructing indoor scenes and objects from a noisy point cloud has been widely explored [1, 2, 4, 6, 9, 10, 23, 24, 34, 40, 42, 47, 48, 56, 65, 66]. Several works [4, 5, 43, 44, 47, 58, 60, 63, 71, 72, 74, 76] tackle more challenging shape completion tasks where large parts of a target is missing. 
While these methods achieve impressive results, they do not explicitly consider semantic information, which may limit their capability for accurate shape completion. Recent methods [31, 32, 34, 76] in Semantic Scene Completion (SSC) leverage semantic information via an RGB image. Nevertheless, the number of target categories is quite limited, restricting its utility for a broad range of applications in the real world. In addition, many methods adopt occupancy or SDF as an output representation, which necessitates post-processing such as the marching cubes [41] and sphere tracing to extract an explicit surface. As another direction, GeNVS [3], Zero-1-to-3 [39], and 3DiM [64] explore single-view 3D reconstruction via novel view synthesis. However, expensive test-time optimization is required. Recently, One-2-3-45 [38] and MCC [66] attempt to improve the generation speed, however, their runtime for multi-object scenes is still far from near + +real-time. Further, since these methods are object-centric, multiple objects in a single scene are not handled well due to the complicated geometric reasoning especially caused by occlusions by other objects. In this paper, we propose a general and near real-time framework for multi-object 3D scene completion in the wild using only an RGB-D image and foreground mask without expensive test-time optimization. + +Implicit 3D representations. Recently, various types of implicit 3D representation have become popular in 3D reconstruction and completion tasks. Early works [18,42,47] use a one-dimensional latent feature to represent a 3D shape as occupancy and SDF fields. Several works [31,48,58] employ voxels, groundplanes, and triplanes, demonstrating that the retention of geometric information using 3D CNNs enhances performance. Although the voxel representation typically performs well among these three, its cubic memory and computational costs make increasing resolution challenging. To mitigate this issue, sparse voxels [6,21,37,55,62] treat a 3D representation as a sparse set of structured points using the octree and hash table and perform convolutions only on non-empty voxels and its neighbors. Further, the high-resolution sparse voxel enables a direct prediction of a target surface. As another direction, [1,67,77] leverage point cloud. Nonetheless, an unstructured set of points can be non-uniformly distributed in the 3D space and requires running the k-NN algorithm at every operation. This aspect often renders point-based methods less appealing compared to the sparse voxel representation. Therefore, our method adopts an octree-based representation used in [62] for efficient training and direct surface prediction. + +Masked Autoencoders (MAE). Inspired by the success of ViTs [15, 73] and masked language modeling [14, 51], [22] demonstrates that masked autoencoders (MAE) with ViTs can learn powerful image representation by reconstructing masked images. To improve the efficiency and performance of MAE, ConvMAE [19] proposes a hybrid approach that performs masked autoencoding at the latent space of 2D CNN-based autoencoder network. Recently, VoxFormer [34] extends the MAE design to 3D for semantic scene completion using 3D deformable attention, and shows great improvement over previous works. However, it is not trivial to scale up the MAE architecture to a higher resolution voxel due to memory constraints. Motivated by ConvMAE [19] and OCNN [62], we propose an efficient OctMAE architecture using sparse 3D operations. 
+ +# 3 Proposed Method + +Given an RGB image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$ , depth map $\mathbf{D} \in \mathbb{R}^{H \times W}$ , and foreground mask $\mathbf{M} \in \mathbb{R}^{H \times W}$ containing all objects of interest, we aim to predict their complete 3D shapes quickly and accurately. Our framework first encodes an RGB image $\mathbf{I}$ with a pre-trained image encoder $E$ such as ResNeXt [69] and then lifts the resulting features up to 3D space using a depth map $\mathbf{D}$ and foreground mask + +$\mathbf{M}$ to acquire 3D point cloud features $\mathbf{F} \in \mathbb{R}^{N \times D}$ and its locations $\mathbf{P} \in \mathbb{R}^{N \times 3}$ (Section 3.1). Second, we convert the 3D features into an octree using the same algorithm used in [63] and pass it to OctMAE to predict a surface at each LoD (Section 3.2). The diagram of our method is visualized in Figure 2. + +# 3.1 Octree Feature Aggregation + +We adopt ResNeXt-50 [69] as an image encoder to obtain dense and robust image features $\mathbf{W} = E(\mathbf{I}) \in \mathbb{R}^{H \times W \times D}$ from an RGB image. The image features are unprojected into the 3D space using a depth image with $(\mathbf{F}, \mathbf{P}) = \pi^{-1}(\mathbf{W}, \mathbf{D}, \mathbf{M}, \mathbf{K})$ where a point cloud feature and its corresponding coordinates are represented as $\mathbf{F}$ and $\mathbf{P}$ . $\pi^{-1}$ unprojects the image features $\mathbf{W}$ to the camera coordinate system using a depth map $\mathbf{D}$ , foreground mask $\mathbf{M}$ , and an intrinsic matrix $\mathbf{K}$ . Next, we define an octree at the level of detail (LoD) of 9 $(512^3)$ with the grid and cell size being $1.28\mathrm{m}$ and $2.5\mathrm{mm}$ respectively, and use the point features to populate the voxel grid, averaging features when multiple points fall into the same voxel. Here, LoD- $h$ simply represents resolution of an octree. For instance, the voxel grid of LoD-9 has the maximum dimension of $2^9 = 512$ for each axis. An octree is represented as a set of 8 octants with features at non-empty regions; therefore, it is more memory-efficient than a dense voxel grid. The octree is centered around the z-axis in the camera coordinate system, and its front plane is aligned with the nearest point to the camera along with the z-axis. + +# 3.2 OctMAE: Octree Masked Autoencoders + +We design OctMAE which leverages Octree U-Net [62] and latent 3D MAE to achieve accurate and efficient zero-shot multi-object scene completion. Octree U-Net consists of multiple sparse 3D convolutional layers. While the Octree U-Net architecture can efficiently encode octree features to low resolution, only local regions are considered at each operation. On the contrary, 3D MAE can capture global object information which helps predict globally consistent 3D shapes. However, unlike an image, a dense voxel grid contains a prohibitive number of tokens even in the latent space, which makes it challenging to adopt an MAE architecture directly for 3D tasks. Recently, ConvMAE [19] proposed to leverage the advantages of both CNNs and MAE in 2D for efficient training. Nevertheless, a naïve extension of ConvMAE [19] to 3D also leads to prohibitive computational and memory costs. To address this issue, we propose a novel occlusion masking strategy and adopt 3D rotary embeddings, enabling efficient masked autoencoding in the latent space. + +Encoder architecture. 
# 3.2 OctMAE: Octree Masked Autoencoders

We design OctMAE, which leverages an Octree U-Net [62] and a latent 3D MAE to achieve accurate and efficient zero-shot multi-object scene completion. The Octree U-Net consists of multiple sparse 3D convolutional layers. While the Octree U-Net architecture can efficiently encode octree features to a low resolution, only local regions are considered at each operation. In contrast, a 3D MAE can capture global object information, which helps predict globally consistent 3D shapes. However, unlike an image, a dense voxel grid contains a prohibitive number of tokens even in the latent space, which makes it challenging to adopt an MAE architecture directly for 3D tasks. Recently, ConvMAE [19] proposed to leverage the advantages of both CNNs and MAE in 2D for efficient training. Nevertheless, a naïve extension of ConvMAE [19] to 3D also leads to prohibitive computational and memory costs. To address this issue, we propose a novel occlusion masking strategy and adopt 3D rotary embeddings, enabling efficient masked autoencoding in the latent space.

Encoder architecture. The encoder of the Octree U-Net [63] takes the octree feature at LoD-9 and computes a latent octree feature $\mathbf{F}_L \in \mathbb{R}^{N' \times D'}$ at LoD-5, where $N'$ is the number of non-empty voxels and $D'$ is the latent feature dimension. To incorporate global symmetry and object-scale information, which gives more cues about the completed shapes, we use $S$ layers of full self-attention Transformer blocks in the latent 3D MAE encoder. Since $N'$ is typically on the order of hundreds to thousands, we resort to memory-efficient attention algorithms [11, 49]. Ideally, learnable relative positional encodings [77] would be used to deal with the different alignments of point cloud features inside an octree. However, this requires computing pairwise relative positional encodings $N' \times N'$ times, which largely slows down training and is computationally impractical. Therefore, we use RoPE [59] to encode 3D axial information between voxels. Concretely, we embed position information with RoPE at every multi-head attention layer as

$$
\mathbf{R}_i = \operatorname{diag}\left(R(p_i^x), R(p_i^y), R(p_i^z), \mathbf{I}\right) \in \mathbb{R}^{D' \times D'}, \quad \mathbf{f}_i' = \mathbf{R}_i \mathbf{f}_i, \tag{1}
$$

where $\mathbf{f}_i \in \mathbb{R}^{D'}$ and $\mathbf{p}_i \in \mathbb{R}^3$ are the $i$-th octree feature and its coordinates. $R: \mathbb{R} \to \mathbb{R}^{[D'/3] \times [D'/3]}$ is a function that generates a rotation matrix given a normalized 1D axial coordinate. The detailed derivation of $\mathbf{R}$ can be found in the supplemental.

Occlusion masking. Next, we concatenate mask tokens $\mathbf{T} \in \mathbb{R}^{M \times D'}$ to the encoded latent octree feature, where $M$ is the number of mask tokens. Note that all mask tokens share identical learnable parameters. The key question is how to place them in 3D space. Previous methods [34] put mask tokens inside all the empty cells of a dense voxel grid; however, the visible regions extending from the camera to the input depth are unlikely to be occupied unless the depth map error is enormous. Further, this dense masking strategy forces the use of a local attention mechanism, such as the deformable 3D attention used in VoxFormer [34], due to its highly expensive memory and computational cost. To address this issue, we introduce an occlusion masking strategy in which the mask tokens $\mathbf{T}$ are placed only in occluded voxels. Concretely, we perform a depth test on every voxel of the voxel grid to determine whether it is positioned behind an observed object; mask tokens are assigned to their respective locations only after passing this test. The proposed occlusion masking strategy and efficient positional encoding enable our latent 3D MAE (Figure 4) to leverage full attention instead of local attention.

Decoder architecture. The masked octree feature is given to the latent 3D MAE decoder, which consists of $S$ layers of full cross-attention Transformer blocks with RoPE [59] to learn global reasoning, including over occluded regions. Finally, the decoder of the Octree U-Net takes the mixed latent octree feature of the Transformer decoder, $\mathbf{F}_{ML} \in \mathbb{R}^{(N' + M) \times D'}$, as input and upsamples features with skip connections. The decoded feature is passed to a two-layer MLP which estimates an occupancy at LoD-$h$.
In addition, normals and SDF values are predicted only at the final LoD. To avoid unnecessary computation, we prune grid cells predicted as empty with a threshold of 0.5 at every LoD, following [63]. + +# 3.3 Training Details and Loss Functions + +We use all surface points extracted through OpenVDB [45] during training. The loss function is defined as + +![](images/254af92f7fe9ab95a825ffe3eb45f3b6340a6ccf883620be06f5bcf4aa03be21.jpg) + +![](images/f1c799d803e65d3317e1324084f35b6c34721ff2fce61c499e62e183cb85b7ee.jpg) +Fig. 3: Example images of our synthetic dataset. We use BlenderProc [13] to acquire high-quality images under various and realistic illumination conditions. + +![](images/5b52f458e72f4c1db3b0873d06af7516389fcc807343932638f2300d8ef5194a.jpg) + +![](images/5d896ae26d44b8cf5da77e0d79f588c56a38f3659384b04219491c65b2024994.jpg) + +![](images/e558342093bcec274aa64a6636e511ec08d0df4361c422088eb1976edc65f090.jpg) + +![](images/564af605784a04fef25391624c35a2e6c9c1b5c5e50f40bee122a3548ba40320.jpg) +Fig.4: Overall architecture of Latent 3D MAE. + +![](images/fa58f1958eafc4f6616d405d511fb81f1b2cfe13ed3cbe664681f7a35857559a.jpg) + +Table 1: Dataset comparisons. We create the first large-scale and diverse 3D scene completion dataset for novel multiple objects using a subset of 3D models from Objverse dataset [12]. The number of categories is reported by using the LVIS categories, and $R^{\mathrm{LVIS}}(\%)$ represents a ratio of the number of the categories covered by the dataset. $\dagger$ denotes the number of objects with actual size. + +
| Dataset | Type | 3D Models | # Frames | # Objs | # Cats | $R^{\mathrm{LVIS}}$ (%) |
| --- | --- | --- | --- | --- | --- | --- |
| YCB-V [68] | Real |  | 133K | 21 | 5 | 0.4 |
| HB [28] | Real |  | 17K | 33 | 13 | 1.0 |
| HOPE [36] | Real |  | 2K | 28 | 3 | 0.3 |
| CO3D V2 [52] | Real |  | 6M | 40K | 50 | 4.2 |
| MegaPose [30] | Synthetic |  | 1M | 1K† | 17 | 0.9 |
| Ours | Synthetic |  | 1M | 12K | 601 | 50.0 |
$$
\mathcal{L} = \mathcal{L}_{nrm} + \mathcal{L}_{SDF} + \sum_{h \in \{5, 6, 7, 8, 9\}} \mathcal{L}_{occ}^{h}, \tag{2}
$$

where $\mathcal{L}_{nrm}$ and $\mathcal{L}_{SDF}$ are the averaged L2 losses on the predicted normals and SDF values, and $\mathcal{L}_{occ}^{h}$ is the mean binary cross-entropy at LoD-$h$.

# 4 Dataset

As shown in Table 1, existing datasets are limited in the diversity of object categories. Although the CO3D V2 dataset [52] contains data for 40k objects, the provided ground-truth 3D shapes are reconstructed from unposed multi-view images, so they tend to be highly noisy and parts of the objects are missing due to lack of visibility. To tackle this problem, we leverage Objaverse [12], a large-scale 1M 3D object dataset containing 46k objects with LVIS category annotations. To focus on completion of hand-held objects, we select 601 categories and ensure that the largest dimension of the objects in each category falls approximately within the range of $4\,\mathrm{cm}$ to $40\,\mathrm{cm}$. In addition, for high-quality rendering, we omit objects that lack textures, contain more than 10,000 vertices, or are articulated. To increase the number of objects, we add objects from Google Scanned Objects (GSO) [16], which results in 12,655 objects in total. We render 1M images of 25,000 scenes using physics-based rendering and positioning via BlenderProc [13] to simulate realistic scenes (Figure 3). For each image, we randomly choose a camera view such that at least one object is within the camera frame. We also generate 1,000 images using 250 withheld objects for evaluation.

# 5 Experimental Results

Implementation details. We train all the models for 2 epochs using the Adam [29] optimizer with a learning rate of 0.002 and a batch size of 16 on NVIDIA A100 GPUs. Note that the models are trained only on the synthetic dataset introduced in Section 4. In addition, the number of Transformer blocks $S$ and the feature dimensions $D$ and $D'$ are set to 3, 32, and 192, respectively. We use a pretrained ResNeXt-50 [69] as the image encoder for all experiments. The ground-truth occupancy, SDF, and normals are computed from meshes with OpenVDB [45]. During training, we dilate the ground-truth masks with a radius randomly selected from 1, 3, and 5 pixels to deal with segmentation errors around object edges. During evaluation, we use the ground-truth masks provided by the datasets.

Evaluation metrics. We report Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) to evaluate the quality of a completed surface; an illustrative sketch of these metrics is given after Table 2. For surface-based methods, we use the predicted surface directly for evaluation. For methods that predict occupancy, the marching cubes algorithm [41] is used to extract a surface, from which we uniformly sample 100,000 points so that the number of points is roughly equal to that of the surface-prediction methods. Chamfer distance is reported in mm.

Evaluation datasets. We evaluate the baselines and our model on one synthetic and three real-world datasets. For the synthetic dataset, we render 1,000 images using textured 3D scans from Objaverse [12], following the same procedure described in Section 4. We randomly choose 3 to 5 objects per image from the withheld Objaverse objects. Since these 3D scans are relatively more complex than the objects seen in the real-world datasets we use, they provide a good estimate of scene completion quality for complex objects.
For the real-world datasets, we use the YCB-Video [68], HOPE [36], and HomebrewedDB (HB) [28] datasets. YCB-Video consists of 21 everyday objects with diverse shapes. HOPE contains 28 simple household objects, mostly with rectangular and cylindrical everyday shapes, and its images are captured under various lighting conditions in indoor scenes using a RealSense D415 RGB-D camera. HB includes 33 objects (e.g., toy, household, and industrial objects), and its images are taken with a PrimeSense Carmine camera in lab-like environments.

Table 2: Quantitative evaluation of multi-object scene completion on our synthetic dataset (Ours) and the YCB-Video [68], HOPE [36], and HomebrewedDB [28] datasets. Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) are reported; Chamfer distance is in mm.
| Method | 3D Rep. | Ours (Syn.) CD↓ | F1↑ | NC↑ | YCB-Video [68] CD↓ | F1↑ | NC↑ | HB [28] CD↓ | F1↑ | NC↑ | HOPE [36] CD↓ | F1↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VoxFormer [34] | Dense | 44.54 | 0.382 | 0.653 | 30.32 | 0.438 | 0.641 | 34.84 | 0.366 | 0.608 | 47.75 | 0.323 |
| ShapeFormer [71] | Dense | 39.50 | 0.401 | 0.593 | 38.21 | 0.385 | 0.588 | 40.93 | 0.328 | 0.594 | 39.54 | 0.306 |
| MCC [66] | Implicit | 43.37 | 0.459 | 0.700 | 35.85 | 0.289 | 0.608 | 19.59 | 0.371 | 0.655 | 17.53 | 0.357 |
| ConvONet [48] | Dense | 23.68 | 0.541 | 0.710 | 32.87 | 0.458 | 0.649 | 26.71 | 0.504 | 0.643 | 20.95 | 0.581 |
| POCO [1] | Implicit | 21.11 | 0.634 | 0.753 | 15.45 | 0.587 | 0.699 | 13.17 | 0.624 | 0.709 | 13.20 | 0.602 |
| AICNet [31] | Dense | 15.64 | 0.573 | 0.741 | 12.26 | 0.545 | 0.702 | 11.87 | 0.557 | 0.674 | 11.40 | 0.564 |
| Minkowski [6] | Sparse | 11.47 | 0.746 | 0.802 | 8.04 | 0.761 | 0.717 | 8.81 | 0.728 | 0.719 | 8.56 | 0.734 |
| OCNN [63] | Sparse | 9.05 | 0.782 | 0.828 | 7.10 | 0.778 | 0.771 | 7.02 | 0.792 | 0.736 | 8.05 | 0.742 |
| Ours | Sparse | 6.48 | 0.839 | 0.848 | 6.40 | 0.800 | 0.785 | 6.14 | 0.819 | 0.770 | 6.97 | 0.803 |
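As referenced in the evaluation-metrics paragraph above, the following is a minimal sketch of how the reported point-based metrics can be computed from sampled surface points. It uses brute-force nearest neighbours for clarity and assumes coordinates in metres; the exact averaging conventions of the paper's evaluation code may differ.

```python
import numpy as np

def _nn_dist_and_idx(src, dst):
    """For each point in src (N, 3): distance to, and index of, its nearest point in dst (M, 3).
    Brute force (N x M) distances; fine for small point sets only."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return np.sqrt(d2[np.arange(len(src)), idx]), idx

def chamfer_f1_nc(pred_pts, gt_pts, pred_nrm=None, gt_nrm=None, tau=0.010):
    """Chamfer distance (mm), F1-Score@tau, and normal consistency between two point sets
    given in metres (unit-length normals optional)."""
    d_pg, i_pg = _nn_dist_and_idx(pred_pts, gt_pts)
    d_gp, i_gp = _nn_dist_and_idx(gt_pts, pred_pts)
    cd = 1000.0 * 0.5 * (d_pg.mean() + d_gp.mean())            # reported in mm
    precision = (d_pg < tau).mean()                            # tau = 10 mm
    recall = (d_gp < tau).mean()
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    nc = None
    if pred_nrm is not None and gt_nrm is not None:
        nc = 0.5 * (np.abs((pred_nrm * gt_nrm[i_pg]).sum(-1)).mean()
                    + np.abs((gt_nrm * pred_nrm[i_gp]).sum(-1)).mean())
    return cd, f1, nc
```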
Baselines. As discussed in Secs. 1 and 2, multi-object scene completion from a single RGB-D image remains relatively unexplored due to the lack of large-scale and diverse multi-object scene completion datasets. We carefully choose baseline architectures that can support this task with simple or no adaptation, focusing on three primary method types from related fields. Firstly, we select Semantic Scene Completion (SSC) methods [6, 31, 34, 63] that do not heavily rely on domain or categorical knowledge of indoor or outdoor scenes. Secondly, we opt for object shape completion methods [6, 63, 66, 71] that can be extended to multi-object scene completion without architectural modifications or prohibitive memory utilization. Thirdly, we consider voxel- or octree-based 3D reconstruction methods [1, 6, 48, 63] that predict a complete and plausible shape from noisy and sparse point cloud data. For dense voxel-based methods (e.g., AICNet [31], ConvONet [48], and VoxFormer [34]) and sparse voxel-based methods (e.g., MinkowskiNet [6], OCNN [63], and our method), we use LoD-6 and LoD-9 as the input resolution, respectively. All experiments are conducted using the original implementations provided by the authors, with a few simple modifications to adapt them to multi-object scene completion and ensure a fair comparison. For instance, we extend the baselines that take a point cloud as input by concatenating the image features to the point cloud features. For occupancy-based methods, although their output voxel grid resolution is LoD-6, we use trilinear interpolation to predict occupancy at LoD-7 [48]. For MinkowskiNet [6] and OCNN [62, 63], we use a U-Net architecture with a depth of 5 (LoD-9 to LoD-4). We discuss further details about the baseline architectures, their modifications, and hyperparameters in the supplemental.

# 5.1 Quantitative Results

Table 2 shows that our method outperforms the baselines on all the metrics and datasets. Although our model is only trained on synthetic data, it demonstrates strong generalizability to real-world datasets.

Table 3: Ablation study of positional encoding on our synthetic dataset. We compare no positional encoding, conditional positional encoding (CPE) [7], absolute positional encoding (APE) used in [34], relative positional encoding (RPE) [61], and RoPE [59].
| Type | CD↓ | F1↑ | NC↑ |
| --- | --- | --- | --- |
| w/o | 11.32 | 0.778 | 0.808 |
| CPE [7] | 9.91 | 0.785 | 0.811 |
| APE [34] | 8.61 | 0.782 | 0.825 |
| RPE [61] | 7.81 | 0.804 | 0.830 |
| RoPE [59] | 6.48 | 0.839 | 0.848 |
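For reference, a minimal sketch of the 3D axial rotary embedding compared in Table 3 and defined in Eq. (1): each coordinate axis rotates its own slice of the feature vector, and any remaining channels pass through unchanged (the identity block $\mathbf{I}$). The block sizes, frequencies, and channel interleaving below are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Rotate a feature slice x (d,) by angles derived from a scalar coordinate pos.
    d must be even; consecutive channel pairs are rotated like 2D vectors."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)        # (d/2,) rotation frequencies
    theta = pos * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(f, p, axis_dim):
    """Apply the structure of Eq. (1): rotate three equal slices of f (D',) with the
    normalized x/y/z coordinates in p (3,); channels beyond 3*axis_dim (the identity
    block) are left untouched."""
    out = f.copy()
    for a in range(3):                               # x, y, z axial blocks
        s = slice(a * axis_dim, (a + 1) * axis_dim)
        out[s] = rope_1d(f[s], p[a])
    return out
```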
+ +Table 4: Ablation study on 3D attention algorithms. The scores are reported on the HOPE dataset [36]. + +
| Method | Occ. Masking | CD↓ | F1↑ | Runtime↓ |
| --- | --- | --- | --- | --- |
| 3D DSA [34] |  | 12.14 | 0.703 | 93.3 |
| Neighbor. Attn. [77] |  | 9.26 | 0.727 | 130.8 |
| Octree Attn. [61] |  | 7.99 | 0.752 | 116.4 |
| Neighbor. Attn. [77] | ✓ | 8.81 | 0.759 | 111.9 |
| Octree Attn. [61] | ✓ | 7.54 | 0.772 | 105.3 |
| Full Self Attn. | ✓ | 7.21 | 0.785 | 86.2 |
| Full Cross Attn. | ✓ | 6.97 | 0.803 | 85.1 |
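To illustrate the occlusion masking strategy from Section 3.2 that this ablation evaluates, the sketch below performs the per-voxel depth test that decides where mask tokens are placed: a voxel receives a mask token only if it projects inside the image and lies behind the observed depth. The projection details and the margin parameter are illustrative assumptions.

```python
import numpy as np

def occluded_voxel_mask(voxel_centers, D, K, eps=0.0):
    """Depth-test candidate voxel centres (N, 3, camera frame, metres) against the
    observed depth map D (H, W) with intrinsics K (3, 3). Returns a boolean array:
    True where a voxel is occluded (behind an observed surface) and should
    therefore receive a mask token."""
    H, W = D.shape
    x, y, z = voxel_centers[:, 0], voxel_centers[:, 1], voxel_centers[:, 2]
    z_safe = np.where(z > 0, z, np.inf)              # avoid dividing by zero depth
    u = np.round(K[0, 0] * x / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * y / z_safe + K[1, 2]).astype(int)
    in_image = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    occluded = np.zeros(len(voxel_centers), dtype=bool)
    idx = np.nonzero(in_image)[0]
    d_obs = D[v[idx], u[idx]]
    # Occluded: the voxel sits behind the observed surface along its camera ray.
    occluded[idx] = (d_obs > 0) & (z[idx] > d_obs + eps)
    return occluded
```

Mask tokens $\mathbf{T}$ are then instantiated only at voxels where this test returns True, which keeps the token count low enough to use full attention.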
We also remark that our method exhibits robustness to the noise characteristics present in depth data captured by typical RGB-D cameras, despite being trained on noise-free depth data in simulation. The comparisons show that hierarchical structures and the latent 3D MAE are key to predicting the 3D shapes of unseen objects more accurately than the baselines. Unlike our method, VoxFormer [34] uses an MAE with 3D deformable attention, where only 8 neighbors of the reference points at the finest resolution are considered. Figure 8 also demonstrates that methods using a dense voxel grid or implicit representation fail to generalize to novel shapes. This implies that choosing the right network architecture is crucial for learning generalizable shape priors for zero-shot multi-object scene completion. Our method uses a U-Net architecture similar to those of MinkowskiNet [6] and OCNN [62], except that we use the latent 3D MAE at LoD-5 instead of making the network deeper. This indicates that the latent 3D MAE can better approximate the shape distribution of the training dataset by leveraging an attention mechanism to capture global 3D context. Table 7 also confirms that our method achieves the best scene completion quality when Chamfer distance is measured separately in visible and occluded regions.

Positional encoding. As shown in Table 3, we explore the effect of RoPE [59] on the validation set of our synthetic dataset. The first row shows that all the metrics drop significantly if no positional encoding is used. In addition, we test CPE [7], APE [34], and RPE [61] and obtain slightly better scores. CPE [7] is typically more effective than APE in tasks such as 3D instance/semantic segmentation and object detection, where a complete 3D point cloud is given. However, this result highlights the challenge of capturing position information from mask tokens, which are initialized with identical parameters. Our method employs RoPE [59] as a relative positional embedding. One important aspect of RoPE [59] is that it has no learnable parameters; despite this, it demonstrates superior performance compared to the other approaches. Although RoPE was originally proposed for natural language processing, our experiment reveals its effectiveness in multi-object 3D scene completion.

Table 5: Ablation study of the number of MAE layers on our synthetic dataset.
| # Layers | CD↓ | F1↑ | NC↑ | Runtime↓ |
| --- | --- | --- | --- | --- |
| 1 | 9.01 | 0.784 | 0.828 | 76.4 |
| 3 | 6.48 | 0.839 | 0.848 | 85.1 |
| 5 | 5.75 | 0.850 | 0.855 | 96.2 |
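As a structural illustration of the latent 3D MAE whose depth is ablated above, the sketch below shows the encode/concatenate/decode flow from Section 3.2 in PyTorch. The module layout, the use of `nn.MultiheadAttention`, and the omission of feed-forward layers, normalization, and the RoPE rotation of Eq. (1) are simplifications for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LatentMAE3DSketch(nn.Module):
    """Sketch: S self-attention blocks over non-empty latent voxels, a shared mask
    token appended at each occluded voxel, then S cross-attention blocks in which
    every token (visible + masked) attends to the encoded visible tokens."""

    def __init__(self, dim=192, num_layers=3, num_heads=6):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, dim))   # shared learnable token
        self.self_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_layers)])
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_layers)])

    def forward(self, f_latent, num_masked):
        # f_latent: (1, N', D') features of non-empty latent voxels at LoD-5.
        x = f_latent
        for attn in self.self_attn:                            # encoder: full self-attention
            x = x + attn(x, x, x, need_weights=False)[0]
        masks = self.mask_token.expand(num_masked, -1).unsqueeze(0)
        q = torch.cat([x, masks], dim=1)                       # (1, N' + M, D')
        for attn in self.cross_attn:                           # decoder: cross-attention to visible tokens
            q = q + attn(q, x, x, need_weights=False)[0]
        return q                                               # mixed latent feature F_ML
```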
+ +Table 6: Ablation study of U-Net architectures on HomebrewedDB dataset [28]. + +
| Architecture | CD↓ | F1↑ | NC↑ | Runtime↓ |
| --- | --- | --- | --- | --- |
| Mink. U-Net [6] | 7.26 | 0.788 | 0.743 | 83.8 |
| OctFormer [61] | 7.45 | 0.756 | 0.728 | 114.4 |
| Octree U-Net [62] | 6.14 | 0.819 | 0.770 | 85.1 |
+ +Table 7: Comparisons of the runtime (ms). For reference, we also show Chamfer distance of visible $\mathrm{CD}_{vis}$ and occluded $\mathrm{CD}_{occ}$ regions on our synthetic dataset. + +
| Method | 3D Rep. | Resolution | CD$_{vis}$↓ | CD$_{occ}$↓ | CD↓ | Runtime↓ |
| --- | --- | --- | --- | --- | --- | --- |
| VoxFormer [34] | Dense | $128^3$ | 18.25 | 66.32 | 44.54 | 79.5 |
| ShapeFormer [71] | Dense | $128^3$ | 14.61 | 63.33 | 39.50 | $1.8 \times 10^4$ |
| MCC [66] | Implicit | $128^3$ | 15.39 | 63.41 | 44.37 | $9.1 \times 10^3$ |
| ConvONet [48] | Dense | $128^3$ | 17.09 | 34.09 | 23.68 | 48.4 |
| POCO [1] | Implicit | $128^3$ | 10.37 | 31.55 | 21.11 | 758.8 |
| AICNet [31] | Dense | $128^3$ | 9.98 | 21.43 | 15.64 | 24.2 |
| Minkowski [6] | Sparse | $512^3$ | 7.12 | 15.44 | 11.47 | 78.5 |
| OCNN [63] | Sparse | $512^3$ | 3.87 | 12.16 | 9.05 | 80.1 |
| Ours | Sparse | $512^3$ | 3.29 | 9.40 | 6.48 | 85.1 |
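The low runtimes of the sparse methods in Table 7 rely in part on hierarchical pruning: cells predicted as empty (occupancy below 0.5) are dropped at every LoD before the octree is refined further (Section 3.2). A minimal sketch of that coarse-to-fine loop is given below; the `predict_occupancy` interface and the subdivision routine are illustrative assumptions.

```python
import numpy as np

def subdivide(cells):
    """Split each integer cell index (N, 3) at LoD-h into its 8 children at LoD-(h+1)."""
    offsets = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return (cells[:, None, :] * 2 + offsets[None, :, :]).reshape(-1, 3)

def hierarchical_prediction(predict_occupancy, coarse_cells, lod_start=5, lod_end=9, thr=0.5):
    """Coarse-to-fine surface prediction with pruning: at every LoD, keep only the
    cells whose predicted occupancy exceeds `thr`, then refine the survivors.
    `predict_occupancy(cells, lod) -> (N,) probabilities` is an assumed interface."""
    cells = coarse_cells
    for lod in range(lod_start, lod_end + 1):
        occ = predict_occupancy(cells, lod)
        cells = cells[occ > thr]              # prune cells predicted as empty
        if lod < lod_end:
            cells = subdivide(cells)          # refine only the surviving cells
    return cells                              # occupied cells at the finest LoD
```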
3D Attention algorithms. Table 4 reveals that occlusion masking yields better runtime and metrics than dense masking. Furthermore, our experiments suggest that full attention and Octree attention, both characterized by their wider receptive fields, are more effective than local attention algorithms such as 3D deformable self-attention (3D DSA) [34] and neighborhood attention [77].

Number of layers in 3D latent MAE. We further explore the design of the 3D latent MAE in Table 5. Increasing the number of layers in the 3D latent MAE improves the scene completion quality while making the runtime slower. Consequently, we select 3 layers as a good trade-off between accuracy and runtime.

U-Net architectures. In Table 6, we investigate U-Net architectures. The key difference of Minkowski U-Net [6] is its use of a sparse tensor rather than an octree as the underlying data structure; in our experiments it performs slightly worse than Octree U-Net [62]. OctFormer [61] proposes an octree-based window attention mechanism using the 3D Z-order curve to support a much larger kernel size than Octree U-Net. In general, a wider effective receptive field helps achieve better performance. Nonetheless, OctFormer achieves a Chamfer distance and F1 score of 7.45 and 0.756, which is worse than Octree U-Net by 1.31 and 0.063, respectively. This indicates that OctFormer's attention mechanism is less effective than the Octree U-Net architecture, especially in the presence of the latent 3D MAE, which plays a similar role in the latent space.

![](images/0ea8058eb04e3267fa43da6898c2601022d7752e72059663961b789bb480b805.jpg)
Fig. 5: Scaling of the metrics with the number of objects in the training dataset. We conduct the experiments by changing the ratio of the number of objects to $1\%$, $5\%$, $10\%$, $20\%$, $40\%$, $60\%$, $80\%$, and $100\%$.

![](images/2474a71bdc02fc05ba02541364e6fc70303c573314fefffa795613c970d1b654.jpg)

![](images/5dc8a3b8237d0a94607e2e369779e89117c75037db80efafaf8eae870110fd99.jpg)
Ground-Truth

![](images/34f195f4418fedd1ea815c7123ea9b562466bae0fa5d8af4243f0a7b47d5751f.jpg)

![](images/2f7e43ce1d961af861f0345d12344dca8a5858d6d94221da8fb1c74fa5252874.jpg)
OCNN

![](images/e507df4a2516c9cd1bd320da647d1ffef8ac43b5b3c53450392434553f7b50fb.jpg)

![](images/dd06103c1c8d8a83d0c8f8614d04606ed9db886a1cab32487d72f0d5f67cd520.jpg)
Ours
Fig. 6: Qualitative comparison of OCNN [62] and our method. Our proposed latent 3D MAE helps predict globally consistent scene completions.

Runtime analysis. Table 7 shows the runtime performance of the baselines and our method. For a fair comparison, we run inference over 50 samples of the HOPE dataset and report the average time. For occupancy-based methods, we predict occupancy on object surfaces and in occluded regions. Due to the memory-intensive nature of MCC [66]'s Transformer architecture, we run its inference in multiple passes with a maximum chunk size of 10,000 points. Our experiments demonstrate that the implicit 3D representations used in POCO [1] and MCC [66] become slower as the voxel grid resolution increases. Further, the autoregressive Transformer adopted in ShapeFormer [71] greatly increases the runtime. Conversely, the methods that leverage sparse voxel grids (e.g., MinkowskiNet [6], OCNN [63], and ours) achieve much faster runtimes thanks to efficient sparse 3D convolutions and hierarchical pruning of predicted surfaces.
Our method offers runtimes comparable to the fastest method, while implementing attention operations over the scene via latent 3D MAE, and achieving superior reconstruction. + +Dataset scale analysis. To assess the importance of the large-scale 3D scene completion datasets, we train our model on splits of increasing sizes which contain $1\%$ , $5\%$ , $10\%$ , $20\%$ , $40\%$ , $60\%$ , $80\%$ , and $100\%$ of the total number of the objects in our dataset. We report metrics on the test split of our dataset. Section 5.1 shows that all the metrics have a strong correlation with respect to the number of objects. This could imply that the model benefits significantly from increased data diversity and volume, enhancing its ability to understand and complete 3D shapes. We believe that this analysis is crucial for understanding the relationship between data quantity and model performance. + +# 5.2 Qualitative Results + +Figure 7 shows the qualitative results of our method on both of the synthetic and real-world datasets from three different views. Unlike the synthetic dataset, + +![](images/68a83c039992abb04eb3d78f674a28b9fccb0af667d968a9aa73bdb28b91f872.jpg) + +![](images/01ce50157c218acb1301490615ef0f915e231d06642dd389ebbd82c82e0c256c.jpg) + +![](images/036481e9cd8effdea48a8e68f7cfce44b696d491db46f578024ae6a3a4d5d2f1.jpg) + +![](images/d858976c0c5a57048b024d4b7768de082890337c20df17bba6ad4ae2752a03e7.jpg) +RGB-D Image + +![](images/c7200a6ea34c51c535371f63cda2879f9c517d2d04c2230d90062b23965c2403.jpg) + +![](images/ca9869829b6773ae55b535704b2635b1ebf296b0fdec90f60156f634afaddbc0.jpg) + +![](images/518b9993aa8ebe75527c8f9494e8a80d0eafcd45d0fe6003bb987e57380f3e04.jpg) + +![](images/6c51f99cb97c2014c23d02232645bb2e6ffb539c68c5be0b49b497bdd331d377.jpg) +View 1 + +![](images/3d72b948acf1587fd75a76a692b9b25db82fd06749d399a8b7e427e1af9e7c19.jpg) + +![](images/a3b4ca23998ce6ef68a44ab08bd540b72255d22d63213855003866575149a511.jpg) + +![](images/2223b06631d7580a1c5de299cd65002bc095b84453dbf4f6e404955be2dce6d0.jpg) + +![](images/d55eee2806625fa3fbeb381cd4bb873b4824c3ae7e186fc9ffb5db988a8fff80.jpg) +View 2 + +![](images/51e65fde5c3cbb19d75241cfaec188b9f0d6c894f3bb7c9cd91bd36b2e84d9b4.jpg) + +![](images/03f4fdf2e8f710addeeb3cc5c4924f80f2e3f08d89f0017e6554d39dcde3990c.jpg) + +![](images/97116a6ba9463180012fefc9680cae23c72aefcb257436054407d5fb3f49e5f7.jpg) + +![](images/5e8d8d94e6bee9c77129d0820f2ef01e17968d3179342ace79692f4f3c0cdd02.jpg) +View 3 + +![](images/4b7f2cead40e9f68e4f060e5ff915c70b00a747aec0c0927cb960e492057ace5.jpg) +# + +![](images/b692ea033b9a7e91ee6399906c496aadf61fd1eb036753aef081ffcb48f04493.jpg) + +![](images/aa803c1b8322aa9397dc510957594aea7516abd3055ba4f28eed97fb8089efb6.jpg) + +![](images/5896208645b37073aab3efe314c3f400ac8e6b036116d03a0fdbb97651b7af0b.jpg) +RGB-D Image + +![](images/fad6cc8d5f1c5700fd622d588e2b614926cbe42cb1b623c8d13174cb42d62cbd.jpg) +. 
+ +![](images/0b0ad484f0577e985976a8e235b65620392b5c6d7eacfd0440e964eddc7a4a7e.jpg) + +![](images/8953cd4824f494189fcacdb318a2ff28443f067bcb257672b7e705f311e2e278.jpg) + +![](images/6ed528fcf3154bfea2afaaf73f86329db5ee40bba03c583f6908170f3eee21d8.jpg) +View 1 + +![](images/9deb22b8382a47f89f61fc48da8a6635bc4ccfbf8b0fc1884869c86b6bfb9d1a.jpg) + +![](images/0aeb28b1ba28af13db394d47e3a4dfc0205173c011559e05bbdf63953a621248.jpg) + +![](images/86a2388cacd2e8b790626a9c2f4067cd7010e98bb4b617ccea2dc1eaa1bb8da2.jpg) + +![](images/0333414c5c0ff42decbae0b3ed611615caa88e20cf5585a7bde0a9db8ae22618.jpg) +View 2 + +tation methods to obtain instance-level completed shapes. Third, our method does not handle uncertainty of surface prediction explicitly. In future work, we plan to extend our method to model uncertainty to improve the scene completion quality and diversity. + +![](images/46bb813236a96528212b701e87d023be165550cc2ab3ec2f57f5f3c7ac365784.jpg) +Fig. 8: Comparisons on HomebrewedDB dataset (Top), and HOPE (Bottom) datasets. For better visibility, we show the generated and ground truth shapes. The top and bottom rows show an image from near camera and back views respectively. Compared to the other methods, our method predicts accurate and consistent shapes on a challenging scene completion task for novel objects. + +# Acknowledgment + +We thank Zubair Irshad and Jenny Nan for valuable feedback and comments. + +This research is supported by Toyota Research Institute. + +# References + +1. Boulch, A., Marlet, R.: POCO: Point Convolution for Surface Reconstruction. In: CVPR (2022) +2. Bozic, A., Palafox, P., Thies, J., Dai, A., Nießner, M.: TransformerFusion: Monocular rgb scene reconstruction using transformers. In: NeurIPS (2021) +3. Chan, E.R., Nagano, K., Chan, M.A., Bergman, A.W., Park, J.J., Levy, A., Aittala, M., Mello, S.D., Karras, T., Wetzstein, G.: GeNVS: Generative novel view synthesis with 3D-aware diffusion models. In: CoRR (2023) +4. Chen, H.X., Huang, J., Mu, T.J., Hu, S.M.: CIRCLE: Convolutional Implicit Reconstruction And Completion For Large-Scale Indoor Scene. In: ECCV (2022) +5. Cheng, Y.C., Lee, H.Y., Tulyakov, S., Schwing, A.G., Gui, L.Y.: SDFusion: Multimodal 3d shape completion, reconstruction, and generation. In: CVPR (2023) +6. Choy, C., Gwak, J., Savarese, S.: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In: CVPR (2019) +7. Chu, X., Tian, Z., Zhang, B., Wang, X., Shen, C.: Conditional Positional Encodings for Vision Transformers. In: ICLR (2023) +8. Computer, T.: RedPajama: an Open Dataset for Training Large Language Models (2023) +9. Dai, A., Diller, C., Nießner, M.: SG-NN: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. In: CVPR (2020) +10. Dai, A., Ritchie, D., Bokeloh, M., Reed, S., Sturm, J., Nießner, M.: ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In: CVPR (2018) +1. Dao, T.: FlashAttention-2: Faster attention with better parallelism and work partitioning (2023) +2. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A Universe of Annotated 3D Objects. CVPR (2022) +3. Denninger, M., Winkelbauer, D., Sundermeyer, M., Boerdijk, W., Knauer, M., Strobl, K.H., Humt, M., Triebel, R.: BlenderProc2: A Procedural Pipeline for Photorealistic Rendering. Journal of Open Source Software (2023) +4. 
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: NAACL (2019) +5. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR (2021) +6. Downs, L., Francis, A., Koenig, N., Kinman, B., Hickman, R., Reymann, K., McHugh, T.B., Vanhoucke, V.: Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items. In: ICRA (2022) +7. Duan, Y., Zhu, H., Wang, H., Yi, L., Nevatia, R., Guibas, L.J.: Curriculum deepsdf. In: ECCV (2020) + +18. Dupont, E., Kim, H., Eslami, S.M.A., Rezende, D.J., Rosenbaum, D.: From data to functa: Your data point is a function and you can treat it like one. In: ICML (2022) +19. Gao, P., Ma, T., Li, H., Dai, J., Qiao, Y.: ConvMAE: Masked Convolution Meets Masked Autoencoders. NeurIPS (2022) +20. Goldblum, M., Finzi, M., Rowan, K., Wilson, A.G.: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. CoRR (2023) +21. Graham, B., Engelcke, M., van der Maaten, L.: 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. CVPR (2018) +22. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: CVPR (2022) +23. Hou, J., Dai, A., Nießner, M.: RevealNet: Seeing Behind Objects in RGB-D Scans. In: CVPR (2020) +24. Huang, J., Gojcic, Z., Atzmon, M., Litany, O., Fidler, S., Williams, F.: Neural Kernel Surface Reconstruction. In: CVPR (2023) +25. Irshad, M.Z., Zakharov, S., Ambrus, R., Kollar, T., Kira, Z., Gaidon, A.: Shapo: Implicit representations for multi-object shape, appearance, and pose optimization. In: ECCV (2022) +26. Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.: Real-time Perception meets Reactive Motion Generation. RA-L (2018) +27. Karaman, S., Frazzoli, E.: Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Rob. Res. (2011) +28. Kaskman, R., Zakharov, S., Shugurov, I., Ilic, S.: HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects. ICCVW (2019) +29. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015) +30. Labbé, Y., Manuelli, L., Mousavian, A., Tyree, S., Birchfield, S., Tremblay, J., Carpentier, J., Aubry, M., Fox, D., Sivic, J.: MegaPose: 6d pose estimation of novel objects via render & compare. In: CoRL (2022) +31. Li, J., Han, K., Wang, P., Liu, Y., Yuan, X.: Anisotropic Convolutional Networks for 3D Semantic Scene Completion. In: CVPR (2020) +32. Li, J., Liu, Y., Gong, D., Shi, Q., Yuan, X., Zhao, C., Reid, I.: RGBD Based Dimensional Decomposition Residual Network for 3D Semantic Scene Completion. In: CVPR. pp. 7693-7702 (June 2019) +33. Li*, L.H., Zhang*, P., Zhang*, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.N., Chang, K.W., Gao, J.: Grounded language-image pretraining. In: CVPR (2022) +34. Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J.M., Fidler, S., Feng, C., Anandkumar, A.: VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. In: CVPR (2023) +35. Liang, F., Wu, B., Dai, X., Li, K., Zhao, Y., Zhang, H., Zhang, P., Vajda, P., Marculescu, D.: Open-vocabulary semantic segmentation with mask-adapted clip. In: CVPR (2023) +36. 
Lin, Y., Tremblay, J., Tyree, S., Vela, P.A., Birchfield, S.: Multi-view Fusion for Multi-level Robotic Scene Understanding. In: IROS (2021) +37. Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural Sparse Voxel Fields. NeurIPS (2020) +38. Liu, M., Xu, C., Jin, H., Chen, L., Xu, Z., Su, H., et al.: One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. NeurIPS (2023) + +39. Liu, R., Wu, R., Hoorick, B.V., Tokmakov, P., Zakharov, S., Vondrick, C.: Zero-1-to-3: Zero-shot One Image to 3D Object. In: CVPR (2023) +40. Liu, Z., Feng, Y., Black, M.J., Nowrouzezahrai, D., Paull, L., Liu, W.: MeshDiffusion: Score-based Generative 3D Mesh Modeling. In: ICLR (2023) +41. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. SIGGRAPH (1987) +42. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy Networks: Learning 3D Reconstruction in Function Space. In: CVPR (2019) +43. Mittal, P., Cheng, Y.C., Singh, M., Tulsiani, S.: AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. In: CVPR (2022) +44. Mohammadi, S.S., Duarte, N.F., Dimou, D., Wang, Y., Taiana, M., Morerio, P., Dehban, A., Moreno, P., Bernardino, A., Del Bue, A., Santos-Victor, J.: 3DSGrasp: 3D Shape-Completion for Robotic Grasp. In: ICRA (2023) +45. Museth, K.: VDB: High-resolution sparse volumes with dynamic topology (2013) +46. Okumura, K., Défago, X.: Quick Multi-Robot Motion Planning by Combining Sampling and Search. In: IJCAI (2023) +47. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In: CVPR (2019) +48. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional Occupancy Networks. In: ECCV (2020) +49. Rabe, M.N., Staats, C.: Self-attention Does Not Need $O(n^{2})$ Memory (2021) +50. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) +51. Radford, A., Narasimhan, K.: Improving Language Understanding by Generative Pre-Training (2018) +52. Reizenstein, J., Shapovalov, R., Henzler, P., Sbordone, L., Labatut, P., Novotny, D.: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction. In: ICCV (2021) +53. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-Resolution Image Synthesis with Latent Diffusion Models (2021) +54. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortzman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS (2022) +55. Shao, T., Yang, Y., Weng, Y., Hou, Q., Zhou, K.: H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis. TVCG (2020) +56. Shen, T., Gao, J., Yin, K., Liu, M.Y., Fidler, S.: Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. In: NeurIPS (2021) +57. Shi, Z., Zhou, X., Qiu, X., Zhu, X.: Improving image captioning with better use of captions. CoRR (2020) +58. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic Scene Completion from a Single Depth Image. CVPR (2017) +59. Su, J., Lu, Y., Pan, S., Wen, B., Liu, Y.: RoFormer: Enhanced Transformer with Rotary Position Embedding. In: ICLR (2020) +60. 
Varley, J., DeChant, C., Richardson, A., Ruales, J., Allen, P.: Shape completion enabled robotic grasping. In: IROS (2017) +61. Wang, P.S.: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH (2023) +62. Wang, P.S., Liu, Y., Guo, Y.X., Sun, C.Y., Tong, X.: O-CNN: Octree-Based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH (2017) + +63. Wang, P.S., Liu, Y., Tong, X.: Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion. In: CVPRW (2020) +64. Watson, D., Chan, W., Martin-Brualla, R., Ho, J., Tagliasacchi, A., Norouzi, M.: Novel View Synthesis with Diffusion Models. CoRR (2022) +65. Williams, F., Gojcic, Z., Khamis, S., Zorin, D., Bruna, J., Fidler, S., Litany, O.: Neural Fields as Learnable Kernels for 3D Reconstruction. In: CVPR (2022) +66. Wu, C.Y., Johnson, J., Malik, J., Feichtenhofer, C., Gkioxari, G.: Multiview Compressive Coding for 3D Reconstruction. In: CVPR (2023) +67. Wu, X., Lao, Y., Jiang, L., Liu, X., Zhao, H.: Point transformer V2: Grouped Vector Attention and Partition-based Pooling. In: NeurIPS (2022) +68. Xiang, Y., Schmidt, T., Narayanan, V., Fox, D.: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018) +69. Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K.: Aggregated Residual Transformations for Deep Neural Networks. CVPR (2017) +70. Xu, J., Liu, S., Vahdat, A., Byeon, W., Wang, X., De Mello, S.: ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models. CVPR (2023) +71. Yan, X., Lin, L., Mitra, N.J., Lischinski, D., Cohen-Or, D., Huang, H.: Shape-Former: Transformer-based Shape Completion via Sparse Representation. In: CVPR (2022) +72. Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., Zhou, J.: PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers. In: ICCV (2021) +73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. CVPR (2022) +74. Zhang, D., Choi, C., Park, I., Kim, Y.M.: Probabilistic Implicit Scene Completion. In: ICLR (2022) +75. Zhang, H., Zhang, P., Hu, X., Chen, Y.C., Li, L.H., Dai, X., Wang, L., Yuan, L., Hwang, J.N., Gao, J.: GLIPv2: Unifying Localization and Vision-Language Understanding. CoRR (2022) +76. Zhang, P., Liu, W., Lei, Y., Lu, H., Yang, X.: Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion. In: ICCV (2019) +77. Zhao, H., Jiang, L., Jia, J., Torr, P.H., Koltun, V.: Point transformer. In: ICCV (2021) +78. Zhu, Y., Tian, Y., Mexatas, D., Dollar, P.: Semantic Amodal Segmentation. 
In: CVPR (2017) \ No newline at end of file diff --git a/2024/Zero-Shot Multi-Object Scene Completion/images.zip b/2024/Zero-Shot Multi-Object Scene Completion/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fe73c40801cbaa0d5692a4d36ff68c3142d38e5b --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59be396b88927d9afc2ae6de79729c162f24eea3d8ce188532c20a71eb1472e5 +size 648718 diff --git a/2024/Zero-Shot Multi-Object Scene Completion/layout.json b/2024/Zero-Shot Multi-Object Scene Completion/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f43e0c070a0b74ac523581533883e40b7bdddc1c --- /dev/null +++ b/2024/Zero-Shot Multi-Object Scene Completion/layout.json @@ -0,0 +1,11778 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 159, + 112, + 454, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 112, + 454, + 129 + ], + "spans": [ + { + "bbox": [ + 159, + 112, + 454, + 129 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "spans": [ + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": "Shun Iwase" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", Katherine Liu" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", Vitor Guizilini" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", Adrien Gaidon" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", Kris Kitani" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{1,\\star}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", Rares Ambrus" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{2,\\star}" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "text", + "content": ", and Sergey Zakharov" + }, + { + "bbox": [ + 164, + 150, + 449, + 175 + ], + "type": "inline_equation", + "content": "^{2,\\star}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 246, + 184, + 367, + 206 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 246, + 184, + 367, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 184, + 367, + 196 + ], + "spans": [ + { + "bbox": [ + 246, + 184, + 367, + 196 + ], + "type": "text", + "content": "1 Carnegie Mellon University" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 249, + 196, + 364, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 249, + 196, + 364, + 206 + ], + "spans": [ + { + "bbox": [ + 249, + 196, + 364, + 206 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 249, + 196, + 364, + 206 + ], + "type": 
"text", + "content": " Toyota Research Institute" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 133, + 234, + 187, + 277 + ], + "blocks": [ + { + "bbox": [ + 133, + 234, + 187, + 277 + ], + "lines": [ + { + "bbox": [ + 133, + 234, + 187, + 277 + ], + "spans": [ + { + "bbox": [ + 133, + 234, + 187, + 277 + ], + "type": "image", + "image_path": "94a420cf1808dd372b6b02b11ac2ae0db122c5606ced637ce65257c7c364fd75.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 188, + 243, + 196, + 270 + ], + "lines": [ + { + "bbox": [ + 188, + 243, + 196, + 270 + ], + "spans": [ + { + "bbox": [ + 188, + 243, + 196, + 270 + ], + "type": "text", + "content": "Fronr View" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 198, + 235, + 253, + 279 + ], + "blocks": [ + { + "bbox": [ + 198, + 235, + 253, + 279 + ], + "lines": [ + { + "bbox": [ + 198, + 235, + 253, + 279 + ], + "spans": [ + { + "bbox": [ + 198, + 235, + 253, + 279 + ], + "type": "image", + "image_path": "27744617545a63b0e9081ea48a5034b506831f788138e5d5ecbbbd5303bdda21.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 253, + 236, + 306, + 278 + ], + "blocks": [ + { + "bbox": [ + 253, + 236, + 306, + 278 + ], + "lines": [ + { + "bbox": [ + 253, + 236, + 306, + 278 + ], + "spans": [ + { + "bbox": [ + 253, + 236, + 306, + 278 + ], + "type": "image", + "image_path": "c6ec8197a39ff9c9034c6c3ac15898c7aebbbcfd5cab7262a1c76cf46c44e041.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 310, + 234, + 362, + 276 + ], + "blocks": [ + { + "bbox": [ + 310, + 234, + 362, + 276 + ], + "lines": [ + { + "bbox": [ + 310, + 234, + 362, + 276 + ], + "spans": [ + { + "bbox": [ + 310, + 234, + 362, + 276 + ], + "type": "image", + "image_path": "b5f7ca3cdd94a16ad1d9bd92d76dd19e8baf495848a9d399e6b2b7d0934d137a.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 364, + 237, + 421, + 277 + ], + "blocks": [ + { + "bbox": [ + 364, + 237, + 421, + 277 + ], + "lines": [ + { + "bbox": [ + 364, + 237, + 421, + 277 + ], + "spans": [ + { + "bbox": [ + 364, + 237, + 421, + 277 + ], + "type": "image", + "image_path": "b8e9ef344ae3431f3ca52200e140a0872b2d61ad27673b321a0d0255be896c79.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 424, + 236, + 477, + 277 + ], + "blocks": [ + { + "bbox": [ + 424, + 236, + 477, + 277 + ], + "lines": [ + { + "bbox": [ + 424, + 236, + 477, + 277 + ], + "spans": [ + { + "bbox": [ + 424, + 236, + 477, + 277 + ], + "type": "image", + "image_path": "1e772d8508331de254e6ab1bcea06516f9f1a9e016958fd814dc1e11af54f086.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 133, + 281, + 187, + 323 + ], + "blocks": [ + { + "bbox": [ + 133, + 281, + 187, + 323 + ], + "lines": [ + { + "bbox": [ + 133, + 281, + 187, + 323 + ], + "spans": [ + { + "bbox": [ + 133, + 281, + 187, + 323 + ], + "type": "image", + "image_path": "cd8ec304519fbb25a27eabfdf46d78a28be15dacfdb2c29a76e895bd927ca750.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + 
{ + "bbox": [ + 139, + 323, + 181, + 332 + ], + "lines": [ + { + "bbox": [ + 139, + 323, + 181, + 332 + ], + "spans": [ + { + "bbox": [ + 139, + 323, + 181, + 332 + ], + "type": "text", + "content": "RGB-D Image" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 132, + 346, + 482, + 401 + ], + "lines": [ + { + "bbox": [ + 132, + 346, + 482, + 401 + ], + "spans": [ + { + "bbox": [ + 132, + 346, + 482, + 401 + ], + "type": "text", + "content": "Fig. 1: Given an RGB-D image and the foreground mask of multiple objects not seen during training, our method predicts their complete 3D shapes quickly and accurately, including occluded areas. (Left) Synthetic image results. (Right) Zero-shot generalization to a real-world image of household objects with noisy depth data. Our 3D results are rotated with respect to the input to highlight completions in occluded regions." + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 189, + 287, + 253, + 322 + ], + "blocks": [ + { + "bbox": [ + 189, + 287, + 253, + 322 + ], + "lines": [ + { + "bbox": [ + 189, + 287, + 253, + 322 + ], + "spans": [ + { + "bbox": [ + 189, + 287, + 253, + 322 + ], + "type": "image", + "image_path": "53ebc4921a5773cb85a71908e98f9b4b11808f0b94329d211b00b476febebabe.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 253, + 286, + 306, + 323 + ], + "blocks": [ + { + "bbox": [ + 253, + 286, + 306, + 323 + ], + "lines": [ + { + "bbox": [ + 253, + 286, + 306, + 323 + ], + "spans": [ + { + "bbox": [ + 253, + 286, + 306, + 323 + ], + "type": "image", + "image_path": "8db3e66a586be6def3c098d4d27244b7af8ea77e82d963a1ce7a617aae02824f.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 310, + 281, + 362, + 323 + ], + "blocks": [ + { + "bbox": [ + 310, + 281, + 362, + 323 + ], + "lines": [ + { + "bbox": [ + 310, + 281, + 362, + 323 + ], + "spans": [ + { + "bbox": [ + 310, + 281, + 362, + 323 + ], + "type": "image", + "image_path": "90e5f82e917c60fb23766ccfe9f170c45a30748dbc62fb797b09d94bec19c1f8.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 364, + 289, + 370, + 313 + ], + "lines": [ + { + "bbox": [ + 364, + 289, + 370, + 313 + ], + "spans": [ + { + "bbox": [ + 364, + 289, + 370, + 313 + ], + "type": "text", + "content": "Bae" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 375, + 280, + 415, + 323 + ], + "blocks": [ + { + "bbox": [ + 375, + 280, + 415, + 323 + ], + "lines": [ + { + "bbox": [ + 375, + 280, + 415, + 323 + ], + "spans": [ + { + "bbox": [ + 375, + 280, + 415, + 323 + ], + "type": "image", + "image_path": "7da9167f0d2c15e995ac8f5dc7b1e35cfbc96f18458831f2f64a9768a3608345.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 367, + 324, + 425, + 332 + ], + "lines": [ + { + "bbox": [ + 367, + 324, + 425, + 332 + ], + "spans": [ + { + "bbox": [ + 367, + 324, + 425, + 332 + ], + "type": "text", + "content": "Completed 3D Shape" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 429, + 279, + 470, + 323 + ], + "blocks": [ + { + "bbox": [ + 429, + 279, + 470, + 323 + ], + 
"lines": [ + { + "bbox": [ + 429, + 279, + 470, + 323 + ], + "spans": [ + { + "bbox": [ + 429, + 279, + 470, + 323 + ], + "type": "image", + "image_path": "7bf33584df2eb6171c35fc5acc6a5c34a3690bdb079222da6ffd289860951336.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 435, + 323, + 474, + 332 + ], + "lines": [ + { + "bbox": [ + 435, + 323, + 474, + 332 + ], + "spans": [ + { + "bbox": [ + 435, + 323, + 474, + 332 + ], + "type": "text", + "content": "Ground-Truth" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "bbox": [ + 160, + 424, + 453, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 424, + 453, + 622 + ], + "spans": [ + { + "bbox": [ + 160, + 424, + 453, + 622 + ], + "type": "text", + "content": "Abstract. We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image. Despite notable advancements in single-object 3D shape completion, high-quality reconstructions in highly cluttered real-world multi-object scenes remains a challenge. To address this issue, we propose OctMAE, an architecture that leverages an Octree U-Net and a latent 3D MAE to achieve high-quality and near real-time multi-object scene completion through both local and global geometric reasoning. Because a naive 3D MAE can be computationally intractable and memory intensive even in the latent space, we introduce a novel occlusion masking strategy and adopt 3D rotary embeddings, which significantly improve the runtime and scene completion quality. To generalize to a wide range of objects in diverse scenes, we create a large-scale photorealistic dataset, featuring a diverse set of 12K 3D object models from the Objaverse dataset that are rendered in multi-object scenes with physics-based positioning. Our method outperforms the current state-of-the-art on both synthetic and real-world datasets and demonstrates a strong zero-shot capability. https://sh8.io/#/oct_mae" + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 135, + 654, + 206, + 665 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 135, + 654, + 206, + 665 + ], + "spans": [ + { + "bbox": [ + 135, + 654, + 206, + 665 + ], + "type": "text", + "content": "* Equal advising." + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 138, + 117, + 479, + 196 + ], + "blocks": [ + { + "bbox": [ + 138, + 117, + 479, + 196 + ], + "lines": [ + { + "bbox": [ + 138, + 117, + 479, + 196 + ], + "spans": [ + { + "bbox": [ + 138, + 117, + 479, + 196 + ], + "type": "image", + "image_path": "83a8d659290df93065f3d0a08b3edc2f16723f2b5fb98b0b9732e1bd20667dbb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "lines": [ + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "spans": [ + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": "Fig. 2: Overview of our proposed method (OctMAE). 
Given an input RGB Image " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{I}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ", depth map " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{D}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ", and a foreground mask " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{M}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ", the octree feature " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": " is obtained by unprojecting an image feature encoded by a pre-trained image encoder " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{E}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ". The octree feature is then encoded by the Octree encoder and downsampled to the Level of Detail (LoD) of 5. The notation LoD-" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": " indicates that each axis of the voxel grid has resolution of " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "2^h" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ". The latent 3D MAE takes the encoded Octree feature " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": " as input and its output feature is concatenated with the occlusion mask tokens " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{T}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": ". Next, the masked decoded feature " + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{ML}" + }, + { + "bbox": [ + 130, + 204, + 482, + 304 + ], + "type": "text", + "content": " is computed by sparse 3D MAE decoder. Finally, the Octree decoder predicts a completed surface at LoD-9." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 327, + 230, + 339 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 327, + 230, + 339 + ], + "spans": [ + { + "bbox": [ + 132, + 327, + 230, + 339 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 354, + 481, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 354, + 481, + 413 + ], + "spans": [ + { + "bbox": [ + 130, + 354, + 481, + 413 + ], + "type": "text", + "content": "Humans can instantly imagine complete shapes of multiple novel objects in a cluttered scene via advanced geometric and semantic reasoning. This ability is also essential for robots if they are to effectively perform useful tasks in the real world [26, 27, 46, 60]. 
In this work, we propose a method that can quickly and accurately complete a wide number of objects in diverse real-world scenes." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 414, + 482, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 414, + 482, + 594 + ], + "spans": [ + { + "bbox": [ + 130, + 414, + 482, + 594 + ], + "type": "text", + "content": "Prior works [31, 34, 36, 43, 47, 71] have achieved phenomenal progress in scene and object shape completion from a single RGB-D image. Object-centric methods [17, 25] in particular can achieve very high reconstruction accuracy by relying on category-specific shape priors. However, when deployed on entire scenes such methods require bespoke instance detection/segmentation models, and often perform test-time optimization which is time consuming and would hinder real-time deployment on a robot. Moreover, existing methods are typically limited to a small set of categories. Thus, zero-shot multi-object scene completion remains a challenging and open problem that has seen little success to date. This is in stark contrast to the sudden increase in powerful algorithms for 2D computer vision tasks such as object detection [33, 75] and image segmentation [35, 70]. We attribute this progress to a great extent to the availability of large-scale datasets [8, 54] coupled with neural architectures and learning objectives [22, 50, 53, 57] that can effectively exploit the highly structured data occurring in the natural world [20]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "content": "Taking inspiration from the latest developments in the 2D domain, we propose a scene completion algorithm at the scene level that generalizes across a large number of shapes and that only supposes an RGB-D image and foreground mask as input. Our method consists of Octree masked autoencoders (OctMAE) — a hybrid architecture of Octree U-Net and a latent 3D MAE (Figure 2). Although a recent work, VoxFormer [34], also extends MAE architecture to 3D" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "content": "using deformable 3D attention and shows great improvement in semantic scene completion tasks, its memory utilization is still prohibitive to handle a higher resolution voxel grid. We address this issue by integrating 3D MAE into the latent space of Octree U-Net. 
Our experiments show that the latent 3D MAE is the key to global structure understanding and leads to strong performance and generalization across all datasets. Moreover, we find that the choice of a masking strategy and 3D positional embeddings is crucial to achieve better performance. We provide extensive ablations to verify that our 3D latent MAE design is effective." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 224, + 482, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 224, + 482, + 319 + ], + "spans": [ + { + "bbox": [ + 130, + 224, + 482, + 319 + ], + "type": "text", + "content": "Our second contribution consists of the creation of a novel synthetic dataset to counteract the lack of large-scale and diverse 3D datasets. The dataset contains 12K 3D models of hand-held objects from Objaverse [12] and GSO [16] datasets (Figure 3). We utilize the dataset to conduct a comprehensive evaluation of our method as well as other baselines and show that our method scales and achieves better results. Finally, we perform zero-shot evaluations on synthetic as well as real datasets and show that a combination of 3D diversity coupled with an appropriate architecture is key to generalizable scene completion in the wild." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 146, + 319, + 362, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 319, + 362, + 331 + ], + "spans": [ + { + "bbox": [ + 146, + 319, + 362, + 331 + ], + "type": "text", + "content": "Our contributions can be summarized as follows:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 338, + 481, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 338, + 481, + 397 + ], + "spans": [ + { + "bbox": [ + 138, + 338, + 481, + 397 + ], + "type": "text", + "content": "- We present a novel network architecture, Octree Masked Autoencoders (OctMAE), a hybrid architecture of Octree U-Net and latent 3D MAE, which achieves state-of-the-art results on all the benchmarks. Further, we introduce a simple occlusion masking strategy with full attention, which boosts the performance of a latent 3D MAE." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 398, + 481, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 398, + 481, + 434 + ], + "spans": [ + { + "bbox": [ + 138, + 398, + 481, + 434 + ], + "type": "text", + "content": "- We create the first large-scale and diverse synthetic dataset using Objaverse [12] dataset for zero-shot multi-object scene completion, and provide a wide range of benchmark and analysis." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 450, + 237, + 463 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 450, + 237, + 463 + ], + "spans": [ + { + "bbox": [ + 132, + 450, + 237, + 463 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "spans": [ + { + "bbox": [ + 130, + 474, + 482, + 668 + ], + "type": "text", + "content": "3D reconstruction and completion. Reconstructing indoor scenes and objects from a noisy point cloud has been widely explored [1, 2, 4, 6, 9, 10, 23, 24, 34, 40, 42, 47, 48, 56, 65, 66]. Several works [4, 5, 43, 44, 47, 58, 60, 63, 71, 72, 74, 76] tackle more challenging shape completion tasks where large parts of a target is missing. 
While these methods achieve impressive results, they do not explicitly consider semantic information, which may limit their capability for accurate shape completion. Recent methods [31, 32, 34, 76] in Semantic Scene Completion (SSC) leverage semantic information via an RGB image. Nevertheless, the number of target categories is quite limited, restricting its utility for a broad range of applications in the real world. In addition, many methods adopt occupancy or SDF as an output representation, which necessitates post-processing such as the marching cubes [41] and sphere tracing to extract an explicit surface. As another direction, GeNVS [3], Zero-1-to-3 [39], and 3DiM [64] explore single-view 3D reconstruction via novel view synthesis. However, expensive test-time optimization is required. Recently, One-2-3-45 [38] and MCC [66] attempt to improve the generation speed, however, their runtime for multi-object scenes is still far from near" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 116, + 481, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 481, + 186 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 481, + 186 + ], + "type": "text", + "content": "real-time. Further, since these methods are object-centric, multiple objects in a single scene are not handled well due to the complicated geometric reasoning especially caused by occlusions by other objects. In this paper, we propose a general and near real-time framework for multi-object 3D scene completion in the wild using only an RGB-D image and foreground mask without expensive test-time optimization." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 201, + 481, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 201, + 481, + 415 + ], + "spans": [ + { + "bbox": [ + 133, + 201, + 481, + 415 + ], + "type": "text", + "content": "Implicit 3D representations. Recently, various types of implicit 3D representation have become popular in 3D reconstruction and completion tasks. Early works [18,42,47] use a one-dimensional latent feature to represent a 3D shape as occupancy and SDF fields. Several works [31,48,58] employ voxels, groundplanes, and triplanes, demonstrating that the retention of geometric information using 3D CNNs enhances performance. Although the voxel representation typically performs well among these three, its cubic memory and computational costs make increasing resolution challenging. To mitigate this issue, sparse voxels [6,21,37,55,62] treat a 3D representation as a sparse set of structured points using the octree and hash table and perform convolutions only on non-empty voxels and its neighbors. Further, the high-resolution sparse voxel enables a direct prediction of a target surface. As another direction, [1,67,77] leverage point cloud. 
Nonetheless, an unstructured set of points can be non-uniformly distributed in the 3D space and requires running the k-NN algorithm at every operation. This aspect often renders point-based methods less appealing compared to the sparse voxel representation. Therefore, our method adopts an octree-based representation used in [62] for efficient training and direct surface prediction." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 428, + 481, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 428, + 481, + 559 + ], + "spans": [ + { + "bbox": [ + 133, + 428, + 481, + 559 + ], + "type": "text", + "content": "Masked Autoencoders (MAE). Inspired by the success of ViTs [15, 73] and masked language modeling [14, 51], [22] demonstrates that masked autoencoders (MAE) with ViTs can learn powerful image representation by reconstructing masked images. To improve the efficiency and performance of MAE, ConvMAE [19] proposes a hybrid approach that performs masked autoencoding at the latent space of 2D CNN-based autoencoder network. Recently, VoxFormer [34] extends the MAE design to 3D for semantic scene completion using 3D deformable attention, and shows great improvement over previous works. However, it is not trivial to scale up the MAE architecture to a higher resolution voxel due to memory constraints. Motivated by ConvMAE [19] and OCNN [62], we propose an efficient OctMAE architecture using sparse 3D operations." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 579, + 260, + 593 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 579, + 260, + 593 + ], + "spans": [ + { + "bbox": [ + 133, + 579, + 260, + 593 + ], + "type": "text", + "content": "3 Proposed Method" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": "Given an RGB image " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{I} \\in \\mathbb{R}^{H \\times W \\times 3}" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": ", depth map " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{D} \\in \\mathbb{R}^{H \\times W}" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": ", and foreground mask " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{M} \\in \\mathbb{R}^{H \\times W}" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": " containing all objects of interest, we aim to predict their complete 3D shapes quickly and accurately. 
Our framework first encodes an RGB image " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{I}" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": " with a pre-trained image encoder " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": " such as ResNeXt [69] and then lifts the resulting features up to 3D space using a depth map " + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{D}" + }, + { + "bbox": [ + 133, + 604, + 481, + 665 + ], + "type": "text", + "content": " and foreground mask" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "spans": [ + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "inline_equation", + "content": "\\mathbf{M}" + }, + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "text", + "content": " to acquire 3D point cloud features " + }, + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "inline_equation", + "content": "\\mathbf{F} \\in \\mathbb{R}^{N \\times D}" + }, + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "text", + "content": " and its locations " + }, + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "inline_equation", + "content": "\\mathbf{P} \\in \\mathbb{R}^{N \\times 3}" + }, + { + "bbox": [ + 132, + 115, + 481, + 163 + ], + "type": "text", + "content": " (Section 3.1). Second, we convert the 3D features into an octree using the same algorithm used in [63] and pass it to OctMAE to predict a surface at each LoD (Section 3.2). The diagram of our method is visualized in Figure 2." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 180, + 301, + 192 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 180, + 301, + 192 + ], + "spans": [ + { + "bbox": [ + 132, + 180, + 301, + 192 + ], + "type": "text", + "content": "3.1 Octree Feature Aggregation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "spans": [ + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": "We adopt ResNeXt-50 [69] as an image encoder to obtain dense and robust image features " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{W} = E(\\mathbf{I}) \\in \\mathbb{R}^{H \\times W \\times D}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " from an RGB image. 
The image features are unprojected into the 3D space using a depth image with " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "(\\mathbf{F}, \\mathbf{P}) = \\pi^{-1}(\\mathbf{W}, \\mathbf{D}, \\mathbf{M}, \\mathbf{K})" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " where a point cloud feature and its corresponding coordinates are represented as " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{F}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{P}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\pi^{-1}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " unprojects the image features " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{W}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " to the camera coordinate system using a depth map " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{D}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": ", foreground mask " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{M}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": ", and an intrinsic matrix " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "\\mathbf{K}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": ". Next, we define an octree at the level of detail (LoD) of 9 " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "(512^3)" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " with the grid and cell size being " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "1.28\\mathrm{m}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "2.5\\mathrm{mm}" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " respectively, and use the point features to populate the voxel grid, averaging features when multiple points fall into the same voxel. Here, LoD-" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " simply represents resolution of an octree. For instance, the voxel grid of LoD-9 has the maximum dimension of " + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "inline_equation", + "content": "2^9 = 512" + }, + { + "bbox": [ + 132, + 200, + 482, + 390 + ], + "type": "text", + "content": " for each axis. An octree is represented as a set of 8 octants with features at non-empty regions; therefore, it is more memory-efficient than a dense voxel grid. 
The octree is centered around the z-axis in the camera coordinate system, and its front plane is aligned with the nearest point to the camera along with the z-axis." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 408, + 362, + 419 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 408, + 362, + 419 + ], + "spans": [ + { + "bbox": [ + 132, + 408, + 362, + 419 + ], + "type": "text", + "content": "3.2 OctMAE: Octree Masked Autoencoders" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 426, + 482, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 426, + 482, + 594 + ], + "spans": [ + { + "bbox": [ + 132, + 426, + 482, + 594 + ], + "type": "text", + "content": "We design OctMAE which leverages Octree U-Net [62] and latent 3D MAE to achieve accurate and efficient zero-shot multi-object scene completion. Octree U-Net consists of multiple sparse 3D convolutional layers. While the Octree U-Net architecture can efficiently encode octree features to low resolution, only local regions are considered at each operation. On the contrary, 3D MAE can capture global object information which helps predict globally consistent 3D shapes. However, unlike an image, a dense voxel grid contains a prohibitive number of tokens even in the latent space, which makes it challenging to adopt an MAE architecture directly for 3D tasks. Recently, ConvMAE [19] proposed to leverage the advantages of both CNNs and MAE in 2D for efficient training. Nevertheless, a naïve extension of ConvMAE [19] to 3D also leads to prohibitive computational and memory costs. To address this issue, we propose a novel occlusion masking strategy and adopt 3D rotary embeddings, enabling efficient masked autoencoding in the latent space." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "content": "Encoder architecture. The encoder of Octree U-Net [63] takes the octree feature at LoD-9 and computes a latent octree feature " + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_L\\in \\mathbb{R}^{N'\\times D'}" + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "content": " at LoD-5 where " + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "inline_equation", + "content": "N^{\\prime}" + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "content": " is the number of non-empty voxels and " + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "inline_equation", + "content": "D^{\\prime}" + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "content": " is the latent feature dimension. 
To incorporate global symmetric and object scale information which gives more cues about completed shapes, we use " + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 132, + 605, + 481, + 665 + ], + "type": "text", + "content": " layers of the full self-attention" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "content": "Transformer blocks in the latent 3D MAE encoder. Since " + }, + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "inline_equation", + "content": "N'" + }, + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "content": " is typically the order of the hundreds to thousands, we resort to memory-efficient attention algorithms [11, 49]. Ideally, learnable relative positional encodings [77] are used to deal with the different alignments of point cloud features inside an octree. However, it requires computing the one-to-one relative positional encoding " + }, + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "inline_equation", + "content": "N' \\times N'" + }, + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "content": " times, which largely slows down the training and makes it computationally impractical. Therefore, we use RoPE [59] to encode 3D axial information between voxels. 
Concretely, we embed position information with RoPE at every multi-head attention layer as" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 182, + 234, + 481, + 249 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 234, + 481, + 249 + ], + "spans": [ + { + "bbox": [ + 182, + 234, + 481, + 249 + ], + "type": "interline_equation", + "content": "\\mathbf {R} _ {i} = \\operatorname {d i a g} \\left(R (p _ {i} ^ {x}), R (p _ {i} ^ {y}), R (p _ {i} ^ {z}), \\mathbf {I}\\right) \\in \\mathbb {R} ^ {D ^ {\\prime} \\times D ^ {\\prime}}, \\quad \\mathbf {f} _ {i} ^ {\\prime} = \\mathbf {R} _ {i} \\mathbf {f} _ {i}, \\tag {1}", + "image_path": "d4656a0ed39b9f6328b412b4e7ee3d876f9c48ce85c75faf40d0cc0e7de5c2b1.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "spans": [ + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "inline_equation", + "content": "\\mathbf{f}_i\\in \\mathbb{R}^{D'}" + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "inline_equation", + "content": "\\mathbf{p}_i\\in \\mathbb{R}^3" + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": "-th octree feature and coordinates. " + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "inline_equation", + "content": "R:\\mathbb{R}\\to \\mathbb{R}^{\\left[D' / 3\\right]\\times \\left[D' / 3\\right]}" + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": " is a function to generate a rotation matrix given normalized 1D axial coordinate. The detailed derivation of " + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "inline_equation", + "content": "\\mathbf{R}" + }, + { + "bbox": [ + 130, + 251, + 481, + 292 + ], + "type": "text", + "content": " can be found in the supplemental." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "spans": [ + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "text", + "content": "Occlusion masking. Next, we concatenate mask tokens " + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "inline_equation", + "content": "\\mathbf{T} \\in \\mathbb{R}^{M \\times D'}" + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "text", + "content": " to the encoded latent octree feature where " + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "text", + "content": " is the number of the mask tokens. Note that each of the mask tokens has identical learnable parameters. The key question is how to place them in 3D space. Although previous methods [34] put mask tokens inside all the empty cells of a dense voxel grid, it is unlikely that visible regions extending from the camera to the input depth are occupied unless the error of a depth map is enormous. 
Further, this dense masking strategy forces us to use a local attention mechanism such as deformable 3D attention used in VoxFormer [34], due to the highly expensive memory and computational cost. To address this issue, we introduce an occlusion masking strategy in which the mask tokens " + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "inline_equation", + "content": "\\mathbf{T}" + }, + { + "bbox": [ + 130, + 299, + 482, + 491 + ], + "type": "text", + "content": " are placed only into occluded voxels. Concretely, we perform depth testing on every voxel within a voxel grid to determine if they are positioned behind objects. Mask tokens are assigned to their respective locations only after passing this test. The proposed occlusion masking strategy and efficient positional encoding enable our latent 3D MAE (Figure 4) to leverage full attention instead of local attention." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "spans": [ + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "text", + "content": "Decoder architecture. The masked octree feature is given to the latent 3D MAE decoder which consists of " + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "text", + "content": " layers of the full cross-attention Transformer blocks with RoPE [59] to learn global reasoning including occluded regions. Finally, the decoder of Octree U-Net takes the mixed latent octree feature of the Transformer decoder " + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "inline_equation", + "content": "\\mathbf{F}_{ML} \\in \\mathbb{R}^{(N' + M) \\times D'}" + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "text", + "content": " as input and upsamples features with skip connections. The decoded feature is passed to a two-layer MLP which estimates an occupancy at LoD-" + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 130, + 499, + 482, + 612 + ], + "type": "text", + "content": ". In addition, normals and SDF values are predicted only at the final LoD. To avoid unnecessary computation, we prune grid cells predicted as empty with a threshold of 0.5 at every LoD, following [63]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 131, + 624, + 342, + 636 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 624, + 342, + 636 + ], + "spans": [ + { + "bbox": [ + 131, + 624, + 342, + 636 + ], + "type": "text", + "content": "3.3 Training Details and Loss Functions" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "content": "We use all surface points extracted through OpenVDB [45] during training. 
The loss function is defined as" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 118, + 202, + 170 + ], + "blocks": [ + { + "bbox": [ + 134, + 118, + 202, + 170 + ], + "lines": [ + { + "bbox": [ + 134, + 118, + 202, + 170 + ], + "spans": [ + { + "bbox": [ + 134, + 118, + 202, + 170 + ], + "type": "image", + "image_path": "254af92f7fe9ab95a825ffe3eb45f3b6340a6ccf883620be06f5bcf4aa03be21.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 134, + 170, + 202, + 221 + ], + "blocks": [ + { + "bbox": [ + 134, + 170, + 202, + 221 + ], + "lines": [ + { + "bbox": [ + 134, + 170, + 202, + 221 + ], + "spans": [ + { + "bbox": [ + 134, + 170, + 202, + 221 + ], + "type": "image", + "image_path": "f1c799d803e65d3317e1324084f35b6c34721ff2fce61c499e62e183cb85b7ee.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 230, + 342, + 262 + ], + "lines": [ + { + "bbox": [ + 132, + 230, + 342, + 262 + ], + "spans": [ + { + "bbox": [ + 132, + 230, + 342, + 262 + ], + "type": "text", + "content": "Fig. 3: Example images of our synthetic dataset. We use BlenderProc [13] to acquire high-quality images under various and realistic illumination conditions." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 203, + 118, + 271, + 170 + ], + "blocks": [ + { + "bbox": [ + 203, + 118, + 271, + 170 + ], + "lines": [ + { + "bbox": [ + 203, + 118, + 271, + 170 + ], + "spans": [ + { + "bbox": [ + 203, + 118, + 271, + 170 + ], + "type": "image", + "image_path": "5b52f458e72f4c1db3b0873d06af7516389fcc807343932638f2300d8ef5194a.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 203, + 170, + 271, + 221 + ], + "blocks": [ + { + "bbox": [ + 203, + 170, + 271, + 221 + ], + "lines": [ + { + "bbox": [ + 203, + 170, + 271, + 221 + ], + "spans": [ + { + "bbox": [ + 203, + 170, + 271, + 221 + ], + "type": "image", + "image_path": "5d896ae26d44b8cf5da77e0d79f588c56a38f3659384b04219491c65b2024994.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 272, + 118, + 342, + 170 + ], + "blocks": [ + { + "bbox": [ + 272, + 118, + 342, + 170 + ], + "lines": [ + { + "bbox": [ + 272, + 118, + 342, + 170 + ], + "spans": [ + { + "bbox": [ + 272, + 118, + 342, + 170 + ], + "type": "image", + "image_path": "e558342093bcec274aa64a6636e511ec08d0df4361c422088eb1976edc65f090.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 272, + 170, + 341, + 221 + ], + "blocks": [ + { + "bbox": [ + 272, + 170, + 341, + 221 + ], + "lines": [ + { + "bbox": [ + 272, + 170, + 341, + 221 + ], + "spans": [ + { + "bbox": [ + 272, + 170, + 341, + 221 + ], + "type": "image", + "image_path": "564af605784a04fef25391624c35a2e6c9c1b5c5e50f40bee122a3548ba40320.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 351, + 245, + 477, + 267 + ], + "lines": [ + { + "bbox": [ + 351, + 245, + 477, + 267 + ], + "spans": [ + { + "bbox": [ + 351, + 245, + 477, + 267 + ], + "type": "text", + "content": "Fig.4: Overall architecture of Latent 3D MAE." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 356, + 116, + 473, + 234 + ], + "blocks": [ + { + "bbox": [ + 356, + 116, + 473, + 234 + ], + "lines": [ + { + "bbox": [ + 356, + 116, + 473, + 234 + ], + "spans": [ + { + "bbox": [ + 356, + 116, + 473, + 234 + ], + "type": "image", + "image_path": "fa58f1958eafc4f6616d405d511fb81f1b2cfe13ed3cbe664681f7a35857559a.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 153, + 346, + 461, + 429 + ], + "blocks": [ + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "lines": [ + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "spans": [ + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "type": "text", + "content": "Table 1: Dataset comparisons. We create the first large-scale and diverse 3D scene completion dataset for novel multiple objects using a subset of 3D models from Objverse dataset [12]. The number of categories is reported by using the LVIS categories, and " + }, + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "type": "inline_equation", + "content": "R^{\\mathrm{LVIS}}(\\%)" + }, + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "type": "text", + "content": " represents a ratio of the number of the categories covered by the dataset. 
" + }, + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "type": "inline_equation", + "content": "\\dagger" + }, + { + "bbox": [ + 130, + 281, + 482, + 336 + ], + "type": "text", + "content": " denotes the number of objects with actual size." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 153, + 346, + 461, + 429 + ], + "lines": [ + { + "bbox": [ + 153, + 346, + 461, + 429 + ], + "spans": [ + { + "bbox": [ + 153, + 346, + 461, + 429 + ], + "type": "table", + "html": "
Dataset | Type | 3D Models | # Frames | # Objs | # Cats | R^LVIS (%)
YCB-V [68] | Real | | 133K | 21 | 5 | 0.4
HB [28] | Real | | 17K | 33 | 13 | 1.0
HOPE [36] | Real | | 2K | 28 | 3 | 0.3
CO3D V2 [52] | Real | | 6M | 40K | 50 | 4.2
MegaPose [30] | Synthetic | | 1M | 1K† | 17 | 0.9
Ours | Synthetic | | 1M | 12K | 601 | 50.0
", + "image_path": "197d974c5037848ab4a87c0c874da7b9f5e35a060286f54322466b4af72ed71f.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 222, + 463, + 480, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 222, + 463, + 480, + 491 + ], + "spans": [ + { + "bbox": [ + 222, + 463, + 480, + 491 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = \\mathcal {L} _ {n r m} + \\mathcal {L} _ {S D F} + \\sum_ {h \\in \\{5, 6, 7, 8, 9 \\}} \\mathcal {L} _ {o c c} ^ {h}, \\tag {2}", + "image_path": "1011ea2e8854cd5603e9795c44f91d5d1868f3da66d168e7797b42ceb3aa3a45.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "spans": [ + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{nrm}" + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{SDF}" + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "text", + "content": " measure the averaged L2 norm of normals and SDF values. " + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{occ}^{h}" + }, + { + "bbox": [ + 131, + 496, + 480, + 521 + ], + "type": "text", + "content": " computes a mean of binary cross entropy function of each LoD-h." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 541, + 201, + 554 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 541, + 201, + 554 + ], + "spans": [ + { + "bbox": [ + 132, + 541, + 201, + 554 + ], + "type": "text", + "content": "4 Dataset" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 130, + 569, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 569, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 569, + 482, + 666 + ], + "type": "text", + "content": "As shown in Table 1, existing datasets are limited in the diversity of object categories. Although the CO3D V2 dataset [52] contains data for " + }, + { + "bbox": [ + 130, + 569, + 482, + 666 + ], + "type": "inline_equation", + "content": "40\\mathrm{k}" + }, + { + "bbox": [ + 130, + 569, + 482, + 666 + ], + "type": "text", + "content": " objects, because the provided ground-truth 3D shapes are reconstructed from unposed multi-view images, they tend to be highly noisy and parts of the object missing due to lack of visibility. To tackle this problem, we leverage Objaverse [12], a large-scale 1M 3D object dataset containing 46k objects with LVIS category annotations. 
To focus on completion of hand-held objects, we select 601 categories and ensure that the largest dimension of the objects in each category" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "spans": [ + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "text", + "content": "falls approximately within the range of " + }, + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "inline_equation", + "content": "4\\mathrm{cm}" + }, + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "inline_equation", + "content": "40~\\mathrm{cm}" + }, + { + "bbox": [ + 134, + 116, + 481, + 222 + ], + "type": "text", + "content": ". In addition, for high-quality rendering, we omit objects that lack textures, contain more than 10,000 vertices, or are articulated. To increase the number of objects, we add objects from Google Scanned Objects (GSO) [16], which results in 12,655 objects in total. We render 1M images of 25,000 scenes using physics-based rendering and positioning via BlenderProc [13] to simulate realistic scenes (Figure 3). For each image, we randomly choose a camera view such that at least one object is within the camera frame. We also generate 1,000 images using 250 withheld objects for evaluation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 134, + 240, + 281, + 254 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 240, + 281, + 254 + ], + "spans": [ + { + "bbox": [ + 134, + 240, + 281, + 254 + ], + "type": "text", + "content": "5 Experimental Results" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "spans": [ + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "text", + "content": "Implementation details. We train all the models for 2 epochs using the Adam [29] optimizer with a learning rate of 0.002 and batch size of 16 on NVIDIA A100. Note that the models are only trained on the synthetic dataset introduced in Section 4. 
In addition, the number of Transformer blocks " + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "text", + "content": ", the feature dimension " + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "inline_equation", + "content": "D'" + }, + { + "bbox": [ + 134, + 263, + 481, + 394 + ], + "type": "text", + "content": " are set to 3, 32, and 192 respectively. We use a pretrained model of ResNeXt-50 [69] as an image encoder for all the experiments. The ground-truth occupancy, SDF and normals are computed from meshes with OpenVDB [45]. During training, we dilate ground-truth masks using the radius randomly selected from 1, 3 and 5 pixels to deal with the segmentation error around the object edges. During evaluation, we use ground-truth masks provided by the datasets." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 134, + 404, + 481, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 404, + 481, + 487 + ], + "spans": [ + { + "bbox": [ + 134, + 404, + 481, + 487 + ], + "type": "text", + "content": "Evaluation metrics. We report Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) to evaluate the quality of a completed surface. For surface-based methods, we use a predicted surface directly for evaluation. For the methods that predict occupancy, the marching cubes algorithm [41] is used to extract a surface and uniformly sample 100,000 points from its surface such that the number of points are roughly equal to the surface prediction methods. We use mm as a unit for all the reported metrics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 134, + 498, + 481, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 498, + 481, + 664 + ], + "spans": [ + { + "bbox": [ + 134, + 498, + 481, + 664 + ], + "type": "text", + "content": "Evaluation datasets. We evaluate the baselines and our model on one synthetic and three real-world datasets. For the synthetic dataset, we render 1,000 images using textured 3D scans from Objaverse [12], following the same procedure described in Section 4. We randomly choose 3 to 5 objects per image from the withheld objects for Objavese dataset. Since these 3D scans are relatively more complex than the objects seen in the real-world datasets we use, they can provide a good scene completion quality estimate for complex objects. For the real-world dataset, we use the YCB-Video [68], HOPE [36] and HomebrewedDB (HB) [28] datasets. YCB-Video consists of 21 everyday objects with diverse shapes. HOPE contains 28 simple household objects with mostly rectangular and cylindrical everyday shapes, and the images are captured in various lighting conditions in indoor scenes using a RealSense D415 RGBD camera. HB includes 33 objects (e.g., toy, household, and industrial objects). Their images are taken by PrimeSense Carmine in lab-like environments." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 169, + 477, + 281 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 480, + 158 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 480, + 158 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 480, + 158 + ], + "type": "text", + "content": "Table 2: Quantitative evaluation of multi-object scene completion on Ours, YCB-Video [68], HOPE [36], and HomebrewedDB [28] datasets. Chamfer distance (CD), F1-Score@10mm (F1), and normal consistency (NC) are reported. Chamfer distance is reported in the unit of mm." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 169, + 477, + 281 + ], + "lines": [ + { + "bbox": [ + 133, + 169, + 477, + 281 + ], + "spans": [ + { + "bbox": [ + 133, + 169, + 477, + 281 + ], + "type": "table", + "html": "
Method | 3D Rep. | Ours (Synthetic): CD↓ F1↑ NC↑ | YCB-Video [68] (Real): CD↓ F1↑ NC↑ | HB [28] (Real): CD↓ F1↑ NC↑ | HOPE [36] (Real): CD↓ F1↑
VoxFormer [34] | Dense | 44.54 0.382 0.653 | 30.32 0.438 0.641 | 34.84 0.366 0.608 | 47.75 0.323
ShapeFormer [71] | Dense | 39.50 0.401 0.593 | 38.21 0.385 0.588 | 40.93 0.328 0.594 | 39.54 0.306
MCC [66] | Implicit | 43.37 0.459 0.700 | 35.85 0.289 0.608 | 19.59 0.371 0.655 | 17.53 0.357
ConvONet [48] | Dense | 23.68 0.541 0.710 | 32.87 0.458 0.649 | 26.71 0.504 0.643 | 20.95 0.581
POCO [1] | Implicit | 21.11 0.634 0.753 | 15.45 0.587 0.699 | 13.17 0.624 0.709 | 13.20 0.602
AICNet [31] | Dense | 15.64 0.573 0.741 | 12.26 0.545 0.702 | 11.87 0.557 0.674 | 11.40 0.564
Minkowski [6] | Sparse | 11.47 0.746 0.802 | 8.04 0.761 0.717 | 8.81 0.728 0.719 | 8.56 0.734
OCNN [63] | Sparse | 9.05 0.782 0.828 | 7.10 0.778 0.771 | 7.02 0.792 0.736 | 8.05 0.742
Ours | Sparse | 6.48 0.839 0.848 | 6.40 0.800 0.785 | 6.14 0.819 0.770 | 6.97 0.803
", + "image_path": "252febbb465c6223c806dd2fc46f299b5270278fe2d940eee2f0f66705d79776.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 304, + 481, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 481, + 590 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 481, + 590 + ], + "type": "text", + "content": "Baselines. As discussed in Secs. 1 and 2, multi-object scene completion from a single RGB-D image is relatively not explored due to the lack of large-scale and diverse multi-object scene completion datasets. We carefully choose baseline architectures that can support this task with simple or no adaptation. We focus on three primary method types from related fields. Firstly, we select Semantic Scene Completion (SSC) methods [6,31,34,63] that do not heavily rely on domain or categorical knowledge of indoor or outdoor scenes. Secondly, we opt for object shape completion methods [6,63,66,71] that can be extended to multi-object scene completion without an architectural modification and prohibitive memory utilization. Thirdly, we consider voxel or octree-based 3D reconstruction methods [1,6,48,63] that predict a complete and plausible shape using noisy and sparse point cloud data. For dense voxel-based (e.g., AICNet [31], ConvONet [48] and VoxFormer [34]) and sparse voxel-based methods (e.g., MinkowskiNet [6], OCNN [63], and our method), we use LoD-6 and LoD-9 as an input resolution respectively. All the experiments are conducted using the original implementation provided by the authors, with few simple modifications to adapt for multi-object scene completion and a fair comparison. For instance, we extend the baselines that take the point cloud as input by concatenating the image features to the point cloud features. For occupancy-based methods, though their output voxel grid resolution is LoD-6, we use trilinear interpolation to predict occupancy at LoD-7 [48]. For MinkowskiNet [6] and OCNN [62,63], we use the U-Net architecture with the depth of 5 (LoD-9 to LoD-4). We discuss further details about the baseline architectures, their modifications, and hyperparameters in the supplemental." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 609, + 264, + 620 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 609, + 264, + 620 + ], + "spans": [ + { + "bbox": [ + 132, + 609, + 264, + 620 + ], + "type": "text", + "content": "5.1 Quantitative Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 629, + 480, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 629, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 629, + 480, + 665 + ], + "type": "text", + "content": "Table 2 shows that our method outperforms the baselines on all the metrics and datasets. Although our model is only trained on synthetic data, it demonstrates strong generalizability to real-world datasets. 
We also remark that our" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 275, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 275, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 275, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 137, + 201, + 274, + 265 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 274, + 193 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 274, + 193 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 274, + 193 + ], + "type": "text", + "content": "Table 3: Ablation Study of positional encoding on our synthetic dataset. We compare w/o positional encoding, conditional positional encoding (CPE) [7], absolute positional encoding (APE) used in [34], and RoPE [59]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 137, + 201, + 274, + 265 + ], + "lines": [ + { + "bbox": [ + 137, + 201, + 274, + 265 + ], + "spans": [ + { + "bbox": [ + 137, + 201, + 274, + 265 + ], + "type": "table", + "html": "
Type | CD↓ | F1↑ | NC↑
w/o | 11.32 | 0.778 | 0.808
CPE [7] | 9.91 | 0.785 | 0.811
APE [34] | 8.61 | 0.782 | 0.825
RPE [61] | 7.81 | 0.804 | 0.830
RoPE [59] | 6.48 | 0.839 | 0.848
", + "image_path": "887145c701b281f92ee2305fb5d24e69271eeddd8ed79bbc38f3be5c3c9d2950.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 286, + 165, + 482, + 259 + ], + "blocks": [ + { + "bbox": [ + 283, + 121, + 480, + 155 + ], + "lines": [ + { + "bbox": [ + 283, + 121, + 480, + 155 + ], + "spans": [ + { + "bbox": [ + 283, + 121, + 480, + 155 + ], + "type": "text", + "content": "Table 4: Ablation study on 3D attention algorithms. The scores are reported on the HOPE dataset [36]." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 286, + 165, + 482, + 259 + ], + "lines": [ + { + "bbox": [ + 286, + 165, + 482, + 259 + ], + "spans": [ + { + "bbox": [ + 286, + 165, + 482, + 259 + ], + "type": "table", + "html": "
Method | Occ. Masking | CD↓ | F1↑ | Runtime↓
3D DSA [34] | | 12.14 | 0.703 | 93.3
Neighbor. Attn. [77] | | 9.26 | 0.727 | 130.8
Octree Attn. [61] | | 7.99 | 0.752 | 116.4
Neighbor. Attn. [77] | | 8.81 | 0.759 | 111.9
Octree Attn. [61] | | 7.54 | 0.772 | 105.3
Full + Self Attn. | | 7.21 | 0.785 | 86.2
Full + Cross Attn. | | 6.97 | 0.803 | 85.1
", + "image_path": "426131afeeb9886244a184844b6acdb7fef1b0da8e26ce2e6e065cc40c501059.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 290, + 482, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 290, + 482, + 495 + ], + "spans": [ + { + "bbox": [ + 130, + 290, + 482, + 495 + ], + "type": "text", + "content": "method exhibits robustness to the noise characteristics present in depth data captured by typical RGB-D cameras despite being trained on noise-free depth data in simulation. The comparisons show that hierarchical structures and the latent 3D MAE are key to predicting 3D shapes of unseen objects more accurately than the baselines. Unlike our method, VoxFormer [34] uses an MAE with 3D deformable attention where only 8 neighbors of the reference points at the finest resolution are considered. Figure 8 also demonstrates that methods using a dense voxel grid or implicit representation fail to generalize to novel shapes. This implies that capturing a right choice of a network architecture is crucial to learn generalizable shape priors for zero-shot multi-object scene completion. Our method has the similar U-Net architecture used in MinkowskiNet [6] and OCNN [62] except we use the latent 3D MAE at LoD-5 instead of making the network deeper. This indicates that the latent 3D MAE can better approximate the shape distribution of the training dataset by leveraging an attention mechanism to capture global 3D contexts. Table 7 also confirms that our method achieves the best scene completion quality by measuring Chamfer distance in visible and occluded regions separately." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "content": "Positional encoding. As shown in Table 3, we explore the effect of RoPE [59] on the validation set of our synthetic dataset. The first row shows that all the metrics significantly drop if positional encoding is not used. In addition, we test CPE [7], APE [34], and RPE [61] and obtain slightly better scores. CPE [7] is typically more effective than APE in tasks such as 3D instance/semantic segmentation and object detection where a complete 3D point cloud is given. However, this result highlights the challenge of capturing position information from mask tokens which initially have the identical parameters. Our method employs RoPE [59] for relative positional embedding. One of the important aspect of RoPE [59] is that it does not have any learnable parameters. Despite this, it demonstrates superior performance compared to other approaches. Although RoPE was originally proposed in the domain of natural language processing, our experiment reveals its effectiveness in multi-object 3D scene completion." 
+ } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 136, + 157, + 276, + 201 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 274, + 147 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 274, + 147 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 274, + 147 + ], + "type": "text", + "content": "Table 5: Ablation study of the number of MAE layers on our synthetic dataset." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 136, + 157, + 276, + 201 + ], + "lines": [ + { + "bbox": [ + 136, + 157, + 276, + 201 + ], + "spans": [ + { + "bbox": [ + 136, + 157, + 276, + 201 + ], + "type": "table", + "html": "
# Layers | CD↓ | F1↑ | NC↑ | Runtime↓
1 | 9.01 | 0.784 | 0.828 | 76.4
3 | 6.48 | 0.839 | 0.848 | 85.1
5 | 5.75 | 0.850 | 0.855 | 96.2
", + "image_path": "29e0228df3d27f9ad1e8916a51ec60fe8c86c4ecea0e0e620f5065a74a09ed47.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 293, + 152, + 468, + 196 + ], + "blocks": [ + { + "bbox": [ + 282, + 120, + 479, + 143 + ], + "lines": [ + { + "bbox": [ + 282, + 120, + 479, + 143 + ], + "spans": [ + { + "bbox": [ + 282, + 120, + 479, + 143 + ], + "type": "text", + "content": "Table 6: Ablation study of U-Net architectures on HomebrewedDB dataset [28]." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 293, + 152, + 468, + 196 + ], + "lines": [ + { + "bbox": [ + 293, + 152, + 468, + 196 + ], + "spans": [ + { + "bbox": [ + 293, + 152, + 468, + 196 + ], + "type": "table", + "html": "
Architecture | CD↓ | F1↑ | NC↑ | Runtime↓
Mink. U-Net [6] | 7.26 | 0.788 | 0.743 | 83.8
OctFormer [61] | 7.45 | 0.756 | 0.728 | 114.4
Octree U-Net [62] | 6.14 | 0.819 | 0.770 | 85.1
", + "image_path": "a30d7ee977575cf7389c310d0b82d1abab46d061538fb0724b8bcaf9e3a4513a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 136, + 247, + 477, + 365 + ], + "blocks": [ + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "lines": [ + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "spans": [ + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "type": "text", + "content": "Table 7: Comparisons of the runtime (ms). For reference, we also show Chamfer distance of visible " + }, + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "type": "inline_equation", + "content": "\\mathrm{CD}_{vis}" + }, + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "type": "text", + "content": " and occluded " + }, + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "type": "inline_equation", + "content": "\\mathrm{CD}_{occ}" + }, + { + "bbox": [ + 132, + 215, + 479, + 237 + ], + "type": "text", + "content": " regions on our synthetic dataset." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 136, + 247, + 477, + 365 + ], + "lines": [ + { + "bbox": [ + 136, + 247, + 477, + 365 + ], + "spans": [ + { + "bbox": [ + 136, + 247, + 477, + 365 + ], + "type": "table", + "html": "
Method | 3D Rep. | Resolution | CD_vis↓ | CD_occ↓ | CD↓ | Runtime↓
VoxFormer [34] | Dense | 128³ | 18.25 | 66.32 | 44.54 | 79.5
ShapeFormer [71] | Dense | 128³ | 14.61 | 63.33 | 39.50 | 1.8 × 10⁴
MCC [66] | Implicit | 128³ | 15.39 | 63.41 | 44.37 | 9.1 × 10³
ConvONet [48] | Dense | 128³ | 17.09 | 34.09 | 23.68 | 48.4
POCO [1] | Implicit | 128³ | 10.37 | 31.55 | 21.11 | 758.8
AICNet [31] | Dense | 128³ | 9.98 | 21.43 | 15.64 | 24.2
Minkowski [6] | Sparse | 512³ | 7.12 | 15.44 | 11.47 | 78.5
OCNN [63] | Sparse | 512³ | 3.87 | 12.16 | 9.05 | 80.1
Ours | Sparse | 512³ | 3.29 | 9.40 | 6.48 | 85.1
", + "image_path": "60dae0055bd50bd5f9122d56f5d75c4038ddb261b544602d008fbfa795232636.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 390, + 479, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 390, + 479, + 450 + ], + "spans": [ + { + "bbox": [ + 132, + 390, + 479, + 450 + ], + "type": "text", + "content": "3D Attention algorithms. Table 4 reveals that occlusion masking yields better runtime and metrics than dense masking. Furthermore, our experiments suggest that full attention and Octree attention, both characterized by their wider receptive fields, are more effective compared to local attention algorithms such as 3D deformable self-attention (3D DSA) [34] and neighborhood attention [77]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 468, + 480, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 468, + 480, + 515 + ], + "spans": [ + { + "bbox": [ + 132, + 468, + 480, + 515 + ], + "type": "text", + "content": "Number of layers in 3D latent MAE. We further explore the design of 3D latent MAE in Table 5. Increasing the number of layers in 3D latent MAE improves the scene completion quality while making the runtime slower. Consequently, we select 3 layers for a good trade-off between the accuracy and runtime." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 533, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 533, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 533, + 481, + 665 + ], + "type": "text", + "content": "U-Net architectures. In Table 6, we investigate U-Net architectures. The key difference of Minkowski U-Net [6] is the use of a sparse tensor as an underlying data structure instead of an octree, which gives a slightly better performance than Octree U-Net [62]. OctFormer [61] proposes an octree-based window attention mechanism using the 3D Z-order curve to support a much larger kernel size than Octree U-Net. In general, a wider range of an effective receptive field helps achieve better performance. Nonetheless, OctFormer achieves a chamfer distance and F-1 score of 7.45 and 0.756, which is worse than Octree U-Net by 1.31 and 0.063 respectively. This indicates that the OctFormer's attention mechanism is less effective compared to an Octree U-Net architecture especially in the presence of latent 3D MAE, playing the similar role in the latent space." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 135, + 113, + 299, + 220 + ], + "blocks": [ + { + "bbox": [ + 135, + 113, + 299, + 220 + ], + "lines": [ + { + "bbox": [ + 135, + 113, + 299, + 220 + ], + "spans": [ + { + "bbox": [ + 135, + 113, + 299, + 220 + ], + "type": "image", + "image_path": "0ea8058eb04e3267fa43da6898c2601022d7752e72059663961b789bb480b805.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "lines": [ + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "spans": [ + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": "Fig.5: Scaling of the metrics with the number of objects in a training dataset. We conduct the experiments by changing the ratio of the number of objects to " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 132, + 241, + 303, + 297 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 321, + 122, + 369, + 175 + ], + "blocks": [ + { + "bbox": [ + 321, + 122, + 369, + 175 + ], + "lines": [ + { + "bbox": [ + 321, + 122, + 369, + 175 + ], + "spans": [ + { + "bbox": [ + 321, + 122, + 369, + 175 + ], + "type": "image", + "image_path": "2474a71bdc02fc05ba02541364e6fc70303c573314fefffa795613c970d1b654.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 322, + 175, + 372, + 219 + ], + "blocks": [ + { + "bbox": [ + 322, + 175, + 372, + 219 + ], + "lines": [ + { + "bbox": [ + 322, + 175, + 372, + 219 + ], + "spans": [ + { + "bbox": [ + 322, + 175, + 372, + 219 + ], + "type": "image", + "image_path": "5dc8a3b8237d0a94607e2e369779e89117c75037db80efafaf8eae870110fd99.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 328, + 224, + 362, + 232 + ], + "lines": [ + { + "bbox": [ + 328, + 224, + 362, + 232 + ], + "spans": [ + { + "bbox": [ + 328, + 224, + 362, + 232 + ], + "type": "text", + "content": "Ground-Truth" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 370, + 124, + 416, + 175 + ], + "blocks": [ + { + "bbox": [ + 370, + 124, + 416, + 175 + ], + "lines": [ + { + "bbox": [ + 370, + 124, + 416, + 175 + ], + "spans": [ + { + "bbox": [ + 370, + 124, + 416, + 175 + ], + "type": "image", + "image_path": "34f195f4418fedd1ea815c7123ea9b562466bae0fa5d8af4243f0a7b47d5751f.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 373, + 175, + 423, + 220 + ], + "blocks": [ + { + "bbox": [ + 373, + 175, + 423, + 220 + ], + "lines": [ + { + "bbox": [ + 373, + 175, + 423, + 220 + ], + "spans": [ + { + "bbox": [ + 373, + 175, + 423, + 220 + ], + "type": "image", + "image_path": "2f7e43ce1d961af861f0345d12344dca8a5858d6d94221da8fb1c74fa5252874.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 391, + 225, + 408, + 232 + ], + "lines": [ + { + "bbox": [ + 391, + 225, + 408, + 232 + ], + "spans": [ + { + "bbox": [ + 391, + 225, + 408, + 232 + ], + "type": "text", + "content": "OCNN" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 424, + 124, + 468, + 174 + ], + "blocks": [ + { + "bbox": [ + 424, + 124, + 468, + 174 + ], + "lines": [ + { + "bbox": [ + 424, + 124, + 468, + 174 + ], + "spans": [ + { + "bbox": [ + 424, + 124, + 468, + 174 + ], + "type": "image", + "image_path": "e507df4a2516c9cd1bd320da647d1ffef8ac43b5b3c53450392434553f7b50fb.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 427, + 175, + 475, + 218 + ], + "blocks": [ + { + "bbox": [ + 427, + 175, + 475, + 218 + ], + "lines": [ + { + "bbox": [ + 427, + 175, + 475, + 218 + ], + "spans": [ + { + "bbox": [ + 427, + 175, + 475, + 218 + ], + "type": "image", + "image_path": "dd06103c1c8d8a83d0c8f8614d04606ed9db886a1cab32487d72f0d5f67cd520.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 443, + 225, + 455, + 232 + ], + "lines": [ + { + "bbox": [ + 443, + 225, + 455, + 232 + ], + "spans": [ + { + "bbox": [ + 443, + 225, + 455, + 232 + ], + "type": "text", 
+ "content": "Ours" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 310, + 244, + 482, + 289 + ], + "lines": [ + { + "bbox": [ + 310, + 244, + 482, + 289 + ], + "spans": [ + { + "bbox": [ + 310, + 244, + 482, + 289 + ], + "type": "text", + "content": "Fig.6: Qualitative comparison of OCNN [62] and our method. Our proposed latent 3D MAE helps predict globally consistent scene completion." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 131, + 319, + 482, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 319, + 482, + 488 + ], + "spans": [ + { + "bbox": [ + 131, + 319, + 482, + 488 + ], + "type": "text", + "content": "Runtime analysis. Table 7 shows the runtime performance of the baselines and our method. For a fair comparison, we run inference over the 50 samples of the HOPE dataset and report the average time. For occupancy-based methods, we predict occupancy on object surfaces and occluded regions. Due to the memory-intensive nature of MCC [1]'s Transformer architecture, we run inference multiple times with the maximum chunk size of 10,000 points. Our experiments demonstrate that implicit 3D representations used in POCO [1] and MCC [66] become slower when the voxel grid resolution is higher. Further, an autoregressive Transformer adopted in ShapeFormer [71] greatly increases the runtime. Conversely, the methods which leverage sparse voxel grids (e.g., MinkowskiNet [6], OCNN [63], and Ours) achieve much faster runtime thanks to efficient sparse 3D convolutions, and hierarchical pruning on predicted surfaces. Our method offers runtimes comparable to the fastest method, while implementing attention operations over the scene via latent 3D MAE, and achieving superior reconstruction." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "spans": [ + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": "Dataset scale analysis. 
To assess the importance of the large-scale 3D scene completion datasets, we train our model on splits of increasing sizes which contain " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 131, + 498, + 482, + 606 + ], + "type": "text", + "content": " of the total number of the objects in our dataset. We report metrics on the test split of our dataset. Section 5.1 shows that all the metrics have a strong correlation with respect to the number of objects. This could imply that the model benefits significantly from increased data diversity and volume, enhancing its ability to understand and complete 3D shapes. We believe that this analysis is crucial for understanding the relationship between data quantity and model performance." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 622, + 258, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 622, + 258, + 635 + ], + "spans": [ + { + "bbox": [ + 132, + 622, + 258, + 635 + ], + "type": "text", + "content": "5.2 Qualitative Results" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 131, + 641, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 641, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 131, + 641, + 481, + 665 + ], + "type": "text", + "content": "Figure 7 shows the qualitative results of our method on both of the synthetic and real-world datasets from three different views. Unlike the synthetic dataset," + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 118, + 175, + 150 + ], + "blocks": [ + { + "bbox": [ + 134, + 118, + 175, + 150 + ], + "lines": [ + { + "bbox": [ + 134, + 118, + 175, + 150 + ], + "spans": [ + { + "bbox": [ + 134, + 118, + 175, + 150 + ], + "type": "image", + "image_path": "68a83c039992abb04eb3d78f674a28b9fccb0af667d968a9aa73bdb28b91f872.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 134, + 151, + 174, + 182 + ], + "blocks": [ + { + "bbox": [ + 134, + 151, + 174, + 182 + ], + "lines": [ + { + "bbox": [ + 134, + 151, + 174, + 182 + ], + "spans": [ + { + "bbox": [ + 134, + 151, + 174, + 182 + ], + "type": "image", + "image_path": "01ce50157c218acb1301490615ef0f915e231d06642dd389ebbd82c82e0c256c.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 133, + 186, + 175, + 218 + ], + "blocks": [ + { + "bbox": [ + 133, + 186, + 175, + 218 + ], + "lines": [ + { + "bbox": [ + 133, + 186, + 175, + 218 + ], + "spans": [ + { + "bbox": [ + 133, + 186, + 175, + 218 + ], + "type": "image", + "image_path": "036481e9cd8effdea48a8e68f7cfce44b696d491db46f578024ae6a3a4d5d2f1.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 133, + 218, + 173, + 248 + ], + "blocks": [ + { + "bbox": [ + 133, + 218, + 173, + 248 + ], + "lines": [ + { + "bbox": [ + 133, + 218, + 173, + 248 + ], + "spans": [ + { + "bbox": [ + 133, + 218, + 173, + 248 + ], + "type": "image", + "image_path": "d858976c0c5a57048b024d4b7768de082890337c20df17bba6ad4ae2752a03e7.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 139, + 250, + 168, + 256 + ], + "lines": [ + { + "bbox": [ + 139, + 250, + 168, + 256 + ], + "spans": [ + { + "bbox": [ + 139, + 250, + 168, + 256 + ], + "type": "text", + "content": "RGB-D Image" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 177, + 118, + 221, + 150 + ], + "blocks": [ + { + "bbox": [ + 177, + 118, + 221, + 150 + ], + "lines": [ + { + "bbox": [ + 177, + 118, + 221, + 150 + ], + "spans": [ + { + "bbox": [ + 177, + 118, + 221, + 150 + ], + "type": "image", + "image_path": "c7200a6ea34c51c535371f63cda2879f9c517d2d04c2230d90062b23965c2403.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 177, + 151, + 221, + 182 + ], + "blocks": [ + { + "bbox": [ + 177, + 151, + 221, + 182 + ], + "lines": [ + { + "bbox": [ + 177, + 151, + 221, + 182 + ], + "spans": [ + { + "bbox": [ + 177, + 151, + 221, + 182 + ], + "type": "image", + "image_path": "ca9869829b6773ae55b535704b2635b1ebf296b0fdec90f60156f634afaddbc0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 178, + 188, + 220, + 217 + ], + "blocks": [ + { + "bbox": [ + 178, + 188, + 220, + 217 + ], + "lines": [ + { + "bbox": [ + 178, + 188, + 220, + 217 + ], + "spans": [ + { + "bbox": [ + 178, + 188, + 220, + 217 + ], + "type": "image", + "image_path": "518b9993aa8ebe75527c8f9494e8a80d0eafcd45d0fe6003bb987e57380f3e04.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + 
], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 178, + 220, + 220, + 249 + ], + "blocks": [ + { + "bbox": [ + 178, + 220, + 220, + 249 + ], + "lines": [ + { + "bbox": [ + 178, + 220, + 220, + 249 + ], + "spans": [ + { + "bbox": [ + 178, + 220, + 220, + 249 + ], + "type": "image", + "image_path": "6c51f99cb97c2014c23d02232645bb2e6ffb539c68c5be0b49b497bdd331d377.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 196, + 251, + 211, + 256 + ], + "lines": [ + { + "bbox": [ + 196, + 251, + 211, + 256 + ], + "spans": [ + { + "bbox": [ + 196, + 251, + 211, + 256 + ], + "type": "text", + "content": "View 1" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 224, + 118, + 270, + 150 + ], + "blocks": [ + { + "bbox": [ + 224, + 118, + 270, + 150 + ], + "lines": [ + { + "bbox": [ + 224, + 118, + 270, + 150 + ], + "spans": [ + { + "bbox": [ + 224, + 118, + 270, + 150 + ], + "type": "image", + "image_path": "3d72b948acf1587fd75a76a692b9b25db82fd06749d399a8b7e427e1af9e7c19.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 224, + 151, + 270, + 180 + ], + "blocks": [ + { + "bbox": [ + 224, + 151, + 270, + 180 + ], + "lines": [ + { + "bbox": [ + 224, + 151, + 270, + 180 + ], + "spans": [ + { + "bbox": [ + 224, + 151, + 270, + 180 + ], + "type": "image", + "image_path": "a3b4ca23998ce6ef68a44ab08bd540b72255d22d63213855003866575149a511.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 229, + 189, + 266, + 217 + ], + "blocks": [ + { + "bbox": [ + 229, + 189, + 266, + 217 + ], + "lines": [ + { + "bbox": [ + 229, + 189, + 266, + 217 + ], + "spans": [ + { + "bbox": [ + 229, + 189, + 266, + 217 + ], + "type": "image", + "image_path": "2223b06631d7580a1c5de299cd65002bc095b84453dbf4f6e404955be2dce6d0.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 228, + 220, + 265, + 249 + ], + "blocks": [ + { + "bbox": [ + 228, + 220, + 265, + 249 + ], + "lines": [ + { + "bbox": [ + 228, + 220, + 265, + 249 + ], + "spans": [ + { + "bbox": [ + 228, + 220, + 265, + 249 + ], + "type": "image", + "image_path": "d55eee2806625fa3fbeb381cd4bb873b4824c3ae7e186fc9ffb5db988a8fff80.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 239, + 251, + 255, + 256 + ], + "lines": [ + { + "bbox": [ + 239, + 251, + 255, + 256 + ], + "spans": [ + { + "bbox": [ + 239, + 251, + 255, + 256 + ], + "type": "text", + "content": "View 2" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 272, + 120, + 311, + 150 + ], + "blocks": [ + { + "bbox": [ + 272, + 120, + 311, + 150 + ], + "lines": [ + { + "bbox": [ + 272, + 120, + 311, + 150 + ], + "spans": [ + { + "bbox": [ + 272, + 120, + 311, + 150 + ], + "type": "image", + "image_path": "51e65fde5c3cbb19d75241cfaec188b9f0d6c894f3bb7c9cd91bd36b2e84d9b4.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 272, + 151, + 311, + 180 + ], + "blocks": [ + { + "bbox": [ + 272, + 151, + 311, + 180 + ], + "lines": [ + { + "bbox": [ + 272, + 151, + 311, + 180 + ], + "spans": [ + { + "bbox": [ + 272, + 151, + 
311, + 180 + ], + "type": "image", + "image_path": "03f4fdf2e8f710addeeb3cc5c4924f80f2e3f08d89f0017e6554d39dcde3990c.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 272, + 189, + 309, + 217 + ], + "blocks": [ + { + "bbox": [ + 272, + 189, + 309, + 217 + ], + "lines": [ + { + "bbox": [ + 272, + 189, + 309, + 217 + ], + "spans": [ + { + "bbox": [ + 272, + 189, + 309, + 217 + ], + "type": "image", + "image_path": "97116a6ba9463180012fefc9680cae23c72aefcb257436054407d5fb3f49e5f7.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 272, + 220, + 308, + 249 + ], + "blocks": [ + { + "bbox": [ + 272, + 220, + 308, + 249 + ], + "lines": [ + { + "bbox": [ + 272, + 220, + 308, + 249 + ], + "spans": [ + { + "bbox": [ + 272, + 220, + 308, + 249 + ], + "type": "image", + "image_path": "5e8d8d94e6bee9c77129d0820f2ef01e17968d3179342ace79692f4f3c0cdd02.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 284, + 251, + 298, + 256 + ], + "lines": [ + { + "bbox": [ + 284, + 251, + 298, + 256 + ], + "spans": [ + { + "bbox": [ + 284, + 251, + 298, + 256 + ], + "type": "text", + "content": "View 3" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 318, + 121, + 358, + 150 + ], + "blocks": [ + { + "bbox": [ + 318, + 121, + 358, + 150 + ], + "lines": [ + { + "bbox": [ + 318, + 121, + 358, + 150 + ], + "spans": [ + { + "bbox": [ + 318, + 121, + 358, + 150 + ], + "type": "image", + "image_path": "4b7f2cead40e9f68e4f060e5ff915c70b00a747aec0c0927cb960e492057ace5.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 362, + 129, + 367, + 137 + ], + "lines": [ + { + "bbox": [ + 362, + 129, + 367, + 137 + ], + "spans": [ + { + "bbox": [ + 362, + 129, + 367, + 137 + ], + "type": "text", + "content": "#" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 318, + 151, + 358, + 182 + ], + "blocks": [ + { + "bbox": [ + 318, + 151, + 358, + 182 + ], + "lines": [ + { + "bbox": [ + 318, + 151, + 358, + 182 + ], + "spans": [ + { + "bbox": [ + 318, + 151, + 358, + 182 + ], + "type": "image", + "image_path": "b692ea033b9a7e91ee6399906c496aadf61fd1eb036753aef081ffcb48f04493.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 318, + 188, + 358, + 217 + ], + "blocks": [ + { + "bbox": [ + 318, + 188, + 358, + 217 + ], + "lines": [ + { + "bbox": [ + 318, + 188, + 358, + 217 + ], + "spans": [ + { + "bbox": [ + 318, + 188, + 358, + 217 + ], + "type": "image", + "image_path": "aa803c1b8322aa9397dc510957594aea7516abd3055ba4f28eed97fb8089efb6.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 318, + 220, + 357, + 249 + ], + "blocks": [ + { + "bbox": [ + 318, + 220, + 357, + 249 + ], + "lines": [ + { + "bbox": [ + 318, + 220, + 357, + 249 + ], + "spans": [ + { + "bbox": [ + 318, + 220, + 357, + 249 + ], + "type": "image", + "image_path": "5896208645b37073aab3efe314c3f400ac8e6b036116d03a0fdbb97651b7af0b.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 324, + 251, + 352, + 256 + ], + "lines": [ + { 
+ "bbox": [ + 324, + 251, + 352, + 256 + ], + "spans": [ + { + "bbox": [ + 324, + 251, + 352, + 256 + ], + "type": "text", + "content": "RGB-D Image" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 369, + 123, + 399, + 150 + ], + "blocks": [ + { + "bbox": [ + 369, + 123, + 399, + 150 + ], + "lines": [ + { + "bbox": [ + 369, + 123, + 399, + 150 + ], + "spans": [ + { + "bbox": [ + 369, + 123, + 399, + 150 + ], + "type": "image", + "image_path": "fad6cc8d5f1c5700fd622d588e2b614926cbe42cb1b623c8d13174cb42d62cbd.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 462, + 123, + 470, + 131 + ], + "lines": [ + { + "bbox": [ + 462, + 123, + 470, + 131 + ], + "spans": [ + { + "bbox": [ + 462, + 123, + 470, + 131 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 369, + 152, + 400, + 182 + ], + "blocks": [ + { + "bbox": [ + 369, + 152, + 400, + 182 + ], + "lines": [ + { + "bbox": [ + 369, + 152, + 400, + 182 + ], + "spans": [ + { + "bbox": [ + 369, + 152, + 400, + 182 + ], + "type": "image", + "image_path": "0b0ad484f0577e985976a8e235b65620392b5c6d7eacfd0440e964eddc7a4a7e.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 470, + 132, + 476, + 133 + ], + "lines": [ + { + "bbox": [ + 470, + 132, + 476, + 133 + ], + "spans": [ + { + "bbox": [ + 470, + 132, + 476, + 133 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 363, + 196, + 404, + 211 + ], + "blocks": [ + { + "bbox": [ + 363, + 196, + 404, + 211 + ], + "lines": [ + { + "bbox": [ + 363, + 196, + 404, + 211 + ], + "spans": [ + { + "bbox": [ + 363, + 196, + 404, + 211 + ], + "type": "image", + "image_path": "8953cd4824f494189fcacdb318a2ff28443f067bcb257672b7e705f311e2e278.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 470, + 131, + 476, + 132 + ], + "lines": [ + { + "bbox": [ + 470, + 131, + 476, + 132 + ], + "spans": [ + { + "bbox": [ + 470, + 131, + 476, + 132 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 363, + 228, + 405, + 245 + ], + "blocks": [ + { + "bbox": [ + 363, + 228, + 405, + 245 + ], + "lines": [ + { + "bbox": [ + 363, + 228, + 405, + 245 + ], + "spans": [ + { + "bbox": [ + 363, + 228, + 405, + 245 + ], + "type": "image", + "image_path": "6ed528fcf3154bfea2afaaf73f86329db5ee40bba03c583f6908170f3eee21d8.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 382, + 251, + 396, + 256 + ], + "lines": [ + { + "bbox": [ + 382, + 251, + 396, + 256 + ], + "spans": [ + { + "bbox": [ + 382, + 251, + 396, + 256 + ], + "type": "text", + "content": "View 1" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 31 + }, + { + "type": "image", + "bbox": [ + 409, + 124, + 435, + 150 + ], + "blocks": [ + { + "bbox": [ + 409, + 124, + 435, + 150 + ], + "lines": [ + { + "bbox": [ + 409, + 124, + 435, + 150 + ], + "spans": [ + { + "bbox": [ + 409, + 124, + 435, + 150 + ], + "type": "image", + "image_path": 
"9deb22b8382a47f89f61fc48da8a6635bc4ccfbf8b0fc1884869c86b6bfb9d1a.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 470, + 124, + 476, + 131 + ], + "lines": [ + { + "bbox": [ + 470, + 124, + 476, + 131 + ], + "spans": [ + { + "bbox": [ + 470, + 124, + 476, + 131 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 411, + 152, + 435, + 180 + ], + "blocks": [ + { + "bbox": [ + 411, + 152, + 435, + 180 + ], + "lines": [ + { + "bbox": [ + 411, + 152, + 435, + 180 + ], + "spans": [ + { + "bbox": [ + 411, + 152, + 435, + 180 + ], + "type": "image", + "image_path": "0aeb28b1ba28af13db394d47e3a4dfc0205173c011559e05bbdf63953a621248.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 476, + 124, + 484, + 131 + ], + "lines": [ + { + "bbox": [ + 476, + 124, + 484, + 131 + ], + "spans": [ + { + "bbox": [ + 476, + 124, + 484, + 131 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 408, + 186, + 474, + 218 + ], + "blocks": [ + { + "bbox": [ + 408, + 186, + 474, + 218 + ], + "lines": [ + { + "bbox": [ + 408, + 186, + 474, + 218 + ], + "spans": [ + { + "bbox": [ + 408, + 186, + 474, + 218 + ], + "type": "image", + "image_path": "86a2388cacd2e8b790626a9c2f4067cd7010e98bb4b617ccea2dc1eaa1bb8da2.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_body" + } + ], + "index": 35 + }, + { + "type": "image", + "bbox": [ + 408, + 220, + 458, + 248 + ], + "blocks": [ + { + "bbox": [ + 408, + 220, + 458, + 248 + ], + "lines": [ + { + "bbox": [ + 408, + 220, + 458, + 248 + ], + "spans": [ + { + "bbox": [ + 408, + 220, + 458, + 248 + ], + "type": "image", + "image_path": "0333414c5c0ff42decbae0b3ed611615caa88e20cf5585a7bde0a9db8ae22618.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 418, + 250, + 432, + 255 + ], + "lines": [ + { + "bbox": [ + 418, + 250, + 432, + 255 + ], + "spans": [ + { + "bbox": [ + 418, + 250, + 432, + 255 + ], + "type": "text", + "content": "View 2" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + }, + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "spans": [ + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 41, + "type": "text" + }, + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "spans": [ + { + "bbox": [ + 476, + 131, + 483, + 132 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 42, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + 
"page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "content": "tation methods to obtain instance-level completed shapes. Third, our method does not handle uncertainty of surface prediction explicitly. In future work, we plan to extend our method to model uncertainty to improve the scene completion quality and diversity." + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 138, + 201, + 482, + 564 + ], + "blocks": [ + { + "bbox": [ + 138, + 201, + 482, + 564 + ], + "lines": [ + { + "bbox": [ + 138, + 201, + 482, + 564 + ], + "spans": [ + { + "bbox": [ + 138, + 201, + 482, + 564 + ], + "type": "image", + "image_path": "46bb813236a96528212b701e87d023be165550cc2ab3ec2f57f5f3c7ac365784.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 574, + 482, + 631 + ], + "lines": [ + { + "bbox": [ + 130, + 574, + 482, + 631 + ], + "spans": [ + { + "bbox": [ + 130, + 574, + 482, + 631 + ], + "type": "text", + "content": "Fig. 8: Comparisons on HomebrewedDB dataset (Top), and HOPE (Bottom) datasets. For better visibility, we show the generated and ground truth shapes. The top and bottom rows show an image from near camera and back views respectively. Compared to the other methods, our method predicts accurate and consistent shapes on a challenging scene completion task for novel objects." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 234, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 234, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 234, + 129 + ], + "type": "text", + "content": "Acknowledgment" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 140, + 479, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 140, + 479, + 152 + ], + "spans": [ + { + "bbox": [ + 132, + 140, + 479, + 152 + ], + "type": "text", + "content": "We thank Zubair Irshad and Jenny Nan for valuable feedback and comments." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 153, + 382, + 165 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 153, + 382, + 165 + ], + "spans": [ + { + "bbox": [ + 132, + 153, + 382, + 165 + ], + "type": "text", + "content": "This research is supported by Toyota Research Institute." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 185, + 197, + 198 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 185, + 197, + 198 + ], + "spans": [ + { + "bbox": [ + 133, + 185, + 197, + 198 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 211, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 138, + 211, + 480, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 211, + 480, + 233 + ], + "spans": [ + { + "bbox": [ + 138, + 211, + 480, + 233 + ], + "type": "text", + "content": "1. Boulch, A., Marlet, R.: POCO: Point Convolution for Surface Reconstruction. In: CVPR (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 234, + 480, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 234, + 480, + 255 + ], + "spans": [ + { + "bbox": [ + 138, + 234, + 480, + 255 + ], + "type": "text", + "content": "2. Bozic, A., Palafox, P., Thies, J., Dai, A., Nießner, M.: TransformerFusion: Monocular rgb scene reconstruction using transformers. In: NeurIPS (2021)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 256, + 480, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 256, + 480, + 289 + ], + "spans": [ + { + "bbox": [ + 138, + 256, + 480, + 289 + ], + "type": "text", + "content": "3. Chan, E.R., Nagano, K., Chan, M.A., Bergman, A.W., Park, J.J., Levy, A., Aittala, M., Mello, S.D., Karras, T., Wetzstein, G.: GeNVS: Generative novel view synthesis with 3D-aware diffusion models. In: CoRR (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 289, + 480, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 289, + 480, + 311 + ], + "spans": [ + { + "bbox": [ + 138, + 289, + 480, + 311 + ], + "type": "text", + "content": "4. Chen, H.X., Huang, J., Mu, T.J., Hu, S.M.: CIRCLE: Convolutional Implicit Reconstruction And Completion For Large-Scale Indoor Scene. In: ECCV (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 312, + 480, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 312, + 480, + 333 + ], + "spans": [ + { + "bbox": [ + 138, + 312, + 480, + 333 + ], + "type": "text", + "content": "5. Cheng, Y.C., Lee, H.Y., Tulyakov, S., Schwing, A.G., Gui, L.Y.: SDFusion: Multimodal 3d shape completion, reconstruction, and generation. In: CVPR (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 334, + 480, + 355 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 334, + 480, + 355 + ], + "spans": [ + { + "bbox": [ + 138, + 334, + 480, + 355 + ], + "type": "text", + "content": "6. Choy, C., Gwak, J., Savarese, S.: 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In: CVPR (2019)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 355, + 480, + 377 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 355, + 480, + 377 + ], + "spans": [ + { + "bbox": [ + 138, + 355, + 480, + 377 + ], + "type": "text", + "content": "7. Chu, X., Tian, Z., Zhang, B., Wang, X., Shen, C.: Conditional Positional Encodings for Vision Transformers. 
In: ICLR (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 378, + 480, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 378, + 480, + 399 + ], + "spans": [ + { + "bbox": [ + 138, + 378, + 480, + 399 + ], + "type": "text", + "content": "8. Computer, T.: RedPajama: an Open Dataset for Training Large Language Models (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 399, + 480, + 422 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 399, + 480, + 422 + ], + "spans": [ + { + "bbox": [ + 138, + 399, + 480, + 422 + ], + "type": "text", + "content": "9. Dai, A., Diller, C., Nießner, M.: SG-NN: Sparse generative neural networks for self-supervised scene completion of rgb-d scans. In: CVPR (2020)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 422, + 480, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 422, + 480, + 455 + ], + "spans": [ + { + "bbox": [ + 138, + 422, + 480, + 455 + ], + "type": "text", + "content": "10. Dai, A., Ritchie, D., Bokeloh, M., Reed, S., Sturm, J., Nießner, M.: ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans. In: CVPR (2018)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 456, + 480, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 456, + 480, + 477 + ], + "spans": [ + { + "bbox": [ + 138, + 456, + 480, + 477 + ], + "type": "text", + "content": "1. Dao, T.: FlashAttention-2: Faster attention with better parallelism and work partitioning (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 478, + 480, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 478, + 480, + 510 + ], + "spans": [ + { + "bbox": [ + 138, + 478, + 480, + 510 + ], + "type": "text", + "content": "2. Deitke, M., Schwenk, D., Salvador, J., Weihs, L., Michel, O., VanderBilt, E., Schmidt, L., Ehsani, K., Kembhavi, A., Farhadi, A.: Objaverse: A Universe of Annotated 3D Objects. CVPR (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 510, + 480, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 510, + 480, + 544 + ], + "spans": [ + { + "bbox": [ + 138, + 510, + 480, + 544 + ], + "type": "text", + "content": "3. Denninger, M., Winkelbauer, D., Sundermeyer, M., Boerdijk, W., Knauer, M., Strobl, K.H., Humt, M., Triebel, R.: BlenderProc2: A Procedural Pipeline for Photorealistic Rendering. Journal of Open Source Software (2023)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 544, + 480, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 544, + 480, + 565 + ], + "spans": [ + { + "bbox": [ + 138, + 544, + 480, + 565 + ], + "type": "text", + "content": "4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: NAACL (2019)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 566, + 480, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 566, + 480, + 609 + ], + "spans": [ + { + "bbox": [ + 138, + 566, + 480, + 609 + ], + "type": "text", + "content": "5. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 
ICLR (2021)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 610, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 610, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 138, + 610, + 480, + 643 + ], + "type": "text", + "content": "6. Downs, L., Francis, A., Koenig, N., Kinman, B., Hickman, R., Reymann, K., McHugh, T.B., Vanhoucke, V.: Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items. In: ICRA (2022)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "text", + "content": "7. Duan, Y., Zhu, H., Wang, H., Yi, L., Nevatia, R., Guibas, L.J.: Curriculum deepsdf. In: ECCV (2020)" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 666 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 481, + 149 + ], + "type": "text", + "content": "18. Dupont, E., Kim, H., Eslami, S.M.A., Rezende, D.J., Rosenbaum, D.: From data to functa: Your data point is a function and you can treat it like one. In: ICML (2022)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 149, + 481, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 149, + 481, + 171 + ], + "spans": [ + { + "bbox": [ + 132, + 149, + 481, + 171 + ], + "type": "text", + "content": "19. Gao, P., Ma, T., Li, H., Dai, J., Qiao, Y.: ConvMAE: Masked Convolution Meets Masked Autoencoders. NeurIPS (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 171, + 481, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 171, + 481, + 203 + ], + "spans": [ + { + "bbox": [ + 132, + 171, + 481, + 203 + ], + "type": "text", + "content": "20. Goldblum, M., Finzi, M., Rowan, K., Wilson, A.G.: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. CoRR (2023)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 203, + 481, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 203, + 481, + 224 + ], + "spans": [ + { + "bbox": [ + 132, + 203, + 481, + 224 + ], + "type": "text", + "content": "21. Graham, B., Engelcke, M., van der Maaten, L.: 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. 
CVPR (2018)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 224, + 481, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 224, + 481, + 246 + ], + "spans": [ + { + "bbox": [ + 132, + 224, + 481, + 246 + ], + "type": "text", + "content": "22. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., Girshick, R.: Masked autoencoders are scalable vision learners. In: CVPR (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 246, + 481, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 246, + 481, + 267 + ], + "spans": [ + { + "bbox": [ + 132, + 246, + 481, + 267 + ], + "type": "text", + "content": "23. Hou, J., Dai, A., Nießner, M.: RevealNet: Seeing Behind Objects in RGB-D Scans. In: CVPR (2020)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 267, + 481, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 267, + 481, + 289 + ], + "spans": [ + { + "bbox": [ + 132, + 267, + 481, + 289 + ], + "type": "text", + "content": "24. Huang, J., Gojcic, Z., Atzmon, M., Litany, O., Fidler, S., Williams, F.: Neural Kernel Surface Reconstruction. In: CVPR (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 289, + 481, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 289, + 481, + 320 + ], + "spans": [ + { + "bbox": [ + 132, + 289, + 481, + 320 + ], + "type": "text", + "content": "25. Irshad, M.Z., Zakharov, S., Ambrus, R., Kollar, T., Kira, Z., Gaidon, A.: Shapo: Implicit representations for multi-object shape, appearance, and pose optimization. In: ECCV (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 320, + 481, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 320, + 481, + 354 + ], + "spans": [ + { + "bbox": [ + 132, + 320, + 481, + 354 + ], + "type": "text", + "content": "26. Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.: Real-time Perception meets Reactive Motion Generation. RA-L (2018)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 354, + 481, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 354, + 481, + 374 + ], + "spans": [ + { + "bbox": [ + 132, + 354, + 481, + 374 + ], + "type": "text", + "content": "27. Karaman, S., Frazzoli, E.: Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Rob. Res. (2011)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 374, + 481, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 374, + 481, + 396 + ], + "spans": [ + { + "bbox": [ + 132, + 374, + 481, + 396 + ], + "type": "text", + "content": "28. Kaskman, R., Zakharov, S., Shugurov, I., Ilic, S.: HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects. ICCVW (2019)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 396, + 481, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 396, + 481, + 418 + ], + "spans": [ + { + "bbox": [ + 132, + 396, + 481, + 418 + ], + "type": "text", + "content": "29. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
In: ICLR (2015)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 418, + 481, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 418, + 481, + 450 + ], + "spans": [ + { + "bbox": [ + 132, + 418, + 481, + 450 + ], + "type": "text", + "content": "30. Labbé, Y., Manuelli, L., Mousavian, A., Tyree, S., Birchfield, S., Tremblay, J., Carpentier, J., Aubry, M., Fox, D., Sivic, J.: MegaPose: 6d pose estimation of novel objects via render & compare. In: CoRL (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 450, + 481, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 450, + 481, + 472 + ], + "spans": [ + { + "bbox": [ + 132, + 450, + 481, + 472 + ], + "type": "text", + "content": "31. Li, J., Han, K., Wang, P., Liu, Y., Yuan, X.: Anisotropic Convolutional Networks for 3D Semantic Scene Completion. In: CVPR (2020)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 472, + 481, + 504 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 472, + 481, + 504 + ], + "spans": [ + { + "bbox": [ + 132, + 472, + 481, + 504 + ], + "type": "text", + "content": "32. Li, J., Liu, Y., Gong, D., Shi, Q., Yuan, X., Zhao, C., Reid, I.: RGBD Based Dimensional Decomposition Residual Network for 3D Semantic Scene Completion. In: CVPR. pp. 7693-7702 (June 2019)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 504, + 481, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 504, + 481, + 536 + ], + "spans": [ + { + "bbox": [ + 132, + 504, + 481, + 536 + ], + "type": "text", + "content": "33. Li*, L.H., Zhang*, P., Zhang*, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J.N., Chang, K.W., Gao, J.: Grounded language-image pretraining. In: CVPR (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 536, + 481, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 536, + 481, + 567 + ], + "spans": [ + { + "bbox": [ + 132, + 536, + 481, + 567 + ], + "type": "text", + "content": "34. Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J.M., Fidler, S., Feng, C., Anandkumar, A.: VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion. In: CVPR (2023)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 568, + 481, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 568, + 481, + 601 + ], + "spans": [ + { + "bbox": [ + 132, + 568, + 481, + 601 + ], + "type": "text", + "content": "35. Liang, F., Wu, B., Dai, X., Li, K., Zhao, Y., Zhang, H., Zhang, P., Vajda, P., Marculescu, D.: Open-vocabulary semantic segmentation with mask-adapted clip. In: CVPR (2023)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 601, + 481, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 601, + 481, + 622 + ], + "spans": [ + { + "bbox": [ + 132, + 601, + 481, + 622 + ], + "type": "text", + "content": "36. Lin, Y., Tremblay, J., Tyree, S., Vela, P.A., Birchfield, S.: Multi-view Fusion for Multi-level Robotic Scene Understanding. In: IROS (2021)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 132, + 622, + 481, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 622, + 481, + 643 + ], + "spans": [ + { + "bbox": [ + 132, + 622, + 481, + 643 + ], + "type": "text", + "content": "37. Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural Sparse Voxel Fields. 
NeurIPS (2020)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 132, + 643, + 481, + 666 + ], + "type": "text", + "content": "38. Liu, M., Xu, C., Jin, H., Chen, L., Xu, Z., Su, H., et al.: One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. NeurIPS (2023)" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 100 + ], + "type": "text", + "content": "S. Iwase et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 480, + 138 + ], + "type": "text", + "content": "39. Liu, R., Wu, R., Hoorick, B.V., Tokmakov, P., Zakharov, S., Vondrick, C.: Zero-1-to-3: Zero-shot One Image to 3D Object. In: CVPR (2023)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 138, + 480, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 138, + 480, + 159 + ], + "spans": [ + { + "bbox": [ + 130, + 138, + 480, + 159 + ], + "type": "text", + "content": "40. Liu, Z., Feng, Y., Black, M.J., Nowrouzezahrai, D., Paull, L., Liu, W.: MeshDiffusion: Score-based Generative 3D Mesh Modeling. In: ICLR (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 160, + 480, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 160, + 480, + 181 + ], + "spans": [ + { + "bbox": [ + 130, + 160, + 480, + 181 + ], + "type": "text", + "content": "41. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. SIGGRAPH (1987)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 182, + 480, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 182, + 480, + 203 + ], + "spans": [ + { + "bbox": [ + 130, + 182, + 480, + 203 + ], + "type": "text", + "content": "42. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy Networks: Learning 3D Reconstruction in Function Space. In: CVPR (2019)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 203, + 480, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 203, + 480, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 203, + 480, + 224 + ], + "type": "text", + "content": "43. Mittal, P., Cheng, Y.C., Singh, M., Tulsiani, S.: AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. 
In: CVPR (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 224, + 480, + 256 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 224, + 480, + 256 + ], + "spans": [ + { + "bbox": [ + 130, + 224, + 480, + 256 + ], + "type": "text", + "content": "44. Mohammadi, S.S., Duarte, N.F., Dimou, D., Wang, Y., Taiana, M., Morerio, P., Dehban, A., Moreno, P., Bernardino, A., Del Bue, A., Santos-Victor, J.: 3DSGrasp: 3D Shape-Completion for Robotic Grasp. In: ICRA (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 257, + 480, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 257, + 480, + 267 + ], + "spans": [ + { + "bbox": [ + 130, + 257, + 480, + 267 + ], + "type": "text", + "content": "45. Museth, K.: VDB: High-resolution sparse volumes with dynamic topology (2013)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 267, + 480, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 267, + 480, + 289 + ], + "spans": [ + { + "bbox": [ + 130, + 267, + 480, + 289 + ], + "type": "text", + "content": "46. Okumura, K., Défago, X.: Quick Multi-Robot Motion Planning by Combining Sampling and Search. In: IJCAI (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 289, + 480, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 480, + 320 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 480, + 320 + ], + "type": "text", + "content": "47. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In: CVPR (2019)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 321, + 480, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 321, + 480, + 342 + ], + "spans": [ + { + "bbox": [ + 130, + 321, + 480, + 342 + ], + "type": "text", + "content": "48. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional Occupancy Networks. In: ECCV (2020)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 342, + 480, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 342, + 480, + 354 + ], + "spans": [ + { + "bbox": [ + 130, + 342, + 480, + 354 + ], + "type": "text", + "content": "49. Rabe, M.N., Staats, C.: Self-attention Does Not Need " + }, + { + "bbox": [ + 130, + 342, + 480, + 354 + ], + "type": "inline_equation", + "content": "O(n^{2})" + }, + { + "bbox": [ + 130, + 342, + 480, + 354 + ], + "type": "text", + "content": " Memory (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 354, + 480, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 354, + 480, + 385 + ], + "spans": [ + { + "bbox": [ + 130, + 354, + 480, + 385 + ], + "type": "text", + "content": "50. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 130, + 386, + 480, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 386, + 480, + 407 + ], + "spans": [ + { + "bbox": [ + 130, + 386, + 480, + 407 + ], + "type": "text", + "content": "51. 
Radford, A., Narasimhan, K.: Improving Language Understanding by Generative Pre-Training (2018)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 407, + 480, + 439 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 407, + 480, + 439 + ], + "spans": [ + { + "bbox": [ + 130, + 407, + 480, + 439 + ], + "type": "text", + "content": "52. Reizenstein, J., Shapovalov, R., Henzler, P., Sbordone, L., Labatut, P., Novotny, D.: Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction. In: ICCV (2021)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 130, + 439, + 480, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 439, + 480, + 460 + ], + "spans": [ + { + "bbox": [ + 130, + 439, + 480, + 460 + ], + "type": "text", + "content": "53. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-Resolution Image Synthesis with Latent Diffusion Models (2021)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 461, + 480, + 493 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 461, + 480, + 493 + ], + "spans": [ + { + "bbox": [ + 130, + 461, + 480, + 493 + ], + "type": "text", + "content": "54. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortzman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 130, + 493, + 480, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 493, + 480, + 515 + ], + "spans": [ + { + "bbox": [ + 130, + 493, + 480, + 515 + ], + "type": "text", + "content": "55. Shao, T., Yang, Y., Weng, Y., Hou, Q., Zhou, K.: H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis. TVCG (2020)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 130, + 515, + 480, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 515, + 480, + 536 + ], + "spans": [ + { + "bbox": [ + 130, + 515, + 480, + 536 + ], + "type": "text", + "content": "56. Shen, T., Gao, J., Yin, K., Liu, M.Y., Fidler, S.: Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis. In: NeurIPS (2021)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 130, + 536, + 480, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 536, + 480, + 557 + ], + "spans": [ + { + "bbox": [ + 130, + 536, + 480, + 557 + ], + "type": "text", + "content": "57. Shi, Z., Zhou, X., Qiu, X., Zhu, X.: Improving image captioning with better use of captions. CoRR (2020)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 130, + 558, + 480, + 579 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 480, + 579 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 480, + 579 + ], + "type": "text", + "content": "58. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic Scene Completion from a Single Depth Image. CVPR (2017)" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 130, + 579, + 480, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 579, + 480, + 601 + ], + "spans": [ + { + "bbox": [ + 130, + 579, + 480, + 601 + ], + "type": "text", + "content": "59. Su, J., Lu, Y., Pan, S., Wen, B., Liu, Y.: RoFormer: Enhanced Transformer with Rotary Position Embedding. 
In: ICLR (2020)" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 130, + 601, + 480, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 601, + 480, + 622 + ], + "spans": [ + { + "bbox": [ + 130, + 601, + 480, + 622 + ], + "type": "text", + "content": "60. Varley, J., DeChant, C., Richardson, A., Ruales, J., Allen, P.: Shape completion enabled robotic grasping. In: IROS (2017)" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 130, + 622, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 622, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 130, + 622, + 480, + 643 + ], + "type": "text", + "content": "61. Wang, P.S.: OctFormer: Octree-based Transformers for 3D Point Clouds. SIGGRAPH (2023)" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 130, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 643, + 480, + 665 + ], + "type": "text", + "content": "62. Wang, P.S., Liu, Y., Guo, Y.X., Sun, C.Y., Tong, X.: O-CNN: Octree-Based Convolutional Neural Networks for 3D Shape Analysis. SIGGRAPH (2017)" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 274, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-Shot Multi-Object Scene Completion" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 501 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 480, + 138 + ], + "type": "text", + "content": "63. Wang, P.S., Liu, Y., Tong, X.: Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion. In: CVPRW (2020)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "spans": [ + { + "bbox": [ + 132, + 140, + 480, + 160 + ], + "type": "text", + "content": "64. Watson, D., Chan, W., Martin-Brualla, R., Ho, J., Tagliasacchi, A., Norouzi, M.: Novel View Synthesis with Diffusion Models. CoRR (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 161, + 480, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 161, + 480, + 182 + ], + "spans": [ + { + "bbox": [ + 133, + 161, + 480, + 182 + ], + "type": "text", + "content": "65. Williams, F., Gojcic, Z., Khamis, S., Zorin, D., Bruna, J., Fidler, S., Litany, O.: Neural Fields as Learnable Kernels for 3D Reconstruction. 
In: CVPR (2022)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 480, + 205 + ], + "type": "text", + "content": "66. Wu, C.Y., Johnson, J., Malik, J., Feichtenhofer, C., Gkioxari, G.: Multiview Compressive Coding for 3D Reconstruction. In: CVPR (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "spans": [ + { + "bbox": [ + 132, + 205, + 480, + 226 + ], + "type": "text", + "content": "67. Wu, X., Lao, Y., Jiang, L., Liu, X., Zhao, H.: Point transformer V2: Grouped Vector Attention and Partition-based Pooling. In: NeurIPS (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 227, + 480, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 227, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 227, + 480, + 248 + ], + "type": "text", + "content": "68. Xiang, Y., Schmidt, T., Narayanan, V., Fox, D.: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 249, + 480, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 480, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 480, + 270 + ], + "type": "text", + "content": "69. Xie, S., Girshick, R., Dollar, P., Tu, Z., He, K.: Aggregated Residual Transformations for Deep Neural Networks. CVPR (2017)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 133, + 271, + 480, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 271, + 480, + 303 + ], + "spans": [ + { + "bbox": [ + 133, + 271, + 480, + 303 + ], + "type": "text", + "content": "70. Xu, J., Liu, S., Vahdat, A., Byeon, W., Wang, X., De Mello, S.: ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models. CVPR (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 480, + 335 + ], + "type": "text", + "content": "71. Yan, X., Lin, L., Mitra, N.J., Lischinski, D., Cohen-Or, D., Huang, H.: Shape-Former: Transformer-based Shape Completion via Sparse Representation. In: CVPR (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "spans": [ + { + "bbox": [ + 132, + 336, + 480, + 357 + ], + "type": "text", + "content": "72. Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., Zhou, J.: PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers. In: ICCV (2021)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 358, + 480, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 358, + 480, + 379 + ], + "spans": [ + { + "bbox": [ + 132, + 358, + 480, + 379 + ], + "type": "text", + "content": "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. 
CVPR (2022)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 380, + 480, + 401 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 380, + 480, + 401 + ], + "spans": [ + { + "bbox": [ + 132, + 380, + 480, + 401 + ], + "type": "text", + "content": "74. Zhang, D., Choi, C., Park, I., Kim, Y.M.: Probabilistic Implicit Scene Completion. In: ICLR (2022)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 402, + 480, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 402, + 480, + 435 + ], + "spans": [ + { + "bbox": [ + 132, + 402, + 480, + 435 + ], + "type": "text", + "content": "75. Zhang, H., Zhang, P., Hu, X., Chen, Y.C., Li, L.H., Dai, X., Wang, L., Yuan, L., Hwang, J.N., Gao, J.: GLIPv2: Unifying Localization and Vision-Language Understanding. CoRR (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 435, + 480, + 456 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 435, + 480, + 456 + ], + "spans": [ + { + "bbox": [ + 132, + 435, + 480, + 456 + ], + "type": "text", + "content": "76. Zhang, P., Liu, W., Lei, Y., Lu, H., Yang, X.: Cascaded Context Pyramid for Full-Resolution 3D Semantic Scene Completion. In: ICCV (2019)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 457, + 480, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 457, + 480, + 478 + ], + "spans": [ + { + "bbox": [ + 132, + 457, + 480, + 478 + ], + "type": "text", + "content": "77. Zhao, H., Jiang, L., Jia, J., Torr, P.H., Koltun, V.: Point transformer. In: ICCV (2021)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 479, + 480, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 479, + 480, + 501 + ], + "spans": [ + { + "bbox": [ + 132, + 479, + 480, + 501 + ], + "type": "text", + "content": "78. Zhu, Y., Tian, Y., Mexatas, D., Dollar, P.: Semantic Amodal Segmentation. In: CVPR (2017)" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 223, + 101 + ], + "type": "text", + "content": "S. Iwase et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_content_list.json b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..985c8b2dff7aa03134b1cc83b61c7047d0c41ddb --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_content_list.json @@ -0,0 +1,1742 @@ +[ + { + "type": "text", + "text": "Zero-shot Object Counting with Good Exemplars", + "text_level": 1, + "bbox": [ + 217, + 141, + 782, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Huilin Zhu $^{1,2,3,\\dagger}$ , Jingling Yuan $^{1,2,\\dagger}$ , Zhengwei Yang $^{4,\\dagger}$ , Yu Guo $^{3,5}$ , Zheng Wang $^{4}$ , Xian Zhong $^{1,2,6(\\text{四})}$ , and Shengfeng He $^{3(\\text{四})}$", + "bbox": [ + 240, + 188, + 763, + 220 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1 Sanya Science and Education Innovation Park, Wuhan University of Technology", + "2 Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology" + ], + "bbox": [ + 225, + 231, + 776, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "zhongx@whut.edu.cn", + "bbox": [ + 428, + 275, + 573, + 287 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3 School of Computing and Information Systems, Singapore Management University shengfenghe@smu.edu.sg", + "bbox": [ + 218, + 287, + 781, + 315 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "$^{4}$ School of Computer Science, Wuhan University", + "5 School of Navigation, Wuhan University of Technology", + "$^{6}$ ROSE@EEE, Nanyang Technological University" + ], + "bbox": [ + 313, + 315, + 689, + 357 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Equal Contribution", + "bbox": [ + 436, + 357, + 571, + 369 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://github.com/HopooLinZ/VA-Count", + "bbox": [ + 356, + 371, + 643, + 383 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations. However, a critical challenge in current ZOC methods lies in their inability to identify high-quality exemplars effectively. This deficiency hampers scalability across diverse classes and undermines the development of strong visual associations between the identified classes and image content. To this end, we propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification. The EEM utilizes advanced vision-language pre-taining models to discover potential exemplars, ensuring the framework's adaptability to various classes. Meanwhile, the NSM employs contrastive learning to differentiate between optimal and suboptimal exemplar pairs, reducing the negative effects of erroneous exemplars. 
VA-Count demonstrates its effectiveness and scalability in zero-shot contexts with superior performance on two object counting datasets.", + "bbox": [ + 259, + 416, + 743, + 667 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 215, + 689, + 375, + 704 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In visual monitoring applications, object counting plays a critical role in analyzing images or videos. Traditional methods focus on high precision within predefined object categories, such as crowds [4, 23], vehicles, and cells [1, 34, 39, 40, 44]. Yet, these methods are limited to specific categories, lacking the flexibility to adapt to new, unseen classes. To address these challenges, class-agnostic methods have been developed for scenarios with unseen classes. These methods, including few-shot, reference-free, and zero-shot object counting [12, 32, 35, 46, 47], provide varying levels of independence from predefined object classes.", + "bbox": [ + 212, + 719, + 787, + 840 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/5711ecdb9fded11199d37d21250d794eee6570aa10a0f84b2e75684181b3e47e.jpg", + "image_caption": [ + "Fig. 1: Illustration of class-agnostic object counting methods. (a) Few-shot uses limited annotations for counting. (b) Reference-free quantifies objects without annotations. (c) Zero-shot counts specific classes without annotations, further divided into: (c1) Image-text association, leveraging direct image-text correlations. (c2) Class-related exemplar search, using prototypes to link classes with images. (c3) Our method introduces a detection-driven exemplar discovery to harmonize text with visual representations, distinguishing it from prior methods." + ], + "image_footnote": [], + "bbox": [ + 218, + 146, + 774, + 369 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this context, different strategies are adopted for object counting under varying constraints, as illustrated in Fig. 1. Few-shot counting methods [29,46,47], depicted in Fig. 1(a), method the task as a matching problem, using a small number of annotated bounding boxes to identify and count objects throughout the image. While effective, this method requires fine-tuning with annotations from novel classes, limiting its scalability in real-world surveillance settings due to the sparse availability of annotated bounding boxes. To circumvent the limitations of bounding box annotations, reference-free counting methods are developed [10,19,32,41], as shown in Fig. 1(b). These methods aim to ascertain the total number of objects in an image without relying on specific cues. Nevertheless, the lack of specificity in counting categories makes these methods prone to errors induced by background noise, as they indiscriminately count all visible objects, leading to a lack of control in the counting process.", + "bbox": [ + 212, + 518, + 787, + 717 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In pursuit of more scalable and realistic counting solutions, zero-shot methods [3, 45, 49], illustrated in Fig. 1(c), are introduced. These techniques are designed to count objects from specified classes within an image without prior annotations for those classes, addressing the limitations of both few-shot and reference-free methods by providing enhanced specificity and scalability. These methods can be categorized into two streams. 
The initial method [13, 14] leans on image-text alignment to comprehend object-related correlations without needing physical exemplars. This method enhances scalability for unidentified classes but", + "bbox": [ + 212, + 719, + 787, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 126 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "struggles with adequately representing image details for target classes, especially those with atypical shapes, as demonstrated in Fig. 1(c1). Conversely, the second method [45] concentrates on identifying objects through the discovery of class-relevant exemplars. This is achieved by creating pseudo labels that assess the resemblance between image patches and class-generated prototypes. Nevertheless, this method's reliance on arbitrary patch selection hampers its ability to accurately outline entire objects. Additionally, the absence of direct text-image engagement restricts its scalability, tethered to the pre-defined categories present in the training dataset, as illustrated in Fig. 1(c2).", + "bbox": [ + 212, + 146, + 787, + 282 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As shown in Fig. 1(c3), we introduce the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count aims to create a robust link between specific object categories and their corresponding visual representations, ensuring adaptability to various classes. This framework is anchored by three core principles. First, it prioritizes flexibility and scalability, enabling adaptation to novel classes beyond its initial parameters. Second, it enhances precision in identifying exemplary objects, strengthening the connection between visual depictions and their categories. Third, it devises strategies to reduce the effects of localization errors on counting precision. Building on these principles, VA-Count integrates an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively.", + "bbox": [ + 212, + 282, + 787, + 464 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In detail, the EEM expands VA-Count's capacity to handle various classes through the integration of Vision-Language Pretaining (VLP) models, such as Grounding DINO [20]. These VLP models, trained on extensive datasets, excel in identifying a wide range of classes by defining specific categories. In the context of ZOC, it is essential to select exemplars that each contain precisely one object from among the potential bounding boxes that might encompass varying object quantities. To this end, we deploy a binary filter aimed at rigorously refining the set of candidate exemplars, excluding those that fail to comply with the single-object requirement. This filtration step is pivotal for ensuring the precision and consistency necessary for ZOC.", + "bbox": [ + 212, + 464, + 787, + 616 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Moreover, even when potential exemplars accurately represent single objects, the unintentional inclusion of exemplars not pertaining to the target category poses a persistent problem. This misalignment introduces uncertainty into the learning process that associates exemplars with images. 
To counteract this issue, the NSM module operates as a safeguard by identifying negative exemplars, which are unrelated to the intended category. Contrasting with the EEM, which focuses on selecting ideal samples to foster visual connections with images, the NSM employs samples from irrelevant classes to build these associations, utilizing contrastive learning to differentiate between them. This method of contrastive learning acts as a rectifying mechanism, markedly improving the accuracy and efficiency of the associative learning framework.", + "bbox": [ + 212, + 616, + 787, + 781 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In summary, our contributions are threefold:", + "bbox": [ + 238, + 782, + 563, + 797 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We introduce a Visual Association-based Zero-shot Object Counting framework, which facilitates high-quality exemplar identification for any class", + "bbox": [ + 225, + 809, + 787, + 840 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 732, + 130 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "without needing annotated examples and forges robust visual connections between objects and images.", + "bbox": [ + 240, + 146, + 784, + 176 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "- We propose an exemplar enhancement model leveraging the universal class-agnostic detection capabilities of the Vision-Language Pretaining model for precise exemplar selection, and a Noise Suppression Module to minimize the adverse effects of incorrect samples in visual associative learning.", + "bbox": [ + 225, + 176, + 785, + 236 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "- Extensive experiments conducted on two object counting datasets demonstrate the state-of-the-art accuracy and generalizability of VA-Count, underscoring its notable scalability.", + "bbox": [ + 225, + 238, + 785, + 282 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 306, + 387, + 321 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.1 Class-Specific Object Counting", + "text_level": 1, + "bbox": [ + 215, + 340, + 517, + 356 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Object counting plays a crucial role in public safety, public administration, and the liberation of human labor. Currently, class-specific object counting [22,32, 35,46,47] is the predominant method, which entails identifying specific object categories (such as humans [21,24,31,50,51], vehicles [28,48], fishes [38], cells [40], etc.) leveraging object detection or density estimation and counting accordingly. While these methods show excellence within close-set scenarios with a fixed number of categories, transferring them to arbitrary categories poses challenges. 
Introducing novel categories necessitates retraining or fine-tuning a counting model with new data, which limits their applicability in real scenarios.", + "bbox": [ + 212, + 366, + 787, + 503 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.2 Class-Agnostic Object Counting", + "text_level": 1, + "bbox": [ + 215, + 526, + 527, + 542 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Class-agnostic object counting [8, 26, 29, 36, 42] is proposed for scenarios with less data, which can be divided into few-shot and zero-shot depending on the annotation usage. Specifically, GMN [26] initially frames the class-agnostic counting task as a matching task, leading to FamNet [33], which implements ROI Pooling for broad applicability across FSC-147. As multi-class datasets emerged, the focus shifts towards few-shot methods, where LOCA [41] enhances feature representation and exemplar adaptation; and CounTR [19] utilizes transformers for scalable counting with a two-stage training model. BMNet [?] innovates with a bilinear matching network for refined object similarity assessments. In the realm of zero-shot methods, which are categorized into two types, methods like ZSC [45] leverage textual inputs to generate prototypes and filter image patches, thus reducing the need for extensive labeling, albeit with fixed generators that limit scalability. CLIP-Count [13] employs CLIP to encode text and images separately, establishing semantic associations crucial for intuitive counting. VL-Count [14] takes this further by enhancing CLIP's text-image association learning specifically for object counting. Additionally, PseCo [12] introduces a SAM-based multi-task framework that achieves segmentation, dot mapping, and detection on counting data, offering broad application prospects but also necessitating greater computational resources.", + "bbox": [ + 212, + 551, + 787, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 127 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/7deda26ca26686abed708e110485281bf583700896371b1c67045eaac55f7beb.jpg", + "image_caption": [ + "Fig. 2: Overview of the proposed method. Proposed method focuses on two main elements: the Exemplar Enhancement Module (EEM) for improving exemplar quality through a patch selection integrated with Grounding DINO [20], and the Noise Suppression Module (NSM) that distinguishes between positive and negative class samples using density maps. It employs a Contrastive Loss function to refine the precision in identifying target class objects from others in an image." + ], + "image_footnote": [], + "bbox": [ + 222, + 146, + 782, + 300 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "2.3 Vision-Language Pretaining Model", + "text_level": 1, + "bbox": [ + 215, + 440, + 549, + 455 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In recent years, Vision-Language Pretaining (VLP) methods have proven pivotal in enhancing scene understanding and representation learning capabilities. Their adaptability makes them applicable across a wide range of downstream tasks [2,5-7,9,18,27,37,43]. CLIP [30] segregates vision and language features, aligning them through contrastive learning. BLIP [17] introduces a multimodal mixture of encoders and decoders to align different modalities. 
Building upon this, BLIP2 [16] combines specialized vision and language models to enhance multimodal understanding capabilities through bootstrapping. Grounding DINO [20] incorporates language into close-set detection, improving generalization for open-set detection. The Segment Anything Model (SAM) [15] is based on a prompt-based segmentation task, allowing flexible prompts for zero-shot capabilities across diverse tasks. VLP models, known for their robust multimodal comprehension and scene understanding, significantly advance deep learning and facilitate learning of unknown classes.", + "bbox": [ + 212, + 467, + 787, + 679 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Proposed Method", + "text_level": 1, + "bbox": [ + 215, + 703, + 426, + 720 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Formula Definition", + "text_level": 1, + "bbox": [ + 215, + 737, + 418, + 750 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As shown in Fig. 2, we introduce a Visual Association-based Zero-shot Object Counting framework (VA-Count) focusing on zero-shot, class-agnostic object counting. The categories among the training set $C_{\\mathrm{train}}$ , validation set $C_{\\mathrm{val}}$ , and testing set $C_{\\mathrm{test}}$ are distinguished, ensuring no overlap among them ( $C_{\\mathrm{train}} \\cap C_{\\mathrm{val}} \\cap C_{\\mathrm{test}} = \\emptyset$ ). VA-Count generates density maps $D$ from input images $I$ for", + "bbox": [ + 212, + 763, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 732, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "code", + "sub_type": "algorithm", + "code_caption": [ + "Algorithm 1 Grounding DINO-Guided Exemplar Enhancement Module" + ], + "code_body": "1: I: Input image \n2: $T^p$ : Positive text label (\\{specific class\\}), $T^n$ : Negative text label (\"object\") \n3: $B^p$ : Bounding boxes for positive samples, $S^p$ : Logits for positive samples \n4: $B^n$ : Bounding boxes for negative samples, $S^n$ : Logits for negative samples \n5: $\\tau_l$ : Logits threshold, $\\tau_{\\mathrm{iou}}$ : IoU threshold \n6: M(\\cdot): Single Object Classifier \n7: Input: I, $T^p$ , $T^n$ \n8: Output: $\\mathcal{O}^p = \\{(B^p, S^p)\\}$ : Positive outputs, $\\mathcal{O}^n = \\{(B^n, S^n)\\}$ : Negative outputs \n9: Grounding DINO Process: \n10: F ← ExtractFeatures(I) \n11: $S^p, B^p \\gets \\text{Detect}(F, T^p)$ , filter by $\\tau_l$ ; and $S^n, B^n \\gets \\text{Detect}(F, T^n)$ , filter by $\\tau_l$ \n12: Dedduplication and Filtering: \n13: Initialize $B_{\\text{filtered}}^n, B_{\\text{new}}^p, B_{\\text{new}}^n$ \n14: for $b^n$ in $B^n$ do ▷ Remove duplicates \n15: if $b^n$ is unique in $B^n$ with IoU < $\\tau_{\\mathrm{iou}}$ then \n16: $B_{\\text{filtered}}^n$ .append $(b^n)$ \n17: end if \n18: end for \n19: for all $b \\in B^p \\cup B_{\\text{filtered}}^n$ do ▷ Single object filter \n20: if $M(b)$ is true then \n21: Add $b$ to the appropriate new set \n22: end if \n23: end for \n24: Update $\\mathcal{O}^p, \\mathcal{O}^n$ with new sets", + "bbox": [ + 215, + 162, + 787, + 502 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "any given class $C$ , and counts objects using these density maps. 
Specifically, VA-Count utilizes pseudo-exemplars $E^p$ to enhance image-text associations, acting as a bridge to establish robust visual correlations between $E^p$ and the images $I$ . To extract exemplars from images, we propose the use of two key modules: the Exemplar Enhancement Module (EEM) (cf. Sec. 3.2) and the Noise Suppression Module (NSM) (cf. Sec. 3.3).", + "bbox": [ + 212, + 540, + 789, + 632 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To alleviate the noise introduced by objects belonging to other classes on the target objects within images, the EEM and NSM are simultaneously used to obtain positive exemplars $B^{p}$ and negative exemplars $B^{n}$ . The EEM consists of Grounding DINO $G(\\cdot)$ and a filtering module $\\varPhi(\\cdot)$ . There are different filtering modules for positive and negative samples $\\varPhi^{p}(\\cdot)$ and $\\varPhi^{n}(\\cdot)$ respectively. $\\varPhi^{p}(\\cdot)$ is a binary classifier, while $\\varPhi^{n}(\\cdot)$ consists of a binary classifier and a deduplication module. The two kinds of pseudo-exemplars and images are then fed into the Counter $\\Gamma(\\cdot)$ simultaneously for correlation learning. $\\Gamma(\\cdot)$ comprises an image encoder, correlation module, and decoder. The optimization goal of this paper is as follows, where $\\mu(\\cdot)$ denotes the similarity, and $D^{p}, D^{n}, D^{g}$ represent the density maps for positive, negative, and ground truth respectively:", + "bbox": [ + 212, + 636, + 789, + 803 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nD ^ {p} = \\Gamma \\left(\\Phi^ {p} \\left(G \\left(I, T ^ {p}\\right)\\right)\\right), \\quad D ^ {n} = \\Gamma \\left(\\Phi^ {n} \\left(G \\left(I, T ^ {n}\\right)\\right)\\right), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 823, + 787, + 842 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 127 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\text {Objective} = \\left\\{ \\begin{array}{l} \\max \\mu \\left(D ^ {p}, D ^ {g}\\right), \\\\ \\min \\mu \\left(D ^ {n}, D ^ {g}\\right). \\end{array} \\right. \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 383, + 157, + 787, + 200 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3.2 Exemplar Enhancement Module", + "text_level": 1, + "bbox": [ + 215, + 217, + 529, + 233 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We introduce an Exemplar Enhancement Module (EEM) for detecting objects within images and refining the detected objects as target exemplars. The workflow of the EEM is outlined in Algorithm 1. The EEM ensures VA-Count's scalability to arbitrary classes by incorporating Vision-Language Pre-training (VLP) models (e.g., Grounding DINO [20]) for potential exemplar discovery, renowned for its efficiency in feature extraction and precision in object localization. Furthermore, the EEM involves meticulously discovering and refining potential exemplars to enhance the quality of positive and negative exemplars for precise object counting.", + "bbox": [ + 212, + 241, + 787, + 364 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Grounding DINO-Guided Box Selection. 
Given the training set input image $I_{i}$ , accompanied by predefined sets of positive text labels $T_{i}^{p} = \\{C_{i}\\}$ and negative text labels $T_{i}^{n} = \\text{\"object\"}$ , where $C_i$ represents the specified target class for the input image and $T_{i}^{n}$ is fixed as \"object\". These labels correspond to the target objects and the noise objects, respectively. Taking positive exemplar discovery as an example, Grounding DINO assigns logits value $S_{i}^{p} = \\{s_{i,j}\\}_{j=0}^{m}$ to all candidate bounding boxes $B_{i}^{p} = \\{b_{i,j}\\}_{j=0}^{m}$ based on $T_{i}^{p}$ , $m$ denotes the number of candidate boxes within the image. For the $j$ -th box in the $i$ -th image, $s_{i,j}$ represents the likelihood that $b_{i,j}$ belongs to the specified class text $C_i$ . The output of positive candidate boxes $\\mathcal{O}^p$ can be formulated as:", + "bbox": [ + 212, + 364, + 789, + 513 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {O} ^ {p} = \\{G (I _ {i}, T _ {i} ^ {p}) \\} _ {i = 0} ^ {k} = \\{(B _ {i} ^ {p}, \\mathcal {S} _ {i} ^ {p}) \\} _ {i = 0} ^ {k}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 367, + 525, + 787, + 545 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $k$ denotes the number of images in the training set.", + "bbox": [ + 212, + 553, + 632, + 568 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Negative Samples and Dedduplication. To minimize the impact of irrelevant classes on the counting accuracy of the target object, we adopt a filtering method for negative samples. Initially, we obtain all candidate bounding boxes for objects within each image. Similar to Eq. (3), the negative candidate boxes $\\mathcal{O}^n$ without filtering can be formulated as:", + "bbox": [ + 212, + 569, + 787, + 643 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {O} ^ {n} = \\left\\{G \\left(I _ {i}, T _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k} = \\left\\{\\left(B _ {i} ^ {n}, \\mathcal {S} _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 361, + 654, + 787, + 674 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where for each image $I_{i}$ , the term $T_{i}^{n} =$ \"object\" is employed to identify and generate all bounding boxes $B^{n}$ within that image. This method guarantees the detection of bounding boxes for all objects present in the image.", + "bbox": [ + 212, + 683, + 787, + 728 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Then, for each image $I_{i}$ , we assess each bounding box $b^{n}$ from the negative candidate boxes $B^n$ , and each $b^{n}$ is evaluated to determine its uniqueness in relation to the boxes within $B^{p}$ . 
Specifically, a bounding box is deemed unique if its overlap with any box in $B^{p}$ is minimal, based on the Intersection over Union (IoU) threshold $\\tau_{\\mathrm{iou}}$ , which can be formulated as:", + "bbox": [ + 212, + 729, + 787, + 804 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {I o U} \\left(B ^ {p}, B ^ {n}\\right) = \\frac {B ^ {p} \\cap B ^ {n}}{B ^ {p} \\cup B ^ {n}}, \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 408, + 815, + 787, + 844 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 732, + 130 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 785, + 126 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $B^p \\cap B^n$ and $B^p \\cup B^n$ denotes the intersection and union between positive $B^p$ and negative $B^n$ boxes. Unique negative boxes $b^n$ are then included in the final set $B_{\\text{filtered}}^n$ of negative exemplars.", + "bbox": [ + 212, + 146, + 782, + 191 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Single Object Exemplar Filtering. While DINO excels at identifying targets for arbitrary classes, each candidate box does not always contain a single object because boxes encompassing multiple objects may carry higher confidence levels than boxes of single objects. To ensure the integrity of the visual connections established with images, it's imperative to select exemplars that exclusively contain a single object. To achieve this, we treat singular discrimination as a binary classification task, using the binary classifier $\\delta(\\cdot)$ to refine candidate bounding boxes, ensuring each exemplar contains a single object.", + "bbox": [ + 212, + 191, + 784, + 313 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As shown in Fig. 3, $\\delta(\\cdot)$ leverages a frozen Clip-vit backbone, integrated with a trainable Feed-Forward Network (FFN) for binary classification tasks. Training data is meticulously curated, consisting of samples of single and multiple objects. The labeled single-object samples are the exemplars in the training sets, and the labeled multi-object samples consist of randomly cropped patches and the entire image. To ensure that the class-agnostic counting is maintained, the training data is split for training and evaluation with disjoint samples, ensuring robust exemplar assessment. The classification results for positive candidate boxes $b^{p} \\in B^{p}$ can be formulated as:", + "bbox": [ + 212, + 313, + 488, + 599 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1d2a5d26f6d67a0f1c228b374e846fa3da98af34ff3c22f4d036d8bd4fce9f35.jpg", + "image_caption": [ + "Fig. 3: Illustration of the single object exemplar filtering with a frozen Clip-vit encoder and a trainable FFN to distinguish single from multiple objects." 
+ ], + "image_footnote": [], + "bbox": [ + 503, + 343, + 777, + 531 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\delta \\left(b ^ {p}\\right) = \\operatorname {F F N} \\left(\\operatorname {C l i p - v i t} \\left(b ^ {p}\\right)\\right), \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 238, + 612, + 488, + 628 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "and the filtered set $B_{\\mathrm{new}}$ contains bounding boxes $b^{p}$ that are conditioned on the classification results, which can be formulated as:", + "bbox": [ + 212, + 641, + 784, + 671 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\nB _ {\\text {n e w}} ^ {p} \\leftarrow B _ {\\text {n e w}} ^ {p} \\cup \\{b | \\delta (b ^ {p}) = 1 \\}, \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 388, + 684, + 787, + 702 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "where the symbol $\\leftarrow$ signifies the update operation for the set $B_{\\mathrm{new}}^p$ , and the set builder notation $\\{b|\\delta(b^p) = 1\\}$ represents the collection of bounding boxes for which $\\delta(b^p)$ predicts a positive outcome.", + "bbox": [ + 212, + 713, + 784, + 758 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.3 Noise Suppression Module", + "text_level": 1, + "bbox": [ + 214, + 782, + 480, + 797 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In the context of the EEM, text-image alignment is redefined as object-image alignment by identifying positive $B^{p}$ and negative $B^{n}$ exemplars. We delves", + "bbox": [ + 212, + 809, + 784, + 839 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 127 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "into generating positive and negative density maps and alleviating the noise introduced by the negative exemplars.", + "bbox": [ + 212, + 146, + 782, + 176 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Initially, for each image $I_{i}$ , we select the top three patches with the highest $S^p$ from the positive candidate boxes $B_{\\mathrm{new}}^p$ as positive exemplars $E^{p} = \\{b_{i}^{p}\\}_{i = 1}^{k}$ and the top three patches with the highest $S^n$ from the negative candidate boxes $B_{\\mathrm{filtered}}^n$ as negative exemplars $E^n = \\{b_i^n\\}_{i = 1}^k$ . Following CounTR [19], we build the Counter $\\Gamma (\\cdot)$ with feature interaction to fuse information from both image encoders. 
Specifically, we merge encoder outputs by using image features as queries and the linear projections of sample features as keys and values, ensuring dimension consistency with image features, in accordance with the self-similarity principle in counting, which can be formulated as:", + "bbox": [ + 212, + 176, + 787, + 313 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {F} _ {\\text {f u s e}} = \\Gamma_ {\\text {f u s e}} \\left(\\boldsymbol {F} _ {\\text {q u e r y}}, \\boldsymbol {W} ^ {k} \\boldsymbol {F} _ {\\text {k e y}}, \\boldsymbol {W} ^ {v} \\boldsymbol {F} _ {\\text {v a l u e}}\\right) \\in \\mathbb {R} ^ {M \\times D}, \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 321, + 318, + 787, + 335 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "where $\\pmb{F}$ denotes the feature representations, $\\pmb{W}^k$ and $\\pmb{W}^v$ are the learnable weights for keys and values from $\\{E^p,E^n\\}$ , $M$ denotes the number of tokens, $D$ is the feature dimensionality, and $\\mathbb{R}^{M\\times D}$ the space of the feature matrix. The decoder outputs the density heatmap after up-sampling the fused features to the input image's dimensions:", + "bbox": [ + 212, + 340, + 784, + 416 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\nD _ {i} ^ {n} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {n}\\right), \\quad D _ {i} ^ {p} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {p}\\right). \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 343, + 422, + 787, + 440 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Contrastive Learning and Loss Functions. The objective of the NSM in VA-Count is to reduce the impact of noise in images on counting performance while ensuring the accuracy of density map predictions. To achieve this, a contrastive loss $\\mathcal{L}_C$ is proposed, using specified class density maps as positive samples and non-specified class density maps as negative samples. This involves maximizing the similarity between positive density maps and the ground-truth density maps and minimizing the similarity between negative density maps and the ground-truth density maps, as detailed in Eq. (10). To guide density map generation, we use the loss method from CounTR [19].", + "bbox": [ + 212, + 445, + 787, + 580 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The density loss $\\mathcal{L}_D$ is calculated as the mean squared error between each pixel of the density map $D_i^p$ generated for positive samples and the ground-truth density map $D_i^g$ , as shown in Eq. (11). 
$H$ and $W$ respectively denote the height and width of the density map.", + "bbox": [ + 212, + 580, + 787, + 641 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {C} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = - \\log \\frac {\\exp \\operatorname {sim} \\left(D ^ {p} , D ^ {g}\\right)}{\\exp \\operatorname {sim} \\left(D ^ {p} , D ^ {g}\\right) + \\exp \\operatorname {sim} \\left(D ^ {n} , D ^ {g}\\right)}, \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 281, + 647, + 787, + 679 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {D} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}\\right) = \\frac {1}{H W} \\sum \\left\\| D _ {i} ^ {p} - D _ {i} ^ {g} \\right\\| _ {2} ^ {2}, \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 367, + 688, + 787, + 715 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\text {total}} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = \\mathcal {L} _ {C} + \\mathcal {L} _ {D}. \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 388, + 729, + 785, + 744 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4 Experimental Result", + "text_level": 1, + "bbox": [ + 214, + 762, + 452, + 779 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Datasets and Implementation Details", + "text_level": 1, + "bbox": [ + 214, + 789, + 570, + 804 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Datasets. FSC-147 [10] dataset is tailored for class-agnostic counting with 6,135 images and 147 classes. Unique for its non-overlapping class subsets, it", + "bbox": [ + 212, + 809, + 785, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 732, + 130 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "provides class labels and dot annotations for zero-shot counting using textual prompts.", + "bbox": [ + 212, + 146, + 782, + 175 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "CARPK [11] dataset offers a bird's-eye view of 89,777 cars in 1,448 parking lot images, testing the method's cross-dataset transferability and adaptability.", + "bbox": [ + 212, + 176, + 782, + 205 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Evaluation Metrics. Following previous class-agnostic object counting methods [29], the evaluation metrics employed are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). MAE is widely used to assess model accuracy, while RMSE evaluates model robustness.", + "bbox": [ + 212, + 207, + 784, + 263 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Exemplar Enhancement Module uses Grounding DINO $^7$ for bounding box proposals, setting the threshold $\\tau_{l}$ to 0.02. For negative sample filtering, the IoU threshold $\\tau_{\\mathrm{iou}}$ is set to 0.5. The single object classifier employs CLIP ViT-B/16 $^8$ as its backbone, with an FFN comprising two linear layers, trained over 100 epochs at a learning rate of e-4. The dataset is partitioned in a 7:3 ratio", + "bbox": [ + 212, + 267, + 784, + 339 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Noise Suppression Module follows CounTR's [19] two-stage training: MAE pretraining and AdamW [25]-optimized fine-tuning. 
It is trained on FSC-147 with a learning rate of $10^{-5}$ , batch size of 8, on an NVIDIA RTX L40 GPU.", + "bbox": [ + 212, + 343, + 784, + 385 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.2 Comparison with the State-of-the-Arts", + "text_level": 1, + "bbox": [ + 214, + 412, + 581, + 428 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "For the performance evaluation of our method, it is benchmarked against a variety of state-of-the-art few-shot and zero-shot counting methods on FSC-147. Additionally, we evaluate our method in comparison with class-specific counting models on CARPK.", + "bbox": [ + 212, + 438, + 784, + 496 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Quantitative Result on FSC-147. We evaluate the effectiveness of VA-Count on FSC-147, comparing it with state-of-the-art counting methods as detailed in Tab. 1. Our method surpasses the exemplar-discovery method ZSC [45], demonstrating that the exemplars found by VA-Count are of higher quality. VA-Count achieves the best performance in MAE and second in RMSE, validating our method's effectiveness. Despite being second in RMSE, it still outperforms ZSC. In comparison with CLIP-Count [13], VA-Count, due to some noise introduction, has a few inferior samples but, overall, surpasses CLIP-Count in performance.", + "bbox": [ + 212, + 500, + 784, + 619 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Quantitative Result on CARPK. In Tab. 2, VA-Count's cross-domain and non-cross-domain performance on CARPK are compared with previous methods. In the zero-shot group, VA-Count achieves the best performance, particularly with its cross-domain performance methoding that of the few-shot group, demonstrating its outstanding transferability. It is worth noting that employing $\\varPhi(\\cdot)$ significantly reduces errors compared to directly using the Grounding DINO [20] method. In the absence of any training data, VA-Count outperforms FamNet [33] in the cross-domain group.", + "bbox": [ + 212, + 619, + 784, + 739 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Ablation Study. We conduct both quantitative and qualitative analyses on the contributions of each component in our proposed VA-Count, which includes the Grounding-DINO candidate box extraction and filtering module. The quantitative outcomes are presented in Tab. 3. Using only Grounding DINO method", + "bbox": [ + 212, + 741, + 782, + 800 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 127 + ], + "page_idx": 9 + }, + { + "type": "page_footnote", + "text": "7 https://github.com/IDEA-Research/GroundingDINO", + "bbox": [ + 217, + 810, + 589, + 824 + ], + "page_idx": 9 + }, + { + "type": "page_footnote", + "text": "8 https://github.com/openai/CLIP", + "bbox": [ + 218, + 824, + 455, + 839 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/bc800a4b540758ef1feb6691023a357ec2f832914e186e0cfa16f7c02cd017e8.jpg", + "table_caption": [ + "Table 1: Quantitative results of our VA-Count and other state-of-the-art competitors on FSC-147. F-S, R-F, and Z-S are abbreviated for Few-shot, Reference-free, and Zero-shot settings. Best results for each scheme and the second-best results at the zero-shot setting are highlighted in bold and underline." + ], + "table_footnote": [], + "table_body": "
SchemeMethodVenueShotVal SetTest SetAvg
MAERMSEMAERMSEMAERMSE
F-SFamNet [33]CVPR'21324.3270.9422.56101.5423.4486.24
CFOCNet [46]WACV'21321.1961.4122.10112.7121.6587.06
CounTR [19]BMVC'22313.1349.8311.9591.2312.5470.53
LOCA [41]ICCV'23310.2432.5610.9756.9710.6144.77
SAM [36]WACV'243--19.95132.1619.95132.16
PseCo [12]CVPR'24315.3168.3413.05112.8614.1890.60
CACViT [42]AAAI'24310.6337.959.1348.969.8843.46
FamNet [33]CVPR'21126.0577.0126.76110.9526.4193.98
R-FFamNet [33]CVPR'21032.1598.7532.27131.4632.21115.11
RepRPN-C [32]ACCV'22029.2498.1126.66129.1127.95113.61
CounTR [19]BMVC'22018.0771.8414.71106.8716.3989.36
RCC [10]CVPR'23017.4958.8117.12104.5317.3181.67
LOCA [41]ICCV'23017.4354.9616.22103.9616.8379.46
Z-SZSC [45]CVPR'23026.9388.6322.09115.1724.51101.90
CLIP-Count [13]MM'23018.7961.1817.78106.6218.28583.90
PseCo [12]CVPR'24023.90100.3316.58129.7720.24115.05
VA-CountOurs017.8773.2217.88129.3117.87101.26
", + "bbox": [ + 217, + 212, + 785, + 518 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "(first row) achieves an error of 52.82 without training, which, although not as accurate as regression-based methods, ensures the detection of relevant objects. Performance improves slightly after adding a single-object classification filter (second row). With training based on $\\mathcal{L}_D$ , it already meets counting requirements. In Tab. 2, we compare using Grounding DINO alone and with a single-object classification filter on CARPK (last three rows). Our binary classifier significantly improves performance, reducing MAE and RMSE by about 10.", + "bbox": [ + 215, + 547, + 787, + 654 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.3 Qualitative Analysis", + "text_level": 1, + "bbox": [ + 217, + 676, + 428, + 691 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Analysis of the zero-shot performance. To further ensure the effectiveness of the proposed VA-Count framework, we visualize qualitative results in Fig. 4. We provide a side-by-side comparison of the proposed VA-Count against the few-shot counting method [19]. VA-Count achieves a remarkable resemblance to the ground truth, showcasing the method's nuanced understanding of object boundaries and densities and being less affected by the background noise. Specifically, the first row shows there exists a golden egg drowned by white eggs. The few-shot method struggled with this nuanced differentiation, failing to recognize the golden egg distinctly. In the second row, strawberries near flowers also confound the few-shot", + "bbox": [ + 215, + 704, + 787, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 400, + 114, + 730, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 116, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/1fcf8d1de462b532980f2f158bcfeed57a47bc7acb0fe35ab00b3046f4f87284.jpg", + "table_caption": [ + "Table 2: Quantitative results of our VA-Count and other state-of-the-art competitors on CARPK. $\\varPhi(\\cdot)$ denotes the single-object classification filter. C and F denote CARPK and FSC-147, respectively." + ], + "table_footnote": [], + "table_body": "
Methods | Venue | Shot | C → C MAE | C → C RMSE | F → C MAE | F → C RMSE
FamNet [33] | CVPR'21 | 3 | 18.19 | 33.66 | 28.84 | 44.47
GMN [26] | CVPR'21 | 3 | 7.48 | 9.90 | - | -
BMNet+ [35] | CVPR'22 | 3 | 5.76 | 7.83 | 10.44 | 13.77
CounTR [19] | BMVC'22 | 3 | 5.75 | 7.45 | - | -
RCC [10] | CVPR'23 | 0 | 9.21 | 11.33 | 21.38 | 26.61
CLIP-Count [13] | MM'23 | 0 | - | - | 11.96 | 16.61
Grounding DINO [20] | arXiv'24 | 0 | 29.72 | 31.60 | 29.72 | 31.60
Grounding DINO + Φ(·) | Ours | 0 | 18.54 | 21.71 | 18.54 | 21.71
VA-Count | Ours | 0 | 8.75 | 10.30 | 10.63 | 13.20
", + "bbox": [ + 217, + 198, + 781, + 380 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/85e4542d8aae4b32c2a561826655ca94e7b491d7244e2d3da03e6fab81696126.jpg", + "table_caption": [ + "Table 3: Ablation study on each component's contribution to the final results on FSC-147. We demonstrate the effectiveness of two parts of our framework and two types of loss: $G(\\cdot)$ for Grounding DINO, $\\varPhi(\\cdot)$ for the single-object filtering section, the density loss $\\mathcal{L}_D$ , and the contrastive loss $\\mathcal{L}_C$ ." + ], + "table_footnote": [], + "table_body": "
G(·) | Φ(·) | $\mathcal{L}_D$ | $\mathcal{L}_C$ | Val MAE | Val RMSE | Test MAE | Test RMSE
✓ |  |  |  | 52.82 | 134.49 | 54.48 | 159.30
✓ | ✓ |  |  | 52.12 | 135.29 | 54.27 | 159.76
✓ | ✓ | ✓ |  | 19.63 | 73.94 | 18.93 | 116.65
✓ | ✓ | ✓ | ✓ | 17.87 | 73.22 | 17.88 | 129.31
", + "bbox": [ + 233, + 462, + 764, + 569 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "method. These examples emphasize VA-Count's superior ability to identify and differentiate between objects with minor differences. The third row presents a challenging scenario with dense keys partially occluded by hands. This situation tests the model's ability to count tiny, closely situated objects under partial occlusion, showcasing VA-Count's advanced capability to accurately identify and count such challenging objects, which is significantly better than the few-shot method. These results highlight the impact of exemplar selection and the incorporation of negative patches in VA-Count, significantly enhancing its object counting and localization capabilities, and showcasing its innovation in zero-shot object counting.", + "bbox": [ + 212, + 597, + 787, + 748 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Analysis of Positive and Negative Exemplars. To make our experiment more straightforward, we also conduct a qualitative analysis of the patch selection. As shown in Fig. 5 and Fig. 6, we illustrate selected positive and negative patches for various categories under a zero-shot setting. Taking a closer look at the positive patches for categories such as crab cakes and green peas, the results show a high degree of accuracy in the model's ability to isolate and highlight the regions", + "bbox": [ + 212, + 750, + 787, + 840 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 127 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/1c22ff0b32e2acf775447d31e4fee0243f2bb657543e369a64bc9e81e7b23d7f.jpg", + "image_caption": [ + "Fig. 4: Illustration of heatmaps compared with few-shot method [19] on FSC-147. Predicted density map is overlaid on the original RGB image. (Best viewed in zoom in)" + ], + "image_footnote": [], + "bbox": [ + 222, + 148, + 782, + 330 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/0d246025080979b318b5e0ba1f9fec8f92f20ee187876a5bfe402eea4ee12e6f.jpg", + "image_caption": [ + "Fig. 5: Illustration of the positive (Pos.) and negative (Neg.) exemplars on FSC-147." + ], + "image_footnote": [], + "bbox": [ + 222, + 400, + 782, + 592 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "containing the target objects. This precision underscores the effectiveness of VA-Count framework in discerning relevant features amidst complex backgrounds, affirming its robustness in the exemplar discovery. Negative patches, especially from categories like strawberries and crab cakes, highlight the model's challenges with visually similar or overlapping areas not in the target category, underscoring the need for improved discriminative abilities. This analysis underscores our paper's impact on zero-shot object counting and the importance of refining visual learning and exemplar selection for future advancements.", + "bbox": [ + 212, + 655, + 787, + 776 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effective of the object exemplar filter. The effectiveness of the object exemplar filter is further evaluated by comparing visualization grounding results with and without the filter. Fig. 7 illustrates this comparison for the category of cars on CARPK. 
Images without the filter show multiple cars within a single", + "bbox": [ + 212, + 779, + 789, + 840 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 732, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/334ca83d047f24e24263af78e29ddb2f2bdca53ef285741f6b906daddddca24f.jpg", + "image_caption": [ + "Pos." + ], + "image_footnote": [], + "bbox": [ + 218, + 146, + 356, + 220 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/64e0ddca09e4fc9b8f2d65940b5aa0dee89f59057a1abdfa9c71184970ca651e.jpg", + "image_caption": [ + "Fig. 6: Illustration of the final positive (Pos.) and negative (Neg.) exemplars for images on CARPK." + ], + "image_footnote": [], + "bbox": [ + 382, + 156, + 496, + 222 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/b9507ee3d3fee208ef7dbd4e764d2797cea354905eb16a6fa4668b87b7d3320c.jpg", + "image_caption": [ + "Pos." + ], + "image_footnote": [], + "bbox": [ + 500, + 146, + 643, + 220 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/764fe6604ec99ed48edb1936e1211935050102981caf9145a9ffbaf6ef4139d3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 669, + 156, + 782, + 220 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/9eba14f7ed8cb60de40d8c79a2ece9b8e62a41c3f0bca7c9f06b48782abf4ed4.jpg", + "image_caption": [ + "Fig. 7: Illustration of candidate boxes before and after exemplar filter for images on CARPK." + ], + "image_footnote": [], + "bbox": [ + 222, + 284, + 408, + 391 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/ef39bb910e527418d63f3ccf849fa6cfb8c1d02b9a605299bdd35f823d0adf06.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 410, + 284, + 594, + 388 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/6ea60153dbf21e8f0398f553ecb771c32942188b4fd53e3ba739bb3726f61544.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 596, + 308, + 781, + 388 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "bounding box, indicating Grounding DINO's [20] inability to isolate individual objects effectively. Conversely, images with the filter applied demonstrate a significant improvement, with bounding boxes accurately encompassing single cars. This clear distinction highlights the binary classifier's crucial role in ensuring precise object counting by enforcing the single-object criterion within each exemplar, validating the filter's contribution to enhancing the model's accuracy and reliability in VA-Count framework.", + "bbox": [ + 212, + 464, + 787, + 571 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 215, + 603, + 359, + 619 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "This paper addresses the challenges in class-agnostic object counting by introducing the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count effectively balances the need for scalability across arbitrary classes with the establishment of robust visual connections, overcoming the limitations of existing Zero-shot Object Counting (ZOC) methods. 
VA-Count comprises an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively. The EEM utilizes advanced Vision-Language Pre-taining models like Grounding DINO for scalable exemplar discovery, while the NSM mitigates the impact of erroneous exemplars through contrastive learning. VA-Count shows promise in zero-shot counting, performing well on three datasets and offering precise visual associations and scalability. In the future, we will explore and better utilize advanced visual language models.", + "bbox": [ + 212, + 643, + 789, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 126 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 215, + 143, + 392, + 162 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This work was supported in part by the National Natural Science Foundation of China under Grant 62271361, the Sanya Yazhou Bay Science and Technology City Administration scientific research project under Grant 2022KF0021, the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020097, and the National Research Foundation Singapore under the AI Singapore Programme under Grant AISG3-GV-2023-011.", + "bbox": [ + 212, + 176, + 787, + 268 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 215, + 292, + 323, + 308 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Arteta, C., Lempitsky, V.S., Zisserman, A.: Counting in the wild. In: Proc. Eur. Conf. Comput. Vis. pp. 483-498 (2016)", + "2. Bai, Y., Cao, M., Gao, D., Cao, Z., Chen, C., Fan, Z., Nie, L., Zhang, M.: RaSa: Relation and sensitivity aware representation learning for text-based person search. In: Proc. Int. Joint Conf. Artif. Intell. pp. 555-563 (2023)", + "3. Bansal, A., Sikka, K., Sharma, G., Chellappa, R., Divakaran, A.: Zero-shot object detection. In: Proc. Eur. Conf. Comput. Vis. pp. 397-414 (2018)", + "4. Chai, L., Liu, Y., Liu, W., Han, G., He, S.: CrowdGAN: Identity-free interactive crowd video generation and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 44(6), 2856-2871 (2022)", + "5. Chen, C., Ye, M., Jiang, D.: Towards modality-agnostic person re-identification with descriptive query. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15128-15137 (2023)", + "6. Dou, Z., Kamath, A., Gan, Z., Zhang, P., Wang, J., Li, L., Liu, Z., Liu, C., LeCun, Y., Peng, N., Gao, J., Wang, L.: Coarse-to-fine vision-language pre-training with fusion in the backbone. In: Adv. Neural Inf. Process. Syst. pp. 32942-32956 (2022)", + "7. Du, Y., Wei, F., Zhang, Z., Shi, M., Gao, Y., Li, G.: Learning to prompt for open-vocabulary object detection with vision-language model. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 14084-14093 (2022)", + "8. Gong, S., Zhang, S., Yang, J., Dai, D., Schiele, B.: Class-agnostic object counting robust to intraclass diversity. In: Proc. Eur. Conf. Comput. Vis. pp. 388-403 (2022)", + "9. He, S., Chen, W., Wang, K., Luo, H., Wang, F., Jiang, W., Ding, H.: Region generation and assessment network for occluded person re-identification. IEEE Trans. Inf. Forensics Secur. 19, 120–132 (2023)", + "0. 
Hobley, M., Prisacariu, V.: Learning to count anything: Reference-less class-agnostic counting with weak supervision. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (2023)", + "1. Hsieh, M., Lin, Y., Hsu, W.H.: Drone-based object counting by spatially regularized regional proposal network. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 4165-4173 (2017)", + "2. Huang, Z., Dai, M., Zhang, Y., Zhang, J., Shan, H.: Point, segment and count: A generalized framework for object counting. arXiv:2311.12386 (2023)", + "3. Jiang, R., Liu, L., Chen, C.: CLIP-Count: Towards text-guided zero-shot object counting. In: Proc. ACM Multimedia. pp. 4535-4545 (2023)", + "4. Kang, S., Moon, W., Kim, E., Heo, J.: VLCounter: Text-aware visual representation for zero-shot object counting. In: Proc. AAAI Conf. Artif. Intell. pp. 2714-2722 (2024)" + ], + "bbox": [ + 225, + 324, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 730, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "15. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W., Dollár, P., Girshick, R.B.: Segment anything. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 3992-4003 (2023)", + "16. Li, J., Li, D., Savarese, S., Hoi, S.C.H.: BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In: Proc. Int. Conf. Mach. Learn. pp. 19730-19742 (2023)", + "17. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Proc. Int. Conf. Mach. Learn. pp. 12888-12900 (2022)", + "18. Li, S., Sun, L., Li, Q.: CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. In: Proc. AAAI Conf. Artif. Intell. pp. 1405-1413 (2023)", + "19. Liu, C., Zhong, Y., Zisserman, A., Xie, W.: CounTR: Transformer-based generalised visual counting. In: Proc. Brit. Mach. Vis. Conf. p. 370 (2022)", + "20. Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., Zhang, L.: Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. arXiv:2303.05499 (2023)", + "21. Liu, X., Yang, J., Ding, W., Wang, T., Wang, Z., Xiong, J.: Adaptive mixture regression network with local counting map for crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 241-257 (2020)", + "22. Liu, Y., Ren, S., Chai, L., Wu, H., Xu, D., Qin, J., He, S.: Reducing spatial labeling redundancy for active semi-supervised crowd counting. IEEE Trans. Pattern Anal. Mach. Intell. 45(7), 9248-9255 (2023)", + "23. Liu, Y., Wen, Q., Chen, H., Liu, W., Qin, J., Han, G., He, S.: Crowd counting via cross-stage refinement networks. IEEE Trans. Image Process. 29, 6800-6812 (2020)", + "24. Liu, Y., Xu, D., Ren, S., Wu, H., Cai, H., He, S.: Fine-grained domain adaptive crowd counting via point-derived segmentation. In: Proc. IEEE Int. Conf. Multimedia Expo. pp. 2363-2368 (2023)", + "25. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Proc. Int. Conf. Learn. Represent. (2019)", + "26. Lu, E., Xie, W., Zisserman, A.: Class-agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 669-684 (2019)", + "27. 
Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., Li, Y.: Delving into out-of-distribution detection with vision-language representations. In: Adv. Neural Inf. Process. Syst. pp. 35087-35102 (2022)", + "28. Mundhenk, T.N., Konjevod, G., Sakla, W.A., Boakye, K.: A large contextual dataset for classification, detection and counting of cars with deep learning. In: Proc. Eur. Conf. Comput. Vis. pp. 785-800 (2016)", + "29. Nguyen, T., Pham, C., Nguyen, K., Hoai, M.: Few-shot object counting and detection. In: Proc. Eur. Conf. Comput. Vis. pp. 348-365 (2022)", + "30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Proc. Int. Conf. Mach. Learn. pp. 8748-8763 (2021)", + "31. Ranjan, V., Le, H.M., Hoai, M.: Iterative crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 278-293 (2018)", + "32. Ranjan, V., Nguyen, M.H.: Exemplar free class agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 71-87 (2022)", + "33. Ranjan, V., Sharma, U., Nguyen, T., Hoai, M.: Learning to count everything. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 3394-3403 (2021)" + ], + "bbox": [ + 215, + 146, + 787, + 840 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "H. Zhu et al.", + "bbox": [ + 271, + 114, + 359, + 126 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "34. Sam, D.B., Agarwalla, A., Joseph, J., Sindagi, V.A., Babu, R.V., Patel, V.M.: Completely self-supervised crowd counting via distribution matching. In: Proc. Eur. Conf. Comput. Vis. pp. 186-204 (2022)", + "35. Shi, M., Lu, H., Feng, C., Liu, C., Cao, Z.: Represent, compare, and learn: A similarity-aware framework for class-agnostic counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 9529–9538 (2022)", + "36. Shi, Z., Sun, Y., Zhang, M.: Training-free object counting with prompts. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 323-331 (2024)", + "37. Song, S., Wan, J., Yang, Z., Tang, J., Cheng, W., Bai, X., Yao, C.: Vision-language pre-training for boosting scene text detectors. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15681-15691 (2022)", + "38. Sun, G., An, Z., Liu, Y., Liu, C., Sakaridis, C., Fan, D., Van Gool, L.: Indiscernible object counting in underwater scenes. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 13791-13801 (2023)", + "39. Tian, C., Zhang, X., Liang, X., Li, B., Sun, Y., Zhang, S.: Knowledge distillation with fast CNN for license plate detection. IEEE Trans. Intell. Transp. Syst. (2023)", + "40. Tyagi, A.K., Mohapatra, C., Das, P., Makharia, G., Mehra, L., AP, P., Mausam: DeGPR: Deep guided posterior regularization for multi-class cell detection and counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 23913-23923 (2023)", + "41. Dukic, N., Lukezic, A., Zavrtanik, V., Kristan, M.: A low-shot object counting network with iterative prototype adaptation. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 18872-18881 (2023)", + "42. Wang, Z., Xiao, L., Cao, Z., Lu, H.: Vision transformer off-the-shelf: A surprising baseline for few-shot class-agnostic counting. In: Proc. AAAI Conf. Artif. Intell. pp. 5832-5840 (2024)", + "43. 
Xie, D., Liu, L., Zhang, S., Tian, J.: A unified multi-modal structure for retrieving tracked vehicles through natural language descriptions. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops. pp. 5418-5426 (2023)", + "44. Xiong, Z., Chai, L., Liu, W., Liu, Y., Ren, S., He, S.: Glance to count: Learning to rank with anchors for weakly-supervised crowd counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 342-351 (2024)", + "45. Xu, J., Le, H., Nguyen, V., Ranjan, V., Samaras, D.: Zero-shot object counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15548-15557 (2023)", + "46. Yang, S., Su, H., Hsu, W.H., Chen, W.: Class-agnostic few-shot object counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 869-877 (2021)", + "47. You, Z., Yang, K., Luo, W., Lu, X., Cui, L., Le, X.: Few-shot object counting with similarity-aware feature enhancement. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 6304-6313 (2023)", + "48. Zhang, Z., Liu, K., Gao, F., Li, X., Wang, G.: Vision-based vehicle detecting and counting for traffic flow analysis. In: Proc. IEEE Int. Joint Conf. Neural Networks. pp. 2267-2273 (2016)", + "49. Zheng, Y., Wu, J., Qin, Y., Zhang, F., Cui, L.: Zero-shot instance segmentation. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 2593-2602 (2021)", + "50. Zhu, H., Yuan, J., Zhong, X., Liao, L., Wang, Z.: Find gold in sand: Fine-grained similarity mining for domain-adaptive crowd counting. IEEE Trans. Multimedia 26, 3842-3855 (2024)", + "51. Zhu, H., Yuan, J., Zhong, X., Yang, Z., Wang, Z., He, S.: DAOT: Domain-agnostically aligned optimal transport for domain-adaptive crowd counting. In: Proc. ACM Multimedia. pp. 4319-4329 (2023)" + ], + "bbox": [ + 212, + 146, + 787, + 829 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-shot Object Counting with Good Exemplars", + "bbox": [ + 398, + 114, + 730, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_model.json b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7ac3f421c58f3b812941af8774bc7325971c201d --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_model.json @@ -0,0 +1,2390 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.218, + 0.142, + 0.784, + 0.164 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.189, + 0.764, + 0.221 + ], + "angle": 0, + "content": "Huilin Zhu\\(^{1,2,3,\\dagger}\\), Jingling Yuan\\(^{1,2,\\dagger}\\), Zhengwei Yang\\(^{4,\\dagger}\\), Yu Guo\\(^{3,5}\\), Zheng Wang\\(^{4}\\), Xian Zhong\\(^{1,2,6(\\text{四})}\\), and Shengfeng He\\(^{3(\\text{四})}\\)" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.232, + 0.775, + 0.248 + ], + "angle": 0, + "content": "1 Sanya Science and Education Innovation Park, Wuhan University of Technology" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.248, + 0.777, + 0.275 + ], + "angle": 0, + "content": "2 Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.232, + 0.777, + 
0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.429, + 0.276, + 0.574, + 0.288 + ], + "angle": 0, + "content": "zhongx@whut.edu.cn" + }, + { + "type": "text", + "bbox": [ + 0.22, + 0.288, + 0.782, + 0.316 + ], + "angle": 0, + "content": "3 School of Computing and Information Systems, Singapore Management University shengfenghe@smu.edu.sg" + }, + { + "type": "text", + "bbox": [ + 0.336, + 0.316, + 0.666, + 0.33 + ], + "angle": 0, + "content": "\\(^{4}\\) School of Computer Science, Wuhan University" + }, + { + "type": "text", + "bbox": [ + 0.314, + 0.33, + 0.69, + 0.344 + ], + "angle": 0, + "content": "5 School of Navigation, Wuhan University of Technology" + }, + { + "type": "text", + "bbox": [ + 0.336, + 0.344, + 0.668, + 0.358 + ], + "angle": 0, + "content": "\\(^{6}\\) ROSE@EEE, Nanyang Technological University" + }, + { + "type": "list", + "bbox": [ + 0.314, + 0.316, + 0.69, + 0.358 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.358, + 0.573, + 0.371 + ], + "angle": 0, + "content": "Equal Contribution" + }, + { + "type": "text", + "bbox": [ + 0.357, + 0.372, + 0.645, + 0.385 + ], + "angle": 0, + "content": "https://github.com/HopooLinZ/VA-Count" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.417, + 0.744, + 0.668 + ], + "angle": 0, + "content": "Abstract. Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations. However, a critical challenge in current ZOC methods lies in their inability to identify high-quality exemplars effectively. This deficiency hampers scalability across diverse classes and undermines the development of strong visual associations between the identified classes and image content. To this end, we propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification. The EEM utilizes advanced vision-language pre-taining models to discover potential exemplars, ensuring the framework's adaptability to various classes. Meanwhile, the NSM employs contrastive learning to differentiate between optimal and suboptimal exemplar pairs, reducing the negative effects of erroneous exemplars. VA-Count demonstrates its effectiveness and scalability in zero-shot contexts with superior performance on two object counting datasets." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.69, + 0.376, + 0.705 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.841 + ], + "angle": 0, + "content": "In visual monitoring applications, object counting plays a critical role in analyzing images or videos. Traditional methods focus on high precision within predefined object categories, such as crowds [4, 23], vehicles, and cells [1, 34, 39, 40, 44]. Yet, these methods are limited to specific categories, lacking the flexibility to adapt to new, unseen classes. To address these challenges, class-agnostic methods have been developed for scenarios with unseen classes. These methods, including few-shot, reference-free, and zero-shot object counting [12, 32, 35, 46, 47], provide varying levels of independence from predefined object classes." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.127 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.147, + 0.775, + 0.371 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.383, + 0.79, + 0.482 + ], + "angle": 0, + "content": "Fig. 1: Illustration of class-agnostic object counting methods. (a) Few-shot uses limited annotations for counting. (b) Reference-free quantifies objects without annotations. (c) Zero-shot counts specific classes without annotations, further divided into: (c1) Image-text association, leveraging direct image-text correlations. (c2) Class-related exemplar search, using prototypes to link classes with images. (c3) Our method introduces a detection-driven exemplar discovery to harmonize text with visual representations, distinguishing it from prior methods." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.519, + 0.789, + 0.718 + ], + "angle": 0, + "content": "In this context, different strategies are adopted for object counting under varying constraints, as illustrated in Fig. 1. Few-shot counting methods [29,46,47], depicted in Fig. 1(a), method the task as a matching problem, using a small number of annotated bounding boxes to identify and count objects throughout the image. While effective, this method requires fine-tuning with annotations from novel classes, limiting its scalability in real-world surveillance settings due to the sparse availability of annotated bounding boxes. To circumvent the limitations of bounding box annotations, reference-free counting methods are developed [10,19,32,41], as shown in Fig. 1(b). These methods aim to ascertain the total number of objects in an image without relying on specific cues. Nevertheless, the lack of specificity in counting categories makes these methods prone to errors induced by background noise, as they indiscriminately count all visible objects, leading to a lack of control in the counting process." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.789, + 0.842 + ], + "angle": 0, + "content": "In pursuit of more scalable and realistic counting solutions, zero-shot methods [3, 45, 49], illustrated in Fig. 1(c), are introduced. These techniques are designed to count objects from specified classes within an image without prior annotations for those classes, addressing the limitations of both few-shot and reference-free methods by providing enhanced specificity and scalability. These methods can be categorized into two streams. The initial method [13, 14] leans on image-text alignment to comprehend object-related correlations without needing physical exemplars. This method enhances scalability for unidentified classes but" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.787, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.283 + ], + "angle": 0, + "content": "struggles with adequately representing image details for target classes, especially those with atypical shapes, as demonstrated in Fig. 1(c1). Conversely, the second method [45] concentrates on identifying objects through the discovery of class-relevant exemplars. 
This is achieved by creating pseudo labels that assess the resemblance between image patches and class-generated prototypes. Nevertheless, this method's reliance on arbitrary patch selection hampers its ability to accurately outline entire objects. Additionally, the absence of direct text-image engagement restricts its scalability, tethered to the pre-defined categories present in the training dataset, as illustrated in Fig. 1(c2)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.283, + 0.789, + 0.465 + ], + "angle": 0, + "content": "As shown in Fig. 1(c3), we introduce the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count aims to create a robust link between specific object categories and their corresponding visual representations, ensuring adaptability to various classes. This framework is anchored by three core principles. First, it prioritizes flexibility and scalability, enabling adaptation to novel classes beyond its initial parameters. Second, it enhances precision in identifying exemplary objects, strengthening the connection between visual depictions and their categories. Third, it devises strategies to reduce the effects of localization errors on counting precision. Building on these principles, VA-Count integrates an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.465, + 0.789, + 0.617 + ], + "angle": 0, + "content": "In detail, the EEM expands VA-Count's capacity to handle various classes through the integration of Vision-Language Pretaining (VLP) models, such as Grounding DINO [20]. These VLP models, trained on extensive datasets, excel in identifying a wide range of classes by defining specific categories. In the context of ZOC, it is essential to select exemplars that each contain precisely one object from among the potential bounding boxes that might encompass varying object quantities. To this end, we deploy a binary filter aimed at rigorously refining the set of candidate exemplars, excluding those that fail to comply with the single-object requirement. This filtration step is pivotal for ensuring the precision and consistency necessary for ZOC." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.617, + 0.789, + 0.782 + ], + "angle": 0, + "content": "Moreover, even when potential exemplars accurately represent single objects, the unintentional inclusion of exemplars not pertaining to the target category poses a persistent problem. This misalignment introduces uncertainty into the learning process that associates exemplars with images. To counteract this issue, the NSM module operates as a safeguard by identifying negative exemplars, which are unrelated to the intended category. Contrasting with the EEM, which focuses on selecting ideal samples to foster visual connections with images, the NSM employs samples from irrelevant classes to build these associations, utilizing contrastive learning to differentiate between them. This method of contrastive learning acts as a rectifying mechanism, markedly improving the accuracy and efficiency of the associative learning framework." 
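For illustration only, the exemplar discovery and filtering flow described above (and formalized as Algorithm 1 in Sec. 3.2) can be sketched roughly as follows. This is a minimal, hedged sketch: `detect_boxes` and `is_single_object` are hypothetical stand-ins for Grounding DINO [20] and the single-object binary filter Φ(·), not the authors' released implementation; the thresholds mirror the settings reported in Sec. 4.1 (τ_l = 0.02, τ_iou = 0.5) and the top-3 exemplar selection described in Sec. 3.3.

```python
# Hedged sketch of the exemplar discovery and single-object filtering flow
# (cf. Algorithm 1). `detect_boxes` and `is_single_object` are hypothetical
# stand-ins for Grounding DINO [20] and the binary filter; this is NOT the
# authors' implementation.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes (cf. Eq. (5))."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def discover_exemplars(
    image,
    class_name: str,
    detect_boxes: Callable[[object, str], List[Tuple[Box, float]]],
    is_single_object: Callable[[object, Box], bool],
    logit_thresh: float = 0.02,   # tau_l as reported in Sec. 4.1
    iou_thresh: float = 0.5,      # tau_iou as reported in Sec. 4.1
    top_k: int = 3,               # top-3 exemplars per image (Sec. 3.3)
) -> Tuple[List[Box], List[Box]]:
    """Return (positive, negative) exemplar boxes for a single image."""
    # Positive candidates: prompt the detector with the target class name,
    # keep confident boxes that pass the single-object filter, take top-k.
    pos = [(b, s) for b, s in detect_boxes(image, class_name)
           if s >= logit_thresh and is_single_object(image, b)]
    pos.sort(key=lambda bs: bs[1], reverse=True)
    pos_boxes = [b for b, _ in pos[:top_k]]

    # Negative candidates: prompt with the generic word "object", drop boxes
    # overlapping any positive exemplar (deduplication), filter, take top-k.
    neg = []
    for b, s in detect_boxes(image, "object"):
        if s < logit_thresh or not is_single_object(image, b):
            continue
        if all(iou(b, p) < iou_thresh for p in pos_boxes):
            neg.append((b, s))
    neg.sort(key=lambda bs: bs[1], reverse=True)
    neg_boxes = [b for b, _ in neg[:top_k]]
    return pos_boxes, neg_boxes
```

The detector and the single-object classifier are deliberately passed in as callables here, since the paper treats both as interchangeable vision-language components rather than fixed architectures.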
+ }, + { + "type": "text", + "bbox": [ + 0.24, + 0.784, + 0.564, + 0.798 + ], + "angle": 0, + "content": "In summary, our contributions are threefold:" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.81, + 0.789, + 0.842 + ], + "angle": 0, + "content": "- We introduce a Visual Association-based Zero-shot Object Counting framework, which facilitates high-quality exemplar identification for any class" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.128 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "text", + "bbox": [ + 0.241, + 0.147, + 0.785, + 0.177 + ], + "angle": 0, + "content": "without needing annotated examples and forges robust visual connections between objects and images." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.178, + 0.787, + 0.237 + ], + "angle": 0, + "content": "- We propose an exemplar enhancement model leveraging the universal class-agnostic detection capabilities of the Vision-Language Pretaining model for precise exemplar selection, and a Noise Suppression Module to minimize the adverse effects of incorrect samples in visual associative learning." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.239, + 0.787, + 0.283 + ], + "angle": 0, + "content": "- Extensive experiments conducted on two object counting datasets demonstrate the state-of-the-art accuracy and generalizability of VA-Count, underscoring its notable scalability." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.308, + 0.388, + 0.323 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.341, + 0.519, + 0.357 + ], + "angle": 0, + "content": "2.1 Class-Specific Object Counting" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.367, + 0.788, + 0.504 + ], + "angle": 0, + "content": "Object counting plays a crucial role in public safety, public administration, and the liberation of human labor. Currently, class-specific object counting [22,32, 35,46,47] is the predominant method, which entails identifying specific object categories (such as humans [21,24,31,50,51], vehicles [28,48], fishes [38], cells [40], etc.) leveraging object detection or density estimation and counting accordingly. While these methods show excellence within close-set scenarios with a fixed number of categories, transferring them to arbitrary categories poses challenges. Introducing novel categories necessitates retraining or fine-tuning a counting model with new data, which limits their applicability in real scenarios." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.527, + 0.528, + 0.543 + ], + "angle": 0, + "content": "2.2 Class-Agnostic Object Counting" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.553, + 0.789, + 0.841 + ], + "angle": 0, + "content": "Class-agnostic object counting [8, 26, 29, 36, 42] is proposed for scenarios with less data, which can be divided into few-shot and zero-shot depending on the annotation usage. Specifically, GMN [26] initially frames the class-agnostic counting task as a matching task, leading to FamNet [33], which implements ROI Pooling for broad applicability across FSC-147. As multi-class datasets emerged, the focus shifts towards few-shot methods, where LOCA [41] enhances feature representation and exemplar adaptation; and CounTR [19] utilizes transformers for scalable counting with a two-stage training model. BMNet [?] 
innovates with a bilinear matching network for refined object similarity assessments. In the realm of zero-shot methods, which are categorized into two types, methods like ZSC [45] leverage textual inputs to generate prototypes and filter image patches, thus reducing the need for extensive labeling, albeit with fixed generators that limit scalability. CLIP-Count [13] employs CLIP to encode text and images separately, establishing semantic associations crucial for intuitive counting. VL-Count [14] takes this further by enhancing CLIP's text-image association learning specifically for object counting. Additionally, PseCo [12] introduces a SAM-based multi-task framework that achieves segmentation, dot mapping, and detection on counting data, offering broad application prospects but also necessitating greater computational resources." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.147, + 0.784, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.327, + 0.788, + 0.412 + ], + "angle": 0, + "content": "Fig. 2: Overview of the proposed method. Proposed method focuses on two main elements: the Exemplar Enhancement Module (EEM) for improving exemplar quality through a patch selection integrated with Grounding DINO [20], and the Noise Suppression Module (NSM) that distinguishes between positive and negative class samples using density maps. It employs a Contrastive Loss function to refine the precision in identifying target class objects from others in an image." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.441, + 0.55, + 0.456 + ], + "angle": 0, + "content": "2.3 Vision-Language Pretaining Model" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.468, + 0.789, + 0.68 + ], + "angle": 0, + "content": "In recent years, Vision-Language Pretaining (VLP) methods have proven pivotal in enhancing scene understanding and representation learning capabilities. Their adaptability makes them applicable across a wide range of downstream tasks [2,5-7,9,18,27,37,43]. CLIP [30] segregates vision and language features, aligning them through contrastive learning. BLIP [17] introduces a multimodal mixture of encoders and decoders to align different modalities. Building upon this, BLIP2 [16] combines specialized vision and language models to enhance multimodal understanding capabilities through bootstrapping. Grounding DINO [20] incorporates language into close-set detection, improving generalization for open-set detection. The Segment Anything Model (SAM) [15] is based on a prompt-based segmentation task, allowing flexible prompts for zero-shot capabilities across diverse tasks. VLP models, known for their robust multimodal comprehension and scene understanding, significantly advance deep learning and facilitate learning of unknown classes." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.704, + 0.427, + 0.722 + ], + "angle": 0, + "content": "3 Proposed Method" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.738, + 0.419, + 0.751 + ], + "angle": 0, + "content": "3.1 Formula Definition" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.788, + 0.842 + ], + "angle": 0, + "content": "As shown in Fig. 
2, we introduce a Visual Association-based Zero-shot Object Counting framework (VA-Count) focusing on zero-shot, class-agnostic object counting. The categories among the training set \\( C_{\\mathrm{train}} \\), validation set \\( C_{\\mathrm{val}} \\), and testing set \\( C_{\\mathrm{test}} \\) are distinguished, ensuring no overlap among them (\\( C_{\\mathrm{train}} \\cap C_{\\mathrm{val}} \\cap C_{\\mathrm{test}} = \\emptyset \\)). VA-Count generates density maps \\( D \\) from input images \\( I \\) for" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.128 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "code_caption", + "bbox": [ + 0.217, + 0.146, + 0.743, + 0.162 + ], + "angle": 0, + "content": "Algorithm 1 Grounding DINO-Guided Exemplar Enhancement Module" + }, + { + "type": "algorithm", + "bbox": [ + 0.217, + 0.164, + 0.788, + 0.503 + ], + "angle": 0, + "content": "1: I: Input image \n2: \\( T^p \\): Positive text label (\\{specific class\\}), \\( T^n \\): Negative text label (\"object\") \n3: \\( B^p \\): Bounding boxes for positive samples, \\( S^p \\): Logits for positive samples \n4: \\( B^n \\): Bounding boxes for negative samples, \\( S^n \\): Logits for negative samples \n5: \\( \\tau_l \\): Logits threshold, \\( \\tau_{\\mathrm{iou}} \\): IoU threshold \n6: M(\\cdot): Single Object Classifier \n7: Input: I, \\( T^p \\), \\( T^n \\) \n8: Output: \\( \\mathcal{O}^p = \\{(B^p, S^p)\\} \\): Positive outputs, \\( \\mathcal{O}^n = \\{(B^n, S^n)\\} \\): Negative outputs \n9: Grounding DINO Process: \n10: F ← ExtractFeatures(I) \n11: \\( S^p, B^p \\gets \\text{Detect}(F, T^p) \\), filter by \\( \\tau_l \\); and \\( S^n, B^n \\gets \\text{Detect}(F, T^n) \\), filter by \\( \\tau_l \\) \n12: Dedduplication and Filtering: \n13: Initialize \\( B_{\\text{filtered}}^n, B_{\\text{new}}^p, B_{\\text{new}}^n \\) \n14: for \\( b^n \\) in \\( B^n \\) do ▷ Remove duplicates \n15: if \\( b^n \\) is unique in \\( B^n \\) with IoU < \\( \\tau_{\\mathrm{iou}} \\) then \n16: \\( B_{\\text{filtered}}^n \\).append\\( (b^n) \\) \n17: end if \n18: end for \n19: for all \\( b \\in B^p \\cup B_{\\text{filtered}}^n \\) do ▷ Single object filter \n20: if \\( M(b) \\) is true then \n21: Add \\( b \\) to the appropriate new set \n22: end if \n23: end for \n24: Update \\( \\mathcal{O}^p, \\mathcal{O}^n \\) with new sets" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.541, + 0.79, + 0.633 + ], + "angle": 0, + "content": "any given class \\( C \\), and counts objects using these density maps. Specifically, VA-Count utilizes pseudo-exemplars \\( E^p \\) to enhance image-text associations, acting as a bridge to establish robust visual correlations between \\( E^p \\) and the images \\( I \\). To extract exemplars from images, we propose the use of two key modules: the Exemplar Enhancement Module (EEM) (cf. Sec. 3.2) and the Noise Suppression Module (NSM) (cf. Sec. 3.3)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.637, + 0.79, + 0.804 + ], + "angle": 0, + "content": "To alleviate the noise introduced by objects belonging to other classes on the target objects within images, the EEM and NSM are simultaneously used to obtain positive exemplars \\( B^{p} \\) and negative exemplars \\( B^{p} \\). The EEM consists of Grounding DINO \\( G(\\cdot) \\) and a filtering module \\( \\varPhi(\\cdot) \\). 
There are different filtering modules for positive and negative samples \\( \\varPhi^{p}(\\cdot) \\) and \\( \\varPhi^{n}(\\cdot) \\) respectively. \\( \\varPhi^{p}(\\cdot) \\) is a binary classifier, while \\( \\varPhi^{n}(\\cdot) \\) consists of a binary classifier and a dedduplication module. The two kinds of pseudo-exemplars and images are then fed into the Counter \\( \\Gamma(\\cdot) \\) simultaneously for correlation learning. \\( \\Gamma(\\cdot) \\) comprises an image encoder, correlation module, and decoder. The optimization goal of this paper is as follows, where \\( \\mu(\\cdot) \\) denotes the similarity, and \\( D^{p}, D^{n}, D^{g} \\) represent the density maps for positive, negative, and ground truth respectively:" + }, + { + "type": "equation", + "bbox": [ + 0.312, + 0.824, + 0.789, + 0.843 + ], + "angle": 0, + "content": "\\[\nD ^ {p} = \\Gamma \\left(\\Phi^ {p} \\left(G \\left(I, T ^ {p}\\right)\\right)\\right), \\quad D ^ {n} = \\Gamma \\left(\\Phi^ {n} \\left(G \\left(I, T ^ {n}\\right)\\right)\\right), \\tag {1}\n\\]" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.787, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "equation", + "bbox": [ + 0.384, + 0.159, + 0.788, + 0.201 + ], + "angle": 0, + "content": "\\[\n\\text {O b j e c t i v e} = \\left\\{ \\begin{array}{l} \\max \\mu \\left(D ^ {p}, D ^ {g}\\right), \\\\ \\min \\mu \\left(D ^ {n}, D ^ {g}\\right). \\end{array} \\right. \\tag {2}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.218, + 0.53, + 0.234 + ], + "angle": 0, + "content": "3.2 Exemplar Enhancement Module" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.242, + 0.788, + 0.365 + ], + "angle": 0, + "content": "We introduce an Exemplar Enhancement Module (EEM) for detecting objects within images and refining the detected objects as target exemplars. The workflow of the EEM is outlined in Algorithm 1. The EEM ensures VA-Count's scalability to arbitrary classes by incorporating Vision-Language Pretaining (VLP) models (e.g., Grounding DINO [20]) for potential exemplar discovery, renowned for its efficiency in feature extraction and precision in object localization. Furthermore, the EEM involves meticulously discovering and refining potential exemplars to enhance the quality of positive and negative exemplars for precise object counting." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.365, + 0.79, + 0.515 + ], + "angle": 0, + "content": "Grounding DINO-Guided Box Selection. Given the training set input image \\( I_{i} \\), accompanied by predefined sets of positive text labels \\( T_{i}^{p} = \\{C_{i}\\} \\) and negative text labels \\( T_{i}^{n} = \\text{\"object\"} \\), where \\( C_i \\) represents the specified target class for the input image and \\( T_{i}^{n} \\) is fixed as \"object\". These labels correspond to the target objects and the noise objects, respectively. Taking positive exemplar discovery as an example, Grounding DINO assigns logits value \\( S_{i}^{p} = \\{s_{i,j}\\}_{j=0}^{m} \\) to all candidate bounding boxes \\( B_{i}^{p} = \\{b_{i,j}\\}_{j=0}^{m} \\) based on \\( T_{i}^{p} \\), \\( m \\) denotes the number of candidate boxes within the image. For the \\( j \\)-th box in the \\( i \\)-th image, \\( s_{i,j} \\) represents the likelihood that \\( b_{i,j} \\) belongs to the specified class text \\( C_i \\). 
The output of positive candidate boxes \\( \\mathcal{O}^p \\) can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.368, + 0.526, + 0.788, + 0.546 + ], + "angle": 0, + "content": "\\[\n\\mathcal {O} ^ {p} = \\{G (I _ {i}, T _ {i} ^ {p}) \\} _ {i = 0} ^ {k} = \\{(B _ {i} ^ {p}, \\mathcal {S} _ {i} ^ {p}) \\} _ {i = 0} ^ {k}, \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.554, + 0.633, + 0.569 + ], + "angle": 0, + "content": "where \\( k \\) denotes the number of images in the training set." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.57, + 0.789, + 0.645 + ], + "angle": 0, + "content": "Negative Samples and Dedduplication. To minimize the impact of irrelevant classes on the counting accuracy of the target object, we adopt a filtering method for negative samples. Initially, we obtain all candidate bounding boxes for objects within each image. Similar to Eq. (3), the negative candidate boxes \\(\\mathcal{O}^n\\) without filtering can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.362, + 0.655, + 0.788, + 0.675 + ], + "angle": 0, + "content": "\\[\n\\mathcal {O} ^ {n} = \\left\\{G \\left(I _ {i}, T _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k} = \\left\\{\\left(B _ {i} ^ {n}, \\mathcal {S} _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.684, + 0.788, + 0.729 + ], + "angle": 0, + "content": "where for each image \\( I_{i} \\), the term \\( T_{i}^{n} = \\) \"object\" is employed to identify and generate all bounding boxes \\( B^{n} \\) within that image. This method guarantees the detection of bounding boxes for all objects present in the image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.73, + 0.788, + 0.805 + ], + "angle": 0, + "content": "Then, for each image \\( I_{i} \\), we assess each bounding box \\( b^{n} \\) from the negative candidate boxes \\( B^n \\), and each \\( b^{n} \\) is evaluated to determine its uniqueness in relation to the boxes within \\( B^{p} \\). Specifically, a bounding box is deemed unique if its overlap with any box in \\( B^{p} \\) is minimal, based on the Intersection over Union (IoU) threshold \\( \\tau_{\\mathrm{iou}} \\), which can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.41, + 0.816, + 0.788, + 0.845 + ], + "angle": 0, + "content": "\\[\n\\operatorname {I o U} \\left(B ^ {p}, B ^ {n}\\right) = \\frac {B ^ {p} \\cap B ^ {n}}{B ^ {p} \\cup B ^ {n}}, \\tag {5}\n\\]" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.128 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.193 + ], + "angle": 0, + "content": "where \\( B^p \\cap B^n \\) and \\( B^p \\cup B^n \\) denotes the intersection and union between positive \\( B^p \\) and negative \\( B^n \\) boxes. Unique negative boxes \\( b^n \\) are then included in the final set \\( B_{\\text{filtered}}^n \\) of negative exemplars." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.193, + 0.785, + 0.314 + ], + "angle": 0, + "content": "Single Object Exemplar Filtering. While DINO excels at identifying targets for arbitrary classes, each candidate box does not always contain a single object because boxes encompassing multiple objects may carry higher confidence levels than boxes of single objects. 
To ensure the integrity of the visual connections established with images, it's imperative to select exemplars that exclusively contain a single object. To achieve this, we treat singular discrimination as a binary classification task, using the binary classifier \\(\\delta(\\cdot)\\) to refine candidate bounding boxes, ensuring each exemplar contains a single object." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.314, + 0.49, + 0.601 + ], + "angle": 0, + "content": "As shown in Fig. 3, \\(\\delta(\\cdot)\\) leverages a frozen Clip-vit backbone, integrated with a trainable Feed-Forward Network (FFN) for binary classification tasks. Training data is meticulously curated, consisting of samples of single and multiple objects. The labeled single-object samples are the exemplars in the training sets, and the labeled multi-object samples consist of randomly cropped patches and the entire image. To ensure that the class-agnostic counting is maintained, the training data is split for training and evaluation with disjoint samples, ensuring robust exemplar assessment. The classification results for positive candidate boxes \\(b^{p} \\in B^{p}\\) can be formulated as:" + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.344, + 0.778, + 0.532 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.497, + 0.541, + 0.788, + 0.598 + ], + "angle": 0, + "content": "Fig. 3: Illustration of the single object exemplar filtering with a frozen Clip-vit encoder and a trainable FFN to distinguish single from multiple objects." + }, + { + "type": "equation", + "bbox": [ + 0.24, + 0.613, + 0.489, + 0.63 + ], + "angle": 0, + "content": "\\[\n\\delta \\left(b ^ {p}\\right) = \\operatorname {F F N} \\left(\\operatorname {C l i p - v i t} \\left(b ^ {p}\\right)\\right), \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.642, + 0.785, + 0.672 + ], + "angle": 0, + "content": "and the filtered set \\( B_{\\mathrm{new}} \\) contains bounding boxes \\( b^{p} \\) that are conditioned on the classification results, which can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.39, + 0.685, + 0.788, + 0.703 + ], + "angle": 0, + "content": "\\[\nB _ {\\text {n e w}} ^ {p} \\leftarrow B _ {\\text {n e w}} ^ {p} \\cup \\{b | \\delta (b ^ {p}) = 1 \\}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.714, + 0.785, + 0.76 + ], + "angle": 0, + "content": "where the symbol \\(\\leftarrow\\) signifies the update operation for the set \\(B_{\\mathrm{new}}^p\\), and the set builder notation \\(\\{b|\\delta(b^p) = 1\\}\\) represents the collection of bounding boxes for which \\(\\delta(b^p)\\) predicts a positive outcome." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.783, + 0.482, + 0.799 + ], + "angle": 0, + "content": "3.3 Noise Suppression Module" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.785, + 0.84 + ], + "angle": 0, + "content": "In the context of the EEM, text-image alignment is redefined as object-image alignment by identifying positive \\( B^{p} \\) and negative \\( B^{n} \\) exemplars. 
We delves" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.733, + 0.131 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.177 + ], + "angle": 0, + "content": "into generating positive and negative density maps and alleviating the noise introduced by the negative exemplars." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.314 + ], + "angle": 0, + "content": "Initially, for each image \\(I_{i}\\), we select the top three patches with the highest \\(S^p\\) from the positive candidate boxes \\(B_{\\mathrm{new}}^p\\) as positive exemplars \\(E^{p} = \\{b_{i}^{p}\\}_{i = 1}^{k}\\) and the top three patches with the highest \\(S^n\\) from the negative candidate boxes \\(B_{\\mathrm{filtered}}^n\\) as negative exemplars \\(E^n = \\{b_i^n\\}_{i = 1}^k\\). Following CounTR [19], we build the Counter \\(\\Gamma (\\cdot)\\) with feature interaction to fuse information from both image encoders. Specifically, we merge encoder outputs by using image features as queries and the linear projections of sample features as keys and values, ensuring dimension consistency with image features, in accordance with the self-similarity principle in counting, which can be formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.323, + 0.319, + 0.788, + 0.337 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {F} _ {\\text {f u s e}} = \\Gamma_ {\\text {f u s e}} \\left(\\boldsymbol {F} _ {\\text {q u e r y}}, \\boldsymbol {W} ^ {k} \\boldsymbol {F} _ {\\text {k e y}}, \\boldsymbol {W} ^ {v} \\boldsymbol {F} _ {\\text {v a l u e}}\\right) \\in \\mathbb {R} ^ {M \\times D}, \\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.341, + 0.785, + 0.417 + ], + "angle": 0, + "content": "where \\(\\pmb{F}\\) denotes the feature representations, \\(\\pmb{W}^k\\) and \\(\\pmb{W}^v\\) are the learnable weights for keys and values from \\(\\{E^p,E^n\\}\\), \\(M\\) denotes the number of tokens, \\(D\\) is the feature dimensionality, and \\(\\mathbb{R}^{M\\times D}\\) the space of the feature matrix. The decoder outputs the density heatmap after up-sampling the fused features to the input image's dimensions:" + }, + { + "type": "equation", + "bbox": [ + 0.344, + 0.424, + 0.788, + 0.441 + ], + "angle": 0, + "content": "\\[\nD _ {i} ^ {n} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {n}\\right), \\quad D _ {i} ^ {p} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {p}\\right). \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.446, + 0.788, + 0.581 + ], + "angle": 0, + "content": "Contrastive Learning and Loss Functions. The objective of the NSM in VA-Count is to reduce the impact of noise in images on counting performance while ensuring the accuracy of density map predictions. To achieve this, a contrastive loss \\(\\mathcal{L}_C\\) is proposed, using specified class density maps as positive samples and non-specified class density maps as negative samples. This involves maximizing the similarity between positive density maps and the ground-truth density maps and minimizing the similarity between negative density maps and the ground-truth density maps, as detailed in Eq. (10). To guide density map generation, we use the loss method from CounTR [19]." 
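As a concrete reading of the training objective just described, the following is a hedged PyTorch-style sketch of the contrastive and density terms (cf. Eqs. (10)-(12) that follow). The use of cosine similarity over flattened density maps and the tensor shapes are assumptions made here for illustration; this is not the authors' code.

```python
# Hedged sketch of the NSM objective: a contrastive term that pulls the
# positive density map toward the ground truth and pushes the negative map
# away, plus a per-pixel MSE density term (cf. Eqs. (10)-(12)).
import torch
import torch.nn.functional as F


def density_sim(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Similarity between two density maps of shape (B, H, W); cosine over
    flattened maps is an assumption for illustration."""
    return F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1)  # (B,)


def nsm_loss(d_pos: torch.Tensor, d_gt: torch.Tensor, d_neg: torch.Tensor) -> torch.Tensor:
    # L_C = -log( exp(sim(D^p, D^g)) / (exp(sim(D^p, D^g)) + exp(sim(D^n, D^g))) )
    sim_pos = density_sim(d_pos, d_gt)
    sim_neg = density_sim(d_neg, d_gt)
    l_c = -torch.log(torch.exp(sim_pos) / (torch.exp(sim_pos) + torch.exp(sim_neg)))

    # L_D = per-pixel mean squared error between positive and ground-truth maps
    l_d = F.mse_loss(d_pos, d_gt, reduction="none").flatten(1).mean(dim=1)

    # L_total = L_C + L_D, averaged over the batch
    return (l_c + l_d).mean()


if __name__ == "__main__":
    # Shape check with random maps (batch of 2, 384x384 density maps).
    d_pos, d_gt, d_neg = (torch.rand(2, 384, 384) for _ in range(3))
    print(nsm_loss(d_pos, d_gt, d_neg).item())
```

In the paper the density term is the CounTR-style loss the authors actually adopt; the plain MSE above simply mirrors Eq. (11).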
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.582, + 0.788, + 0.642 + ], + "angle": 0, + "content": "The density loss \\(\\mathcal{L}_D\\) is calculated as the mean squared error between each pixel of the density map \\(D_i^p\\) generated for positive samples and the ground-truth density map \\(D_i^g\\), as shown in Eq. (11). \\(H\\) and \\(W\\) respectively denote the height and width of the density map." + }, + { + "type": "equation", + "bbox": [ + 0.282, + 0.648, + 0.788, + 0.68 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {C} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = - \\log \\frac {\\exp \\sin \\left(D ^ {p} , D ^ {g}\\right)}{\\exp \\sin \\left(D ^ {p} , D ^ {g}\\right) + \\exp \\sin \\left(D ^ {n} , D ^ {g}\\right)}, \\tag {10}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.368, + 0.689, + 0.788, + 0.716 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {D} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}\\right) = \\frac {1}{H W} \\sum \\left\\| D _ {i} ^ {p} - D _ {i} ^ {g} \\right\\| _ {2} ^ {2}, \\tag {11}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.39, + 0.73, + 0.787, + 0.745 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\text {t o t a l}} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = \\mathcal {L} _ {C} + \\mathcal {L} _ {D}. \\tag {12}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.763, + 0.454, + 0.78 + ], + "angle": 0, + "content": "4 Experimental Result" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.79, + 0.571, + 0.805 + ], + "angle": 0, + "content": "4.1 Datasets and Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.841 + ], + "angle": 0, + "content": "Datasets. FSC-147 [10] dataset is tailored for class-agnostic counting with 6,135 images and 147 classes. Unique for its non-overlapping class subsets, it" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.128 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.176 + ], + "angle": 0, + "content": "provides class labels and dot annotations for zero-shot counting using textual prompts." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.177, + 0.784, + 0.207 + ], + "angle": 0, + "content": "CARPK [11] dataset offers a bird's-eye view of 89,777 cars in 1,448 parking lot images, testing the method's cross-dataset transferability and adaptability." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.208, + 0.785, + 0.265 + ], + "angle": 0, + "content": "Evaluation Metrics. Following previous class-agnostic object counting methods [29], the evaluation metrics employed are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). MAE is widely used to assess model accuracy, while RMSE evaluates model robustness." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.268, + 0.785, + 0.34 + ], + "angle": 0, + "content": "Exemplar Enhancement Module uses Grounding DINO\\(^7\\) for bounding box proposals, setting the threshold \\(\\tau_{l}\\) to 0.02. For negative sample filtering, the IoU threshold \\(\\tau_{\\mathrm{iou}}\\) is set to 0.5. The single object classifier employs CLIP ViT-B/16\\(^8\\) as its backbone, with an FFN comprising two linear layers, trained over 100 epochs at a learning rate of e-4. 
The dataset is partitioned in a 7:3 ratio" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.344, + 0.785, + 0.386 + ], + "angle": 0, + "content": "Noise Suppression Module follows CounTR's [19] two-stage training: MAE pretraining and AdamW [25]-optimized fine-tuning. It is trained on FSC-147 with a learning rate of \\(10^{-5}\\), batch size of 8, on an NVIDIA RTX L40 GPU." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.413, + 0.583, + 0.429 + ], + "angle": 0, + "content": "4.2 Comparison with the State-of-the-Arts" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.439, + 0.785, + 0.497 + ], + "angle": 0, + "content": "For the performance evaluation of our method, it is benchmarked against a variety of state-of-the-art few-shot and zero-shot counting methods on FSC-147. Additionally, we evaluate our method in comparison with class-specific counting models on CARPK." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.5, + 0.785, + 0.62 + ], + "angle": 0, + "content": "Quantitative Result on FSC-147. We evaluate the effectiveness of VA-Count on FSC-147, comparing it with state-of-the-art counting methods as detailed in Tab. 1. Our method surpasses the exemplar-discovery method ZSC [45], demonstrating that the exemplars found by VA-Count are of higher quality. VA-Count achieves the best performance in MAE and second in RMSE, validating our method's effectiveness. Despite being second in RMSE, it still outperforms ZSC. In comparison with CLIP-Count [13], VA-Count, due to some noise introduction, has a few inferior samples but, overall, surpasses CLIP-Count in performance." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.621, + 0.785, + 0.74 + ], + "angle": 0, + "content": "Quantitative Result on CARPK. In Tab. 2, VA-Count's cross-domain and non-cross-domain performance on CARPK are compared with previous methods. In the zero-shot group, VA-Count achieves the best performance, particularly with its cross-domain performance methoding that of the few-shot group, demonstrating its outstanding transferability. It is worth noting that employing \\(\\varPhi(\\cdot)\\) significantly reduces errors compared to directly using the Grounding DINO [20] method. In the absence of any training data, VA-Count outperforms FamNet [33] in the cross-domain group." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.742, + 0.784, + 0.801 + ], + "angle": 0, + "content": "Ablation Study. We conduct both quantitative and qualitative analyses on the contributions of each component in our proposed VA-Count, which includes the Grounding-DINO candidate box extraction and filtering module. The quantitative outcomes are presented in Tab. 3. 
Using only Grounding DINO method" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.811, + 0.59, + 0.825 + ], + "angle": 0, + "content": "7 https://github.com/IDEA-Research/GroundingDINO" + }, + { + "type": "page_footnote", + "bbox": [ + 0.22, + 0.825, + 0.456, + 0.84 + ], + "angle": 0, + "content": "8 https://github.com/openai/CLIP" + }, + { + "type": "list", + "bbox": [ + 0.218, + 0.811, + 0.59, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.401, + 0.115, + 0.732, + 0.13 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.145, + 0.788, + 0.201 + ], + "angle": 0, + "content": "Table 1: Quantitative results of our VA-Count and other state-of-the-art competitors on FSC-147. F-S, R-F, and Z-S are abbreviated for Few-shot, Reference-free, and Zero-shot settings. Best results for each scheme and the second-best results at the zero-shot setting are highlighted in bold and underline." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.213, + 0.787, + 0.519 + ], + "angle": 0, + "content": "
<table><tr><td>Scheme</td><td>Method</td><td>Venue</td><td>Shot</td><td colspan="2">Val Set</td><td colspan="2">Test Set</td><td colspan="2">Avg</td></tr>
<tr><td></td><td></td><td></td><td></td><td>MAE</td><td>RMSE</td><td>MAE</td><td>RMSE</td><td>MAE</td><td>RMSE</td></tr>
<tr><td>F-S</td><td>FamNet [33]</td><td>CVPR'21</td><td>3</td><td>24.32</td><td>70.94</td><td>22.56</td><td>101.54</td><td>23.44</td><td>86.24</td></tr>
<tr><td></td><td>CFOCNet [46]</td><td>WACV'21</td><td>3</td><td>21.19</td><td>61.41</td><td>22.10</td><td>112.71</td><td>21.65</td><td>87.06</td></tr>
<tr><td></td><td>CounTR [19]</td><td>BMVC'22</td><td>3</td><td>13.13</td><td>49.83</td><td>11.95</td><td>91.23</td><td>12.54</td><td>70.53</td></tr>
<tr><td></td><td>LOCA [41]</td><td>ICCV'23</td><td>3</td><td>10.24</td><td>32.56</td><td>10.97</td><td>56.97</td><td>10.61</td><td>44.77</td></tr>
<tr><td></td><td>SAM [36]</td><td>WACV'24</td><td>3</td><td>-</td><td>-</td><td>19.95</td><td>132.16</td><td>19.95</td><td>132.16</td></tr>
<tr><td></td><td>PseCo [12]</td><td>CVPR'24</td><td>3</td><td>15.31</td><td>68.34</td><td>13.05</td><td>112.86</td><td>14.18</td><td>90.60</td></tr>
<tr><td></td><td>CACViT [42]</td><td>AAAI'24</td><td>3</td><td>10.63</td><td>37.95</td><td>9.13</td><td>48.96</td><td>9.88</td><td>43.46</td></tr>
<tr><td></td><td>FamNet [33]</td><td>CVPR'21</td><td>1</td><td>26.05</td><td>77.01</td><td>26.76</td><td>110.95</td><td>26.41</td><td>93.98</td></tr>
<tr><td>R-F</td><td>FamNet [33]</td><td>CVPR'21</td><td>0</td><td>32.15</td><td>98.75</td><td>32.27</td><td>131.46</td><td>32.21</td><td>115.11</td></tr>
<tr><td></td><td>RepRPN-C [32]</td><td>ACCV'22</td><td>0</td><td>29.24</td><td>98.11</td><td>26.66</td><td>129.11</td><td>27.95</td><td>113.61</td></tr>
<tr><td></td><td>CounTR [19]</td><td>BMVC'22</td><td>0</td><td>18.07</td><td>71.84</td><td>14.71</td><td>106.87</td><td>16.39</td><td>89.36</td></tr>
<tr><td></td><td>RCC [10]</td><td>CVPR'23</td><td>0</td><td>17.49</td><td>58.81</td><td>17.12</td><td>104.53</td><td>17.31</td><td>81.67</td></tr>
<tr><td></td><td>LOCA [41]</td><td>ICCV'23</td><td>0</td><td>17.43</td><td>54.96</td><td>16.22</td><td>103.96</td><td>16.83</td><td>79.46</td></tr>
<tr><td>Z-S</td><td>ZSC [45]</td><td>CVPR'23</td><td>0</td><td>26.93</td><td>88.63</td><td>22.09</td><td>115.17</td><td>24.51</td><td>101.90</td></tr>
<tr><td></td><td>CLIP-Count [13]</td><td>MM'23</td><td>0</td><td>18.79</td><td>61.18</td><td>17.78</td><td>106.62</td><td>18.285</td><td>83.90</td></tr>
<tr><td></td><td>PseCo [12]</td><td>CVPR'24</td><td>0</td><td>23.90</td><td>100.33</td><td>16.58</td><td>129.77</td><td>20.24</td><td>115.05</td></tr>
<tr><td></td><td>VA-Count</td><td>Ours</td><td>0</td><td>17.87</td><td>73.22</td><td>17.88</td><td>129.31</td><td>17.87</td><td>101.26</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.548, + 0.788, + 0.655 + ], + "angle": 0, + "content": "(first row) achieves an error of 52.82 without training, which, although not as accurate as regression-based methods, ensures the detection of relevant objects. Performance improves slightly after adding a single-object classification filter (second row). With training based on \\(\\mathcal{L}_D\\), it already meets counting requirements. In Tab. 2, we compare using Grounding DINO alone and with a single-object classification filter on CARPK (last three rows). Our binary classifier significantly improves performance, reducing MAE and RMSE by about 10." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.678, + 0.429, + 0.693 + ], + "angle": 0, + "content": "4.3 Qualitative Analysis" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.705, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Analysis of the zero-shot performance. To further ensure the effectiveness of the proposed VA-Count framework, we visualize qualitative results in Fig. 4. We provide a side-by-side comparison of the proposed VA-Count against the few-shot counting method [19]. VA-Count achieves a remarkable resemblance to the ground truth, showcasing the method's nuanced understanding of object boundaries and densities and being less affected by the background noise. Specifically, the first row shows there exists a golden egg drowned by white eggs. The few-shot method struggled with this nuanced differentiation, failing to recognize the golden egg distinctly. In the second row, strawberries near flowers also confound the few-shot" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.128 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.188 + ], + "angle": 0, + "content": "Table 2: Quantitative results of our VA-Count and other state-of-the-art competitors on CARPK. \\(\\varPhi(\\cdot)\\) denotes the single-object classification filter. C and F denote CARPK and FSC-147, respectively." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.199, + 0.782, + 0.381 + ], + "angle": 0, + "content": "
<table><tr><td>Methods</td><td>Venue</td><td>Shot</td><td colspan="2">C → C</td><td colspan="2">F → C</td></tr>
<tr><td></td><td></td><td></td><td>MAE</td><td>RMSE</td><td>MAE</td><td>RMSE</td></tr>
<tr><td>FamNet [33]</td><td>CVPR'21</td><td>3</td><td>18.19</td><td>33.66</td><td>28.84</td><td>44.47</td></tr>
<tr><td>GMN [26]</td><td>CVPR'21</td><td>3</td><td>7.48</td><td>9.90</td><td>-</td><td>-</td></tr>
<tr><td>BMNet+ [35]</td><td>CVPR'22</td><td>3</td><td>5.76</td><td>7.83</td><td>10.44</td><td>13.77</td></tr>
<tr><td>CounTR [19]</td><td>BMVC'22</td><td>3</td><td>5.75</td><td>7.45</td><td>-</td><td>-</td></tr>
<tr><td>RCC [10]</td><td>CVPR'23</td><td>0</td><td>9.21</td><td>11.33</td><td>21.38</td><td>26.61</td></tr>
<tr><td>CLIP-Count [13]</td><td>MM'23</td><td>0</td><td>-</td><td>-</td><td>11.96</td><td>16.61</td></tr>
<tr><td>Grounding DINO [20]</td><td>arXiv'24</td><td>0</td><td>29.72</td><td>31.60</td><td>29.72</td><td>31.60</td></tr>
<tr><td>Grounding DINO + Φ(·)</td><td>Ours</td><td>0</td><td>18.54</td><td>21.71</td><td>18.54</td><td>21.71</td></tr>
<tr><td>VA-Count</td><td>Ours</td><td>0</td><td>8.75</td><td>10.30</td><td>10.63</td><td>13.20</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.395, + 0.788, + 0.451 + ], + "angle": 0, + "content": "Table 3: Ablation study on each component's contribution to the final results on FSC-147. We demonstrate the effectiveness of two parts of our framework and two types of loss: \\( G(\\cdot) \\) for Grounding DINO, \\( \\varPhi(\\cdot) \\) for the single-object filtering section, the density loss \\( \\mathcal{L}_D \\), and the contrastive loss \\( \\mathcal{L}_C \\)." + }, + { + "type": "table", + "bbox": [ + 0.235, + 0.463, + 0.766, + 0.57 + ], + "angle": 0, + "content": "
<table><tr><td>\(G(\cdot)\)</td><td>\(\varPhi(\cdot)\)</td><td>\(\mathcal{L}_D\)</td><td>\(\mathcal{L}_C\)</td><td colspan="2">Val Set</td><td colspan="2">Test Set</td></tr>
<tr><td></td><td></td><td></td><td></td><td>MAE</td><td>RMSE</td><td>MAE</td><td>RMSE</td></tr>
<tr><td>✓</td><td></td><td></td><td></td><td>52.82</td><td>134.49</td><td>54.48</td><td>159.30</td></tr>
<tr><td>✓</td><td>✓</td><td></td><td></td><td>52.12</td><td>135.29</td><td>54.27</td><td>159.76</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>19.63</td><td>73.94</td><td>18.93</td><td>116.65</td></tr>
<tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>17.87</td><td>73.22</td><td>17.88</td><td>129.31</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.598, + 0.788, + 0.749 + ], + "angle": 0, + "content": "method. These examples emphasize VA-Count's superior ability to identify and differentiate between objects with minor differences. The third row presents a challenging scenario with dense keys partially occluded by hands. This situation tests the model's ability to count tiny, closely situated objects under partial occlusion, showcasing VA-Count's advanced capability to accurately identify and count such challenging objects, which is significantly better than the few-shot method. These results highlight the impact of exemplar selection and the incorporation of negative patches in VA-Count, significantly enhancing its object counting and localization capabilities, and showcasing its innovation in zero-shot object counting." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Analysis of Positive and Negative Exemplars. To make our experiment more straightforward, we also conduct a qualitative analysis of the patch selection. As shown in Fig. 5 and Fig. 6, we illustrate selected positive and negative patches for various categories under a zero-shot setting. Taking a closer look at the positive patches for categories such as crab cakes and green peas, the results show a high degree of accuracy in the model's ability to isolate and highlight the regions" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.733, + 0.13 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.15, + 0.784, + 0.332 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.351, + 0.79, + 0.381 + ], + "angle": 0, + "content": "Fig. 4: Illustration of heatmaps compared with few-shot method [19] on FSC-147. Predicted density map is overlaid on the original RGB image. (Best viewed in zoom in)" + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.401, + 0.784, + 0.593 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.606, + 0.788, + 0.621 + ], + "angle": 0, + "content": "Fig. 5: Illustration of the positive (Pos.) and negative (Neg.) exemplars on FSC-147." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.656, + 0.789, + 0.777 + ], + "angle": 0, + "content": "containing the target objects. This precision underscores the effectiveness of VA-Count framework in discerning relevant features amidst complex backgrounds, affirming its robustness in the exemplar discovery. Negative patches, especially from categories like strawberries and crab cakes, highlight the model's challenges with visually similar or overlapping areas not in the target category, underscoring the need for improved discriminative abilities. This analysis underscores our paper's impact on zero-shot object counting and the importance of refining visual learning and exemplar selection for future advancements." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.79, + 0.841 + ], + "angle": 0, + "content": "Effective of the object exemplar filter. The effectiveness of the object exemplar filter is further evaluated by comparing visualization grounding results with and without the filter. Fig. 7 illustrates this comparison for the category of cars on CARPK. 
Images without the filter show multiple cars within a single" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.127 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "image", + "bbox": [ + 0.22, + 0.147, + 0.357, + 0.221 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.358, + 0.169, + 0.379, + 0.179 + ], + "angle": 0, + "content": "Pos." + }, + { + "type": "image", + "bbox": [ + 0.383, + 0.157, + 0.498, + 0.223 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.147, + 0.644, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.644, + 0.169, + 0.667, + 0.179 + ], + "angle": 0, + "content": "Pos." + }, + { + "type": "image", + "bbox": [ + 0.67, + 0.157, + 0.784, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.236, + 0.785, + 0.264 + ], + "angle": 0, + "content": "Fig. 6: Illustration of the final positive (Pos.) and negative (Neg.) exemplars for images on CARPK." + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.285, + 0.409, + 0.392 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.411, + 0.285, + 0.595, + 0.39 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.597, + 0.309, + 0.782, + 0.39 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.404, + 0.785, + 0.431 + ], + "angle": 0, + "content": "Fig. 7: Illustration of candidate boxes before and after exemplar filter for images on CARPK." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.465, + 0.788, + 0.572 + ], + "angle": 0, + "content": "bounding box, indicating Grounding DINO's [20] inability to isolate individual objects effectively. Conversely, images with the filter applied demonstrate a significant improvement, with bounding boxes accurately encompassing single cars. This clear distinction highlights the binary classifier's crucial role in ensuring precise object counting by enforcing the single-object criterion within each exemplar, validating the filter's contribution to enhancing the model's accuracy and reliability in VA-Count framework." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.604, + 0.36, + 0.62 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.79, + 0.841 + ], + "angle": 0, + "content": "This paper addresses the challenges in class-agnostic object counting by introducing the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count effectively balances the need for scalability across arbitrary classes with the establishment of robust visual connections, overcoming the limitations of existing Zero-shot Object Counting (ZOC) methods. VA-Count comprises an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively. The EEM utilizes advanced Vision-Language Pre-taining models like Grounding DINO for scalable exemplar discovery, while the NSM mitigates the impact of erroneous exemplars through contrastive learning. VA-Count shows promise in zero-shot counting, performing well on three datasets and offering precise visual associations and scalability. 
In the future, we will explore and better utilize advanced visual language models." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.145, + 0.393, + 0.163 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.269 + ], + "angle": 0, + "content": "This work was supported in part by the National Natural Science Foundation of China under Grant 62271361, the Sanya Yazhou Bay Science and Technology City Administration scientific research project under Grant 2022KF0021, the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020097, and the National Research Foundation Singapore under the AI Singapore Programme under Grant AISG3-GV-2023-011." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.293, + 0.324, + 0.309 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.325, + 0.787, + 0.354 + ], + "angle": 0, + "content": "1. Arteta, C., Lempitsky, V.S., Zisserman, A.: Counting in the wild. In: Proc. Eur. Conf. Comput. Vis. pp. 483-498 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.355, + 0.787, + 0.395 + ], + "angle": 0, + "content": "2. Bai, Y., Cao, M., Gao, D., Cao, Z., Chen, C., Fan, Z., Nie, L., Zhang, M.: RaSa: Relation and sensitivity aware representation learning for text-based person search. In: Proc. Int. Joint Conf. Artif. Intell. pp. 555-563 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.396, + 0.787, + 0.422 + ], + "angle": 0, + "content": "3. Bansal, A., Sikka, K., Sharma, G., Chellappa, R., Divakaran, A.: Zero-shot object detection. In: Proc. Eur. Conf. Comput. Vis. pp. 397-414 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.424, + 0.787, + 0.464 + ], + "angle": 0, + "content": "4. Chai, L., Liu, Y., Liu, W., Han, G., He, S.: CrowdGAN: Identity-free interactive crowd video generation and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 44(6), 2856-2871 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.465, + 0.787, + 0.506 + ], + "angle": 0, + "content": "5. Chen, C., Ye, M., Jiang, D.: Towards modality-agnostic person re-identification with descriptive query. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15128-15137 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.507, + 0.787, + 0.548 + ], + "angle": 0, + "content": "6. Dou, Z., Kamath, A., Gan, Z., Zhang, P., Wang, J., Li, L., Liu, Z., Liu, C., LeCun, Y., Peng, N., Gao, J., Wang, L.: Coarse-to-fine vision-language pre-training with fusion in the backbone. In: Adv. Neural Inf. Process. Syst. pp. 32942-32956 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.549, + 0.787, + 0.59 + ], + "angle": 0, + "content": "7. Du, Y., Wei, F., Zhang, Z., Shi, M., Gao, Y., Li, G.: Learning to prompt for open-vocabulary object detection with vision-language model. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 14084-14093 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.591, + 0.787, + 0.618 + ], + "angle": 0, + "content": "8. Gong, S., Zhang, S., Yang, J., Dai, D., Schiele, B.: Class-agnostic object counting robust to intraclass diversity. In: Proc. Eur. Conf. Comput. Vis. pp. 
388-403 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.619, + 0.787, + 0.659 + ], + "angle": 0, + "content": "9. He, S., Chen, W., Wang, K., Luo, H., Wang, F., Jiang, W., Ding, H.: Region generation and assessment network for occluded person re-identification. IEEE Trans. Inf. Forensics Secur. 19, 120–132 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.66, + 0.787, + 0.701 + ], + "angle": 0, + "content": "0. Hobley, M., Prisacariu, V.: Learning to count anything: Reference-less class-agnostic counting with weak supervision. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.702, + 0.787, + 0.742 + ], + "angle": 0, + "content": "1. Hsieh, M., Lin, Y., Hsu, W.H.: Drone-based object counting by spatially regularized regional proposal network. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 4165-4173 (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.743, + 0.787, + 0.771 + ], + "angle": 0, + "content": "2. Huang, Z., Dai, M., Zhang, Y., Zhang, J., Shan, H.: Point, segment and count: A generalized framework for object counting. arXiv:2311.12386 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.772, + 0.787, + 0.799 + ], + "angle": 0, + "content": "3. Jiang, R., Liu, L., Chen, C.: CLIP-Count: Towards text-guided zero-shot object counting. In: Proc. ACM Multimedia. pp. 4535-4545 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.8, + 0.787, + 0.84 + ], + "angle": 0, + "content": "4. Kang, S., Moon, W., Kim, E., Heo, J.: VLCounter: Text-aware visual representation for zero-shot object counting. In: Proc. AAAI Conf. Artif. Intell. pp. 2714-2722 (2024)" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.325, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.361, + 0.127 + ], + "angle": 0, + "content": "H. Zhu et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "15. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W., Dollár, P., Girshick, R.B.: Segment anything. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 3992-4003 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.19, + 0.788, + 0.232 + ], + "angle": 0, + "content": "16. Li, J., Li, D., Savarese, S., Hoi, S.C.H.: BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In: Proc. Int. Conf. Mach. Learn. pp. 19730-19742 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.232, + 0.788, + 0.272 + ], + "angle": 0, + "content": "17. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Proc. Int. Conf. Mach. Learn. pp. 12888-12900 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.273, + 0.788, + 0.314 + ], + "angle": 0, + "content": "18. Li, S., Sun, L., Li, Q.: CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. In: Proc. AAAI Conf. Artif. Intell. pp. 1405-1413 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.788, + 0.342 + ], + "angle": 0, + "content": "19. 
Liu, C., Zhong, Y., Zisserman, A., Xie, W.: CounTR: Transformer-based generalised visual counting. In: Proc. Brit. Mach. Vis. Conf. p. 370 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.343, + 0.788, + 0.383 + ], + "angle": 0, + "content": "20. Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., Zhang, L.: Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. arXiv:2303.05499 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.384, + 0.788, + 0.425 + ], + "angle": 0, + "content": "21. Liu, X., Yang, J., Ding, W., Wang, T., Wang, Z., Xiong, J.: Adaptive mixture regression network with local counting map for crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 241-257 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.425, + 0.788, + 0.466 + ], + "angle": 0, + "content": "22. Liu, Y., Ren, S., Chai, L., Wu, H., Xu, D., Qin, J., He, S.: Reducing spatial labeling redundancy for active semi-supervised crowd counting. IEEE Trans. Pattern Anal. Mach. Intell. 45(7), 9248-9255 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.467, + 0.788, + 0.494 + ], + "angle": 0, + "content": "23. Liu, Y., Wen, Q., Chen, H., Liu, W., Qin, J., Han, G., He, S.: Crowd counting via cross-stage refinement networks. IEEE Trans. Image Process. 29, 6800-6812 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.495, + 0.788, + 0.535 + ], + "angle": 0, + "content": "24. Liu, Y., Xu, D., Ren, S., Wu, H., Cai, H., He, S.: Fine-grained domain adaptive crowd counting via point-derived segmentation. In: Proc. IEEE Int. Conf. Multimedia Expo. pp. 2363-2368 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.536, + 0.788, + 0.563 + ], + "angle": 0, + "content": "25. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Proc. Int. Conf. Learn. Represent. (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.564, + 0.788, + 0.591 + ], + "angle": 0, + "content": "26. Lu, E., Xie, W., Zisserman, A.: Class-agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 669-684 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.592, + 0.788, + 0.632 + ], + "angle": 0, + "content": "27. Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., Li, Y.: Delving into out-of-distribution detection with vision-language representations. In: Adv. Neural Inf. Process. Syst. pp. 35087-35102 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.633, + 0.788, + 0.674 + ], + "angle": 0, + "content": "28. Mundhenk, T.N., Konjevod, G., Sakla, W.A., Boakye, K.: A large contextual dataset for classification, detection and counting of cars with deep learning. In: Proc. Eur. Conf. Comput. Vis. pp. 785-800 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.675, + 0.788, + 0.702 + ], + "angle": 0, + "content": "29. Nguyen, T., Pham, C., Nguyen, K., Hoai, M.: Few-shot object counting and detection. In: Proc. Eur. Conf. Comput. Vis. pp. 348-365 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.702, + 0.788, + 0.757 + ], + "angle": 0, + "content": "30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Proc. Int. Conf. Mach. Learn. pp. 8748-8763 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.758, + 0.788, + 0.785 + ], + "angle": 0, + "content": "31. 
Ranjan, V., Le, H.M., Hoai, M.: Iterative crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 278-293 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.786, + 0.788, + 0.813 + ], + "angle": 0, + "content": "32. Ranjan, V., Nguyen, M.H.: Exemplar free class agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 71-87 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.814, + 0.788, + 0.841 + ], + "angle": 0, + "content": "33. Ranjan, V., Sharma, U., Nguyen, T., Hoai, M.: Learning to count everything. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 3394-3403 (2021)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.788, + 0.841 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.4, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Object Counting with Good Exemplars" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "34. Sam, D.B., Agarwalla, A., Joseph, J., Sindagi, V.A., Babu, R.V., Patel, V.M.: Completely self-supervised crowd counting via distribution matching. In: Proc. Eur. Conf. Comput. Vis. pp. 186-204 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.189, + 0.788, + 0.23 + ], + "angle": 0, + "content": "35. Shi, M., Lu, H., Feng, C., Liu, C., Cao, Z.: Represent, compare, and learn: A similarity-aware framework for class-agnostic counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 9529–9538 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.23, + 0.788, + 0.257 + ], + "angle": 0, + "content": "36. Shi, Z., Sun, Y., Zhang, M.: Training-free object counting with prompts. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 323-331 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.257, + 0.788, + 0.298 + ], + "angle": 0, + "content": "37. Song, S., Wan, J., Yang, Z., Tang, J., Cheng, W., Bai, X., Yao, C.: Vision-language pre-training for boosting scene text detectors. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15681-15691 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.298, + 0.788, + 0.339 + ], + "angle": 0, + "content": "38. Sun, G., An, Z., Liu, Y., Liu, C., Sakaridis, C., Fan, D., Van Gool, L.: Indiscernible object counting in underwater scenes. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 13791-13801 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.339, + 0.788, + 0.366 + ], + "angle": 0, + "content": "39. Tian, C., Zhang, X., Liang, X., Li, B., Sun, Y., Zhang, S.: Knowledge distillation with fast CNN for license plate detection. IEEE Trans. Intell. Transp. Syst. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.366, + 0.788, + 0.421 + ], + "angle": 0, + "content": "40. Tyagi, A.K., Mohapatra, C., Das, P., Makharia, G., Mehra, L., AP, P., Mausam: DeGPR: Deep guided posterior regularization for multi-class cell detection and counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 23913-23923 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.421, + 0.788, + 0.462 + ], + "angle": 0, + "content": "41. Dukic, N., Lukezic, A., Zavrtanik, V., Kristan, M.: A low-shot object counting network with iterative prototype adaptation. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 
18872-18881 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.462, + 0.788, + 0.503 + ], + "angle": 0, + "content": "42. Wang, Z., Xiao, L., Cao, Z., Lu, H.: Vision transformer off-the-shelf: A surprising baseline for few-shot class-agnostic counting. In: Proc. AAAI Conf. Artif. Intell. pp. 5832-5840 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.503, + 0.788, + 0.544 + ], + "angle": 0, + "content": "43. Xie, D., Liu, L., Zhang, S., Tian, J.: A unified multi-modal structure for retrieving tracked vehicles through natural language descriptions. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops. pp. 5418-5426 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.544, + 0.788, + 0.585 + ], + "angle": 0, + "content": "44. Xiong, Z., Chai, L., Liu, W., Liu, Y., Ren, S., He, S.: Glance to count: Learning to rank with anchors for weakly-supervised crowd counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 342-351 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.585, + 0.788, + 0.612 + ], + "angle": 0, + "content": "45. Xu, J., Le, H., Nguyen, V., Ranjan, V., Samaras, D.: Zero-shot object counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15548-15557 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.612, + 0.788, + 0.639 + ], + "angle": 0, + "content": "46. Yang, S., Su, H., Hsu, W.H., Chen, W.: Class-agnostic few-shot object counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 869-877 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.639, + 0.788, + 0.68 + ], + "angle": 0, + "content": "47. You, Z., Yang, K., Luo, W., Lu, X., Cui, L., Le, X.: Few-shot object counting with similarity-aware feature enhancement. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 6304-6313 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.68, + 0.788, + 0.72 + ], + "angle": 0, + "content": "48. Zhang, Z., Liu, K., Gao, F., Li, X., Wang, G.: Vision-based vehicle detecting and counting for traffic flow analysis. In: Proc. IEEE Int. Joint Conf. Neural Networks. pp. 2267-2273 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.721, + 0.788, + 0.748 + ], + "angle": 0, + "content": "49. Zheng, Y., Wu, J., Qin, Y., Zhang, F., Cui, L.: Zero-shot instance segmentation. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 2593-2602 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.748, + 0.788, + 0.789 + ], + "angle": 0, + "content": "50. Zhu, H., Yuan, J., Zhong, X., Liao, L., Wang, Z.: Find gold in sand: Fine-grained similarity mining for domain-adaptive crowd counting. IEEE Trans. Multimedia 26, 3842-3855 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.789, + 0.788, + 0.83 + ], + "angle": 0, + "content": "51. Zhu, H., Yuan, J., Zhong, X., Yang, Z., Wang, Z., He, S.: DAOT: Domain-agnostically aligned optimal transport for domain-adaptive crowd counting. In: Proc. ACM Multimedia. pp. 
4319-4329 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.83 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_origin.pdf b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b58c23e0411de32f7b6dd7983f6a610e670c3bc4 --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/1dff8a9f-b79c-4fb3-9456-d993f97bffd3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65a120f3041d1cca08abbc96d98d0518cb28c63c8508a750585b029e0f3e36d2 +size 2317275 diff --git a/2024/Zero-shot Object Counting with Good Exemplars/full.md b/2024/Zero-shot Object Counting with Good Exemplars/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d508f46f06f2a804656aa5571ed8d576857a88c4 --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/full.md @@ -0,0 +1,333 @@ +# Zero-shot Object Counting with Good Exemplars + +Huilin Zhu $^{1,2,3,\dagger}$ , Jingling Yuan $^{1,2,\dagger}$ , Zhengwei Yang $^{4,\dagger}$ , Yu Guo $^{3,5}$ , Zheng Wang $^{4}$ , Xian Zhong $^{1,2,6(\text{四})}$ , and Shengfeng He $^{3(\text{四})}$ + +1 Sanya Science and Education Innovation Park, Wuhan University of Technology +2 Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology + +zhongx@whut.edu.cn + +3 School of Computing and Information Systems, Singapore Management University shengfenghe@smu.edu.sg + +$^{4}$ School of Computer Science, Wuhan University +5 School of Navigation, Wuhan University of Technology +$^{6}$ ROSE@EEE, Nanyang Technological University + +Equal Contribution + +https://github.com/HopooLinZ/VA-Count + +Abstract. Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations. However, a critical challenge in current ZOC methods lies in their inability to identify high-quality exemplars effectively. This deficiency hampers scalability across diverse classes and undermines the development of strong visual associations between the identified classes and image content. To this end, we propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification. The EEM utilizes advanced vision-language pre-taining models to discover potential exemplars, ensuring the framework's adaptability to various classes. Meanwhile, the NSM employs contrastive learning to differentiate between optimal and suboptimal exemplar pairs, reducing the negative effects of erroneous exemplars. VA-Count demonstrates its effectiveness and scalability in zero-shot contexts with superior performance on two object counting datasets. + +# 1 Introduction + +In visual monitoring applications, object counting plays a critical role in analyzing images or videos. Traditional methods focus on high precision within predefined object categories, such as crowds [4, 23], vehicles, and cells [1, 34, 39, 40, 44]. 
Yet, these methods are limited to specific categories, lacking the flexibility to adapt to new, unseen classes. To address these challenges, class-agnostic methods have been developed for scenarios with unseen classes. These methods, including few-shot, reference-free, and zero-shot object counting [12, 32, 35, 46, 47], provide varying levels of independence from predefined object classes. + +![](images/5711ecdb9fded11199d37d21250d794eee6570aa10a0f84b2e75684181b3e47e.jpg) +Fig. 1: Illustration of class-agnostic object counting methods. (a) Few-shot uses limited annotations for counting. (b) Reference-free quantifies objects without annotations. (c) Zero-shot counts specific classes without annotations, further divided into: (c1) Image-text association, leveraging direct image-text correlations. (c2) Class-related exemplar search, using prototypes to link classes with images. (c3) Our method introduces a detection-driven exemplar discovery to harmonize text with visual representations, distinguishing it from prior methods. + +In this context, different strategies are adopted for object counting under varying constraints, as illustrated in Fig. 1. Few-shot counting methods [29,46,47], depicted in Fig. 1(a), method the task as a matching problem, using a small number of annotated bounding boxes to identify and count objects throughout the image. While effective, this method requires fine-tuning with annotations from novel classes, limiting its scalability in real-world surveillance settings due to the sparse availability of annotated bounding boxes. To circumvent the limitations of bounding box annotations, reference-free counting methods are developed [10,19,32,41], as shown in Fig. 1(b). These methods aim to ascertain the total number of objects in an image without relying on specific cues. Nevertheless, the lack of specificity in counting categories makes these methods prone to errors induced by background noise, as they indiscriminately count all visible objects, leading to a lack of control in the counting process. + +In pursuit of more scalable and realistic counting solutions, zero-shot methods [3, 45, 49], illustrated in Fig. 1(c), are introduced. These techniques are designed to count objects from specified classes within an image without prior annotations for those classes, addressing the limitations of both few-shot and reference-free methods by providing enhanced specificity and scalability. These methods can be categorized into two streams. The initial method [13, 14] leans on image-text alignment to comprehend object-related correlations without needing physical exemplars. This method enhances scalability for unidentified classes but + +struggles with adequately representing image details for target classes, especially those with atypical shapes, as demonstrated in Fig. 1(c1). Conversely, the second method [45] concentrates on identifying objects through the discovery of class-relevant exemplars. This is achieved by creating pseudo labels that assess the resemblance between image patches and class-generated prototypes. Nevertheless, this method's reliance on arbitrary patch selection hampers its ability to accurately outline entire objects. Additionally, the absence of direct text-image engagement restricts its scalability, tethered to the pre-defined categories present in the training dataset, as illustrated in Fig. 1(c2). + +As shown in Fig. 1(c3), we introduce the Visual Association-based Zero-shot Object Counting (VA-Count) framework. 
VA-Count aims to create a robust link between specific object categories and their corresponding visual representations, ensuring adaptability to various classes. This framework is anchored by three core principles. First, it prioritizes flexibility and scalability, enabling adaptation to novel classes beyond its initial parameters. Second, it enhances precision in identifying exemplary objects, strengthening the connection between visual depictions and their categories. Third, it devises strategies to reduce the effects of localization errors on counting precision. Building on these principles, VA-Count integrates an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively. + +In detail, the EEM expands VA-Count's capacity to handle various classes through the integration of Vision-Language Pretaining (VLP) models, such as Grounding DINO [20]. These VLP models, trained on extensive datasets, excel in identifying a wide range of classes by defining specific categories. In the context of ZOC, it is essential to select exemplars that each contain precisely one object from among the potential bounding boxes that might encompass varying object quantities. To this end, we deploy a binary filter aimed at rigorously refining the set of candidate exemplars, excluding those that fail to comply with the single-object requirement. This filtration step is pivotal for ensuring the precision and consistency necessary for ZOC. + +Moreover, even when potential exemplars accurately represent single objects, the unintentional inclusion of exemplars not pertaining to the target category poses a persistent problem. This misalignment introduces uncertainty into the learning process that associates exemplars with images. To counteract this issue, the NSM module operates as a safeguard by identifying negative exemplars, which are unrelated to the intended category. Contrasting with the EEM, which focuses on selecting ideal samples to foster visual connections with images, the NSM employs samples from irrelevant classes to build these associations, utilizing contrastive learning to differentiate between them. This method of contrastive learning acts as a rectifying mechanism, markedly improving the accuracy and efficiency of the associative learning framework. + +In summary, our contributions are threefold: + +- We introduce a Visual Association-based Zero-shot Object Counting framework, which facilitates high-quality exemplar identification for any class + +without needing annotated examples and forges robust visual connections between objects and images. + +- We propose an exemplar enhancement model leveraging the universal class-agnostic detection capabilities of the Vision-Language Pretaining model for precise exemplar selection, and a Noise Suppression Module to minimize the adverse effects of incorrect samples in visual associative learning. + +- Extensive experiments conducted on two object counting datasets demonstrate the state-of-the-art accuracy and generalizability of VA-Count, underscoring its notable scalability. + +# 2 Related Work + +# 2.1 Class-Specific Object Counting + +Object counting plays a crucial role in public safety, public administration, and the liberation of human labor. 
Currently, class-specific object counting [22,32, 35,46,47] is the predominant method, which entails identifying specific object categories (such as humans [21,24,31,50,51], vehicles [28,48], fishes [38], cells [40], etc.) leveraging object detection or density estimation and counting accordingly. While these methods show excellence within close-set scenarios with a fixed number of categories, transferring them to arbitrary categories poses challenges. Introducing novel categories necessitates retraining or fine-tuning a counting model with new data, which limits their applicability in real scenarios. + +# 2.2 Class-Agnostic Object Counting + +Class-agnostic object counting [8, 26, 29, 36, 42] is proposed for scenarios with less data, which can be divided into few-shot and zero-shot depending on the annotation usage. Specifically, GMN [26] initially frames the class-agnostic counting task as a matching task, leading to FamNet [33], which implements ROI Pooling for broad applicability across FSC-147. As multi-class datasets emerged, the focus shifts towards few-shot methods, where LOCA [41] enhances feature representation and exemplar adaptation; and CounTR [19] utilizes transformers for scalable counting with a two-stage training model. BMNet [?] innovates with a bilinear matching network for refined object similarity assessments. In the realm of zero-shot methods, which are categorized into two types, methods like ZSC [45] leverage textual inputs to generate prototypes and filter image patches, thus reducing the need for extensive labeling, albeit with fixed generators that limit scalability. CLIP-Count [13] employs CLIP to encode text and images separately, establishing semantic associations crucial for intuitive counting. VL-Count [14] takes this further by enhancing CLIP's text-image association learning specifically for object counting. Additionally, PseCo [12] introduces a SAM-based multi-task framework that achieves segmentation, dot mapping, and detection on counting data, offering broad application prospects but also necessitating greater computational resources. + +![](images/7deda26ca26686abed708e110485281bf583700896371b1c67045eaac55f7beb.jpg) +Fig. 2: Overview of the proposed method. Proposed method focuses on two main elements: the Exemplar Enhancement Module (EEM) for improving exemplar quality through a patch selection integrated with Grounding DINO [20], and the Noise Suppression Module (NSM) that distinguishes between positive and negative class samples using density maps. It employs a Contrastive Loss function to refine the precision in identifying target class objects from others in an image. + +# 2.3 Vision-Language Pretaining Model + +In recent years, Vision-Language Pretaining (VLP) methods have proven pivotal in enhancing scene understanding and representation learning capabilities. Their adaptability makes them applicable across a wide range of downstream tasks [2,5-7,9,18,27,37,43]. CLIP [30] segregates vision and language features, aligning them through contrastive learning. BLIP [17] introduces a multimodal mixture of encoders and decoders to align different modalities. Building upon this, BLIP2 [16] combines specialized vision and language models to enhance multimodal understanding capabilities through bootstrapping. Grounding DINO [20] incorporates language into close-set detection, improving generalization for open-set detection. 
The Segment Anything Model (SAM) [15] is based on a prompt-based segmentation task, allowing flexible prompts for zero-shot capabilities across diverse tasks. VLP models, known for their robust multimodal comprehension and scene understanding, significantly advance deep learning and facilitate learning of unknown classes. + +# 3 Proposed Method + +# 3.1 Formula Definition + +As shown in Fig. 2, we introduce a Visual Association-based Zero-shot Object Counting framework (VA-Count) focusing on zero-shot, class-agnostic object counting. The categories among the training set $C_{\mathrm{train}}$ , validation set $C_{\mathrm{val}}$ , and testing set $C_{\mathrm{test}}$ are distinguished, ensuring no overlap among them ( $C_{\mathrm{train}} \cap C_{\mathrm{val}} \cap C_{\mathrm{test}} = \emptyset$ ). VA-Count generates density maps $D$ from input images $I$ for + +Algorithm 1 Grounding DINO-Guided Exemplar Enhancement Module +1: I: Input image +2: $T^p$ : Positive text label (\{specific class\}), $T^n$ : Negative text label ("object") +3: $B^p$ : Bounding boxes for positive samples, $S^p$ : Logits for positive samples +4: $B^n$ : Bounding boxes for negative samples, $S^n$ : Logits for negative samples +5: $\tau_l$ : Logits threshold, $\tau_{\mathrm{iou}}$ : IoU threshold +6: M(\cdot): Single Object Classifier +7: Input: I, $T^p$ , $T^n$ +8: Output: $\mathcal{O}^p = \{(B^p, S^p)\}$ : Positive outputs, $\mathcal{O}^n = \{(B^n, S^n)\}$ : Negative outputs +9: Grounding DINO Process: +10: F ← ExtractFeatures(I) +11: $S^p, B^p \gets \text{Detect}(F, T^p)$ , filter by $\tau_l$ ; and $S^n, B^n \gets \text{Detect}(F, T^n)$ , filter by $\tau_l$ +12: Dedduplication and Filtering: +13: Initialize $B_{\text{filtered}}^n, B_{\text{new}}^p, B_{\text{new}}^n$ +14: for $b^n$ in $B^n$ do ▷ Remove duplicates +15: if $b^n$ is unique in $B^n$ with IoU < $\tau_{\mathrm{iou}}$ then +16: $B_{\text{filtered}}^n$ .append $(b^n)$ +17: end if +18: end for +19: for all $b \in B^p \cup B_{\text{filtered}}^n$ do ▷ Single object filter +20: if $M(b)$ is true then +21: Add $b$ to the appropriate new set +22: end if +23: end for +24: Update $\mathcal{O}^p, \mathcal{O}^n$ with new sets + +any given class $C$ , and counts objects using these density maps. Specifically, VA-Count utilizes pseudo-exemplars $E^p$ to enhance image-text associations, acting as a bridge to establish robust visual correlations between $E^p$ and the images $I$ . To extract exemplars from images, we propose the use of two key modules: the Exemplar Enhancement Module (EEM) (cf. Sec. 3.2) and the Noise Suppression Module (NSM) (cf. Sec. 3.3). + +To alleviate the noise introduced by objects belonging to other classes on the target objects within images, the EEM and NSM are simultaneously used to obtain positive exemplars $B^{p}$ and negative exemplars $B^{p}$ . The EEM consists of Grounding DINO $G(\cdot)$ and a filtering module $\varPhi(\cdot)$ . There are different filtering modules for positive and negative samples $\varPhi^{p}(\cdot)$ and $\varPhi^{n}(\cdot)$ respectively. $\varPhi^{p}(\cdot)$ is a binary classifier, while $\varPhi^{n}(\cdot)$ consists of a binary classifier and a dedduplication module. The two kinds of pseudo-exemplars and images are then fed into the Counter $\Gamma(\cdot)$ simultaneously for correlation learning. $\Gamma(\cdot)$ comprises an image encoder, correlation module, and decoder. 
The optimization goal of this paper is as follows, where $\mu(\cdot)$ denotes the similarity, and $D^{p}, D^{n}, D^{g}$ represent the density maps for positive, negative, and ground truth respectively: + +$$ +D ^ {p} = \Gamma \left(\Phi^ {p} \left(G \left(I, T ^ {p}\right)\right)\right), \quad D ^ {n} = \Gamma \left(\Phi^ {n} \left(G \left(I, T ^ {n}\right)\right)\right), \tag {1} +$$ + +$$ +\text {O b j e c t i v e} = \left\{ \begin{array}{l} \max \mu \left(D ^ {p}, D ^ {g}\right), \\ \min \mu \left(D ^ {n}, D ^ {g}\right). \end{array} \right. \tag {2} +$$ + +# 3.2 Exemplar Enhancement Module + +We introduce an Exemplar Enhancement Module (EEM) for detecting objects within images and refining the detected objects as target exemplars. The workflow of the EEM is outlined in Algorithm 1. The EEM ensures VA-Count's scalability to arbitrary classes by incorporating Vision-Language Pretaining (VLP) models (e.g., Grounding DINO [20]) for potential exemplar discovery, renowned for its efficiency in feature extraction and precision in object localization. Furthermore, the EEM involves meticulously discovering and refining potential exemplars to enhance the quality of positive and negative exemplars for precise object counting. + +Grounding DINO-Guided Box Selection. Given the training set input image $I_{i}$ , accompanied by predefined sets of positive text labels $T_{i}^{p} = \{C_{i}\}$ and negative text labels $T_{i}^{n} = \text{"object"}$ , where $C_i$ represents the specified target class for the input image and $T_{i}^{n}$ is fixed as "object". These labels correspond to the target objects and the noise objects, respectively. Taking positive exemplar discovery as an example, Grounding DINO assigns logits value $S_{i}^{p} = \{s_{i,j}\}_{j=0}^{m}$ to all candidate bounding boxes $B_{i}^{p} = \{b_{i,j}\}_{j=0}^{m}$ based on $T_{i}^{p}$ , $m$ denotes the number of candidate boxes within the image. For the $j$ -th box in the $i$ -th image, $s_{i,j}$ represents the likelihood that $b_{i,j}$ belongs to the specified class text $C_i$ . The output of positive candidate boxes $\mathcal{O}^p$ can be formulated as: + +$$ +\mathcal {O} ^ {p} = \{G (I _ {i}, T _ {i} ^ {p}) \} _ {i = 0} ^ {k} = \{(B _ {i} ^ {p}, \mathcal {S} _ {i} ^ {p}) \} _ {i = 0} ^ {k}, \tag {3} +$$ + +where $k$ denotes the number of images in the training set. + +Negative Samples and Dedduplication. To minimize the impact of irrelevant classes on the counting accuracy of the target object, we adopt a filtering method for negative samples. Initially, we obtain all candidate bounding boxes for objects within each image. Similar to Eq. (3), the negative candidate boxes $\mathcal{O}^n$ without filtering can be formulated as: + +$$ +\mathcal {O} ^ {n} = \left\{G \left(I _ {i}, T _ {i} ^ {n}\right) \right\} _ {i = 0} ^ {k} = \left\{\left(B _ {i} ^ {n}, \mathcal {S} _ {i} ^ {n}\right) \right\} _ {i = 0} ^ {k}, \tag {4} +$$ + +where for each image $I_{i}$ , the term $T_{i}^{n} =$ "object" is employed to identify and generate all bounding boxes $B^{n}$ within that image. This method guarantees the detection of bounding boxes for all objects present in the image. + +Then, for each image $I_{i}$ , we assess each bounding box $b^{n}$ from the negative candidate boxes $B^n$ , and each $b^{n}$ is evaluated to determine its uniqueness in relation to the boxes within $B^{p}$ . 
Specifically, a bounding box is deemed unique if its overlap with any box in $B^{p}$ is minimal, based on the Intersection over Union (IoU) threshold $\tau_{\mathrm{iou}}$ , which can be formulated as: + +$$ +\operatorname {I o U} \left(B ^ {p}, B ^ {n}\right) = \frac {B ^ {p} \cap B ^ {n}}{B ^ {p} \cup B ^ {n}}, \tag {5} +$$ + +where $B^p \cap B^n$ and $B^p \cup B^n$ denotes the intersection and union between positive $B^p$ and negative $B^n$ boxes. Unique negative boxes $b^n$ are then included in the final set $B_{\text{filtered}}^n$ of negative exemplars. + +Single Object Exemplar Filtering. While DINO excels at identifying targets for arbitrary classes, each candidate box does not always contain a single object because boxes encompassing multiple objects may carry higher confidence levels than boxes of single objects. To ensure the integrity of the visual connections established with images, it's imperative to select exemplars that exclusively contain a single object. To achieve this, we treat singular discrimination as a binary classification task, using the binary classifier $\delta(\cdot)$ to refine candidate bounding boxes, ensuring each exemplar contains a single object. + +As shown in Fig. 3, $\delta(\cdot)$ leverages a frozen Clip-vit backbone, integrated with a trainable Feed-Forward Network (FFN) for binary classification tasks. Training data is meticulously curated, consisting of samples of single and multiple objects. The labeled single-object samples are the exemplars in the training sets, and the labeled multi-object samples consist of randomly cropped patches and the entire image. To ensure that the class-agnostic counting is maintained, the training data is split for training and evaluation with disjoint samples, ensuring robust exemplar assessment. The classification results for positive candidate boxes $b^{p} \in B^{p}$ can be formulated as: + +![](images/1d2a5d26f6d67a0f1c228b374e846fa3da98af34ff3c22f4d036d8bd4fce9f35.jpg) +Fig. 3: Illustration of the single object exemplar filtering with a frozen Clip-vit encoder and a trainable FFN to distinguish single from multiple objects. + +$$ +\delta \left(b ^ {p}\right) = \operatorname {F F N} \left(\operatorname {C l i p - v i t} \left(b ^ {p}\right)\right), \tag {6} +$$ + +and the filtered set $B_{\mathrm{new}}$ contains bounding boxes $b^{p}$ that are conditioned on the classification results, which can be formulated as: + +$$ +B _ {\text {n e w}} ^ {p} \leftarrow B _ {\text {n e w}} ^ {p} \cup \{b | \delta (b ^ {p}) = 1 \}, \tag {7} +$$ + +where the symbol $\leftarrow$ signifies the update operation for the set $B_{\mathrm{new}}^p$ , and the set builder notation $\{b|\delta(b^p) = 1\}$ represents the collection of bounding boxes for which $\delta(b^p)$ predicts a positive outcome. + +# 3.3 Noise Suppression Module + +In the context of the EEM, text-image alignment is redefined as object-image alignment by identifying positive $B^{p}$ and negative $B^{n}$ exemplars. We delves + +into generating positive and negative density maps and alleviating the noise introduced by the negative exemplars. + +Initially, for each image $I_{i}$ , we select the top three patches with the highest $S^p$ from the positive candidate boxes $B_{\mathrm{new}}^p$ as positive exemplars $E^{p} = \{b_{i}^{p}\}_{i = 1}^{k}$ and the top three patches with the highest $S^n$ from the negative candidate boxes $B_{\mathrm{filtered}}^n$ as negative exemplars $E^n = \{b_i^n\}_{i = 1}^k$ . 
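A minimal sketch of this exemplar selection step is given below (an illustration, not the authors' released implementation; `single_object` is a hypothetical stand-in for the binary classifier $\delta(\cdot)$ of Eq. (6), and boxes and logits are assumed to come from Grounding DINO as in Eqs. (3)–(4)). It de-duplicates negative candidates against the positive boxes with the IoU test of Eq. (5), filters both sets to single-object patches, and keeps the top-3 candidates per image by their logits as $E^p$ and $E^n$:

```python
from typing import Callable, List, Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes, as in Eq. (5)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0


def select_exemplars(
    pos: Sequence[Tuple[Box, float]],      # positive candidates (box, logit) from G(I, T^p)
    neg: Sequence[Tuple[Box, float]],      # negative candidates (box, logit) from G(I, "object")
    single_object: Callable[[Box], bool],  # hypothetical stand-in for the classifier delta(.)
    tau_iou: float = 0.5,
    k: int = 3,
) -> Tuple[List[Box], List[Box]]:
    # De-duplicate negatives: keep only boxes that barely overlap any positive box.
    neg_dedup = [(b, s) for b, s in neg if all(iou(b, p) < tau_iou for p, _ in pos)]
    # Single-object filtering for both sets, then take the top-k candidates by logit.
    keep_pos = [(b, s) for b, s in pos if single_object(b)]
    keep_neg = [(b, s) for b, s in neg_dedup if single_object(b)]

    def top(cands):
        return [b for b, _ in sorted(cands, key=lambda t: t[1], reverse=True)[:k]]

    return top(keep_pos), top(keep_neg)
```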
Following CounTR [19], we build the Counter $\Gamma (\cdot)$ with feature interaction to fuse information from both image encoders. Specifically, we merge encoder outputs by using image features as queries and the linear projections of sample features as keys and values, ensuring dimension consistency with image features, in accordance with the self-similarity principle in counting, which can be formulated as: + +$$ +\boldsymbol {F} _ {\text {f u s e}} = \Gamma_ {\text {f u s e}} \left(\boldsymbol {F} _ {\text {q u e r y}}, \boldsymbol {W} ^ {k} \boldsymbol {F} _ {\text {k e y}}, \boldsymbol {W} ^ {v} \boldsymbol {F} _ {\text {v a l u e}}\right) \in \mathbb {R} ^ {M \times D}, \tag {8} +$$ + +where $\pmb{F}$ denotes the feature representations, $\pmb{W}^k$ and $\pmb{W}^v$ are the learnable weights for keys and values from $\{E^p,E^n\}$ , $M$ denotes the number of tokens, $D$ is the feature dimensionality, and $\mathbb{R}^{M\times D}$ the space of the feature matrix. The decoder outputs the density heatmap after up-sampling the fused features to the input image's dimensions: + +$$ +D _ {i} ^ {n} = \Gamma_ {\text {d e c o d e}} \left(\boldsymbol {F} _ {\text {f u s e}} ^ {n}\right), \quad D _ {i} ^ {p} = \Gamma_ {\text {d e c o d e}} \left(\boldsymbol {F} _ {\text {f u s e}} ^ {p}\right). \tag {9} +$$ + +Contrastive Learning and Loss Functions. The objective of the NSM in VA-Count is to reduce the impact of noise in images on counting performance while ensuring the accuracy of density map predictions. To achieve this, a contrastive loss $\mathcal{L}_C$ is proposed, using specified class density maps as positive samples and non-specified class density maps as negative samples. This involves maximizing the similarity between positive density maps and the ground-truth density maps and minimizing the similarity between negative density maps and the ground-truth density maps, as detailed in Eq. (10). To guide density map generation, we use the loss method from CounTR [19]. + +The density loss $\mathcal{L}_D$ is calculated as the mean squared error between each pixel of the density map $D_i^p$ generated for positive samples and the ground-truth density map $D_i^g$ , as shown in Eq. (11). $H$ and $W$ respectively denote the height and width of the density map. + +$$ +\mathcal {L} _ {C} \left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\right) = - \log \frac {\exp \sin \left(D ^ {p} , D ^ {g}\right)}{\exp \sin \left(D ^ {p} , D ^ {g}\right) + \exp \sin \left(D ^ {n} , D ^ {g}\right)}, \tag {10} +$$ + +$$ +\mathcal {L} _ {D} \left(D _ {i} ^ {p}, D _ {i} ^ {g}\right) = \frac {1}{H W} \sum \left\| D _ {i} ^ {p} - D _ {i} ^ {g} \right\| _ {2} ^ {2}, \tag {11} +$$ + +$$ +\mathcal {L} _ {\text {t o t a l}} \left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\right) = \mathcal {L} _ {C} + \mathcal {L} _ {D}. \tag {12} +$$ + +# 4 Experimental Result + +# 4.1 Datasets and Implementation Details + +Datasets. FSC-147 [10] dataset is tailored for class-agnostic counting with 6,135 images and 147 classes. Unique for its non-overlapping class subsets, it + +provides class labels and dot annotations for zero-shot counting using textual prompts. + +CARPK [11] dataset offers a bird's-eye view of 89,777 cars in 1,448 parking lot images, testing the method's cross-dataset transferability and adaptability. + +Evaluation Metrics. Following previous class-agnostic object counting methods [29], the evaluation metrics employed are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). 
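For reference, a minimal sketch of these two metrics computed over per-image counts (standard definitions, not tied to the authors' code; the predicted count of an image is taken here as the sum of its predicted density map):

```python
import numpy as np


def mae_rmse(pred_counts, gt_counts):
    """Mean Absolute Error and Root Mean Square Error over per-image object counts."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = float(np.mean(np.abs(pred - gt)))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return mae, rmse
```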
# 4 Experimental Result + +# 4.1 Datasets and Implementation Details + +Datasets. The FSC-147 [10] dataset is tailored for class-agnostic counting, with 6,135 images covering 147 classes. Unique for its non-overlapping class subsets, it provides class labels and dot annotations for zero-shot counting using textual prompts. + +The CARPK [11] dataset offers a bird's-eye view of 89,777 cars across 1,448 parking-lot images, testing a method's cross-dataset transferability and adaptability. + +Evaluation Metrics. Following previous class-agnostic object counting methods [29], the evaluation metrics employed are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). MAE is widely used to assess model accuracy, while RMSE evaluates model robustness. + +Exemplar Enhancement Module uses Grounding DINO $^7$ for bounding box proposals, setting the logits threshold $\tau_{l}$ to 0.02. For negative sample filtering, the IoU threshold $\tau_{\mathrm{iou}}$ is set to 0.5. The single object classifier employs CLIP ViT-B/16 $^8$ as its backbone with an FFN comprising two linear layers, trained for 100 epochs at a learning rate of $10^{-4}$; its data is partitioned into training and evaluation subsets in a 7:3 ratio. + +Noise Suppression Module follows CounTR's [19] two-stage training: MAE pre-training followed by AdamW [25]-optimized fine-tuning. It is trained on FSC-147 with a learning rate of $10^{-5}$ and a batch size of 8 on an NVIDIA L40 GPU. + +# 4.2 Comparison with the State-of-the-Arts + +We benchmark our method against a variety of state-of-the-art few-shot and zero-shot counting methods on FSC-147, and additionally compare it with class-specific counting models on CARPK. + +Quantitative Results on FSC-147. We evaluate the effectiveness of VA-Count on FSC-147, comparing it with state-of-the-art counting methods as detailed in Tab. 1. Our method surpasses the exemplar-discovery method ZSC [45], demonstrating that the exemplars found by VA-Count are of higher quality. Among zero-shot methods, VA-Count achieves the best MAE and the second-best RMSE, validating its effectiveness; despite being second in RMSE, it still outperforms ZSC. In comparison with CLIP-Count [13], VA-Count has a few inferior samples due to some introduced noise but, overall, surpasses CLIP-Count in performance. + +Quantitative Results on CARPK. In Tab. 2, VA-Count's cross-domain and non-cross-domain performance on CARPK is compared with previous methods. In the zero-shot group, VA-Count achieves the best performance, with its cross-domain results even approaching those of the few-shot group, demonstrating outstanding transferability. It is worth noting that employing $\varPhi(\cdot)$ significantly reduces errors compared to directly using Grounding DINO [20]. In the absence of any training data, VA-Count also outperforms FamNet [33] in the cross-domain group. + +Ablation Study. We conduct both quantitative and qualitative analyses of the contribution of each component in our proposed VA-Count, including the Grounding DINO candidate box extraction and the filtering module. The quantitative outcomes are presented in Tab. 3.
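Before turning to the tables, note that the MAE and RMSE values reported below are computed from per-image counts, where each predicted count is obtained by summing the predicted density map. The helper below is a small illustrative sketch; the function name and toy numbers are not from the released code.

```python
import numpy as np

def mae_rmse(pred_counts, gt_counts):
    """Mean Absolute Error and Root Mean Square Error over a set of images."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.abs(pred - gt).mean()
    rmse = np.sqrt(((pred - gt) ** 2).mean())
    return mae, rmse

# Toy example with three images.
print(mae_rmse([12.4, 87.0, 31.5], [10, 90, 30]))
```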
Table 1: Quantitative results of our VA-Count and other state-of-the-art competitors on FSC-147. F-S, R-F, and Z-S are abbreviations for the Few-shot, Reference-free, and Zero-shot settings. The best results for each scheme and the second-best results in the zero-shot setting are highlighted in bold and underlined, respectively. + +
SchemeMethodVenueShotVal SetTest SetAvg
MAERMSEMAERMSEMAERMSE
F-SFamNet [33]CVPR'21324.3270.9422.56101.5423.4486.24
CFOCNet [46]WACV'21321.1961.4122.10112.7121.6587.06
CounTR [19]BMVC'22313.1349.8311.9591.2312.5470.53
LOCA [41]ICCV'23310.2432.5610.9756.9710.6144.77
SAM [36]WACV'243--19.95132.1619.95132.16
PseCo [12]CVPR'24315.3168.3413.05112.8614.1890.60
CACViT [42]AAAI'24310.6337.959.1348.969.8843.46
FamNet [33]CVPR'21126.0577.0126.76110.9526.4193.98
R-FFamNet [33]CVPR'21032.1598.7532.27131.4632.21115.11
RepRPN-C [32]ACCV'22029.2498.1126.66129.1127.95113.61
CounTR [19]BMVC'22018.0771.8414.71106.8716.3989.36
RCC [10]CVPR'23017.4958.8117.12104.5317.3181.67
LOCA [41]ICCV'23017.4354.9616.22103.9616.8379.46
Z-SZSC [45]CVPR'23026.9388.6322.09115.1724.51101.90
CLIP-Count [13]MM'23018.7961.1817.78106.6218.28583.90
PseCo [12]CVPR'24023.90100.3316.58129.7720.24115.05
VA-CountOurs017.8773.2217.88129.3117.87101.26
 + +Using only the Grounding DINO detector (first row) achieves an error of 52.82 without any training, which, although not as accurate as regression-based methods, ensures that relevant objects are detected. Performance improves slightly after adding the single-object classification filter (second row). Once trained with $\mathcal{L}_D$, the model already meets counting requirements, and adding $\mathcal{L}_C$ (last row) further reduces the MAE. In Tab. 2, we also compare using Grounding DINO alone and with the single-object classification filter on CARPK (last three rows); our binary classifier significantly improves performance, reducing MAE and RMSE by about 10 points. + +# 4.3 Qualitative Analysis + +Analysis of the zero-shot performance. To further verify the effectiveness of the proposed VA-Count framework, we visualize qualitative results in Fig. 4, providing a side-by-side comparison of VA-Count against the few-shot counting method [19]. VA-Count achieves a remarkable resemblance to the ground truth, showcasing a nuanced understanding of object boundaries and densities while being less affected by background noise. Specifically, the first row contains a golden egg buried among white eggs; the few-shot method struggles with this nuanced differentiation, failing to recognize the golden egg distinctly. In the second row, strawberries near flowers also confound the few-shot method. + +Table 2: Quantitative results of our VA-Count and other state-of-the-art competitors on CARPK. $\varPhi(\cdot)$ denotes the single-object classification filter. C and F denote CARPK and FSC-147, respectively. + +
MethodsVenueShotC → CF → C
MAERMSEMAERMSE
FamNet [33]CVPR'21318.1933.6628.8444.47
GMN [26]CVPR'2137.489.90--
BMNet+ [35]CVPR'2235.767.8310.4413.77
CounTR [19]BMVC'2235.757.45--
RCC [10]CVPR'2309.2111.3321.3826.61
CLIP-Count [13]MM'230--11.9616.61
Grounding DINO [20]arXiv'24029.7231.6029.7231.60
Grounding DINO + Φ(·)Ours018.5421.7118.5421.71
VA-CountOurs08.7510.3010.6313.20
 + +Table 3: Ablation study of each component's contribution to the final results on FSC-147. We demonstrate the effectiveness of the two parts of our framework and the two losses: $G(\cdot)$ denotes Grounding DINO, $\varPhi(\cdot)$ the single-object filter, $\mathcal{L}_D$ the density loss, and $\mathcal{L}_C$ the contrastive loss. A check mark indicates that the component is enabled. + +
G(·)Φ(·)L_DL_CVal SetTest Set
MAERMSEMAERMSE
52.82134.4954.48159.30
52.12135.2954.27159.76
19.6373.9418.93116.65
17.8773.2217.88129.31
 + +These examples emphasize VA-Count's superior ability to identify and differentiate between objects with only minor differences. The third row presents a challenging scenario with dense keys partially occluded by hands. This situation tests the model's ability to count tiny, closely situated objects under partial occlusion, and VA-Count identifies and counts them markedly better than the few-shot method. These results highlight the impact of exemplar selection and the incorporation of negative patches in VA-Count, which significantly enhance its counting and localization capabilities and underscore its value for zero-shot object counting. + +![](images/1c22ff0b32e2acf775447d31e4fee0243f2bb657543e369a64bc9e81e7b23d7f.jpg) +Fig. 4: Illustration of heatmaps compared with the few-shot method [19] on FSC-147. The predicted density map is overlaid on the original RGB image. (Best viewed zoomed in.) + +Analysis of Positive and Negative Exemplars. To make our analysis more transparent, we also conduct a qualitative study of the patch selection. As shown in Fig. 5 and Fig. 6, we illustrate the selected positive and negative patches for various categories under the zero-shot setting. Taking a closer look at the positive patches for categories such as crab cakes and green peas, the results show a high degree of accuracy in the model's ability to isolate and highlight the regions containing the target objects. This precision underscores the effectiveness of the VA-Count framework in discerning relevant features amidst complex backgrounds, affirming its robustness in exemplar discovery. Negative patches, especially from categories like strawberries and crab cakes, highlight the model's challenges with visually similar or overlapping areas outside the target category, underscoring the need for improved discriminative abilities. This analysis underscores our paper's impact on zero-shot object counting and the importance of refining visual learning and exemplar selection for future advancements. + +![](images/0d246025080979b318b5e0ba1f9fec8f92f20ee187876a5bfe402eea4ee12e6f.jpg) +Fig. 5: Illustration of the positive (Pos.) and negative (Neg.) exemplars on FSC-147. + +![](images/334ca83d047f24e24263af78e29ddb2f2bdca53ef285741f6b906daddddca24f.jpg) +Pos. + +![](images/64e0ddca09e4fc9b8f2d65940b5aa0dee89f59057a1abdfa9c71184970ca651e.jpg) +Fig. 6: Illustration of the final positive (Pos.) and negative (Neg.) exemplars for images on CARPK. + +![](images/b9507ee3d3fee208ef7dbd4e764d2797cea354905eb16a6fa4668b87b7d3320c.jpg) +Pos. + +![](images/764fe6604ec99ed48edb1936e1211935050102981caf9145a9ffbaf6ef4139d3.jpg) + +![](images/9eba14f7ed8cb60de40d8c79a2ece9b8e62a41c3f0bca7c9f06b48782abf4ed4.jpg) +Fig. 7: Illustration of candidate boxes before and after the exemplar filter for images on CARPK. + +![](images/ef39bb910e527418d63f3ccf849fa6cfb8c1d02b9a605299bdd35f823d0adf06.jpg) + +![](images/6ea60153dbf21e8f0398f553ecb771c32942188b4fd53e3ba739bb3726f61544.jpg) + +Effectiveness of the object exemplar filter. The effectiveness of the object exemplar filter is further evaluated by comparing grounding visualizations with and without the filter. Fig. 7 illustrates this comparison for the car category on CARPK. Images without the filter show multiple cars within a single bounding box, indicating Grounding DINO's [20] inability to isolate individual objects effectively.
Conversely, images with the filter applied demonstrate a significant improvement, with bounding boxes accurately encompassing single cars. This clear distinction highlights the binary classifier's crucial role in ensuring precise object counting by enforcing the single-object criterion within each exemplar, validating the filter's contribution to the accuracy and reliability of the VA-Count framework. + +# 5 Conclusion + +This paper addresses the challenges in class-agnostic object counting by introducing the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count effectively balances the need for scalability across arbitrary classes with the establishment of robust visual connections, overcoming the limitations of existing Zero-shot Object Counting (ZOC) methods. VA-Count comprises an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), dedicated to refining exemplar identification and mitigating adverse impacts, respectively. The EEM utilizes advanced Vision-Language Pre-training models such as Grounding DINO for scalable exemplar discovery, while the NSM mitigates the impact of erroneous exemplars through contrastive learning. VA-Count shows promise in zero-shot counting, performing well on both benchmark datasets and offering precise visual associations and scalability. In the future, we will explore and better utilize advanced vision-language models. + +# Acknowledgments + +This work was supported in part by the National Natural Science Foundation of China under Grant 62271361, the Sanya Yazhou Bay Science and Technology City Administration scientific research project under Grant 2022KF0021, the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020097, and the National Research Foundation Singapore under the AI Singapore Programme under Grant AISG3-GV-2023-011. + +# References + +1. Arteta, C., Lempitsky, V.S., Zisserman, A.: Counting in the wild. In: Proc. Eur. Conf. Comput. Vis. pp. 483-498 (2016) +2. Bai, Y., Cao, M., Gao, D., Cao, Z., Chen, C., Fan, Z., Nie, L., Zhang, M.: RaSa: Relation and sensitivity aware representation learning for text-based person search. In: Proc. Int. Joint Conf. Artif. Intell. pp. 555-563 (2023) +3. Bansal, A., Sikka, K., Sharma, G., Chellappa, R., Divakaran, A.: Zero-shot object detection. In: Proc. Eur. Conf. Comput. Vis. pp. 397-414 (2018) +4. Chai, L., Liu, Y., Liu, W., Han, G., He, S.: CrowdGAN: Identity-free interactive crowd video generation and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 44(6), 2856-2871 (2022) +5. Chen, C., Ye, M., Jiang, D.: Towards modality-agnostic person re-identification with descriptive query. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15128-15137 (2023) +6. Dou, Z., Kamath, A., Gan, Z., Zhang, P., Wang, J., Li, L., Liu, Z., Liu, C., LeCun, Y., Peng, N., Gao, J., Wang, L.: Coarse-to-fine vision-language pre-training with fusion in the backbone. In: Adv. Neural Inf. Process. Syst. pp. 32942-32956 (2022) +7. Du, Y., Wei, F., Zhang, Z., Shi, M., Gao, Y., Li, G.: Learning to prompt for open-vocabulary object detection with vision-language model. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 14084-14093 (2022) +8. Gong, S., Zhang, S., Yang, J., Dai, D., Schiele, B.: Class-agnostic object counting robust to intraclass diversity. In: Proc. Eur. Conf. Comput. Vis. pp. 388-403 (2022) +9. 
He, S., Chen, W., Wang, K., Luo, H., Wang, F., Jiang, W., Ding, H.: Region generation and assessment network for occluded person re-identification. IEEE Trans. Inf. Forensics Secur. 19, 120-132 (2023) +10. Hobley, M., Prisacariu, V.: Learning to count anything: Reference-less class-agnostic counting with weak supervision. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (2023) +11. Hsieh, M., Lin, Y., Hsu, W.H.: Drone-based object counting by spatially regularized regional proposal network. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 4165-4173 (2017) +12. Huang, Z., Dai, M., Zhang, Y., Zhang, J., Shan, H.: Point, segment and count: A generalized framework for object counting. arXiv:2311.12386 (2023) +13. Jiang, R., Liu, L., Chen, C.: CLIP-Count: Towards text-guided zero-shot object counting. In: Proc. ACM Multimedia. pp. 4535-4545 (2023) +14. Kang, S., Moon, W., Kim, E., Heo, J.: VLCounter: Text-aware visual representation for zero-shot object counting. In: Proc. AAAI Conf. Artif. Intell. pp. 2714-2722 (2024) + +15. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W., Dollár, P., Girshick, R.B.: Segment anything. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 3992-4003 (2023) +16. Li, J., Li, D., Savarese, S., Hoi, S.C.H.: BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In: Proc. Int. Conf. Mach. Learn. pp. 19730-19742 (2023) +17. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Proc. Int. Conf. Mach. Learn. pp. 12888-12900 (2022) +18. Li, S., Sun, L., Li, Q.: CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. In: Proc. AAAI Conf. Artif. Intell. pp. 1405-1413 (2023) +19. Liu, C., Zhong, Y., Zisserman, A., Xie, W.: CounTR: Transformer-based generalised visual counting. In: Proc. Brit. Mach. Vis. Conf. p. 370 (2022) +20. Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., Zhang, L.: Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. arXiv:2303.05499 (2023) +21. Liu, X., Yang, J., Ding, W., Wang, T., Wang, Z., Xiong, J.: Adaptive mixture regression network with local counting map for crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 241-257 (2020) +22. Liu, Y., Ren, S., Chai, L., Wu, H., Xu, D., Qin, J., He, S.: Reducing spatial labeling redundancy for active semi-supervised crowd counting. IEEE Trans. Pattern Anal. Mach. Intell. 45(7), 9248-9255 (2023) +23. Liu, Y., Wen, Q., Chen, H., Liu, W., Qin, J., Han, G., He, S.: Crowd counting via cross-stage refinement networks. IEEE Trans. Image Process. 29, 6800-6812 (2020) +24. Liu, Y., Xu, D., Ren, S., Wu, H., Cai, H., He, S.: Fine-grained domain adaptive crowd counting via point-derived segmentation. In: Proc. IEEE Int. Conf. Multimedia Expo. pp. 2363-2368 (2023) +25. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Proc. Int. Conf. Learn. Represent. (2019) +26. Lu, E., Xie, W., Zisserman, A.: Class-agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 669-684 (2019) +27. Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., Li, Y.: Delving into out-of-distribution detection with vision-language representations. In: Adv. Neural Inf. Process. Syst. pp. 35087-35102 (2022) +28. 
Mundhenk, T.N., Konjevod, G., Sakla, W.A., Boakye, K.: A large contextual dataset for classification, detection and counting of cars with deep learning. In: Proc. Eur. Conf. Comput. Vis. pp. 785-800 (2016) +29. Nguyen, T., Pham, C., Nguyen, K., Hoai, M.: Few-shot object counting and detection. In: Proc. Eur. Conf. Comput. Vis. pp. 348-365 (2022) +30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Proc. Int. Conf. Mach. Learn. pp. 8748-8763 (2021) +31. Ranjan, V., Le, H.M., Hoai, M.: Iterative crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 278-293 (2018) +32. Ranjan, V., Nguyen, M.H.: Exemplar free class agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 71-87 (2022) +33. Ranjan, V., Sharma, U., Nguyen, T., Hoai, M.: Learning to count everything. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 3394-3403 (2021) + +34. Sam, D.B., Agarwalla, A., Joseph, J., Sindagi, V.A., Babu, R.V., Patel, V.M.: Completely self-supervised crowd counting via distribution matching. In: Proc. Eur. Conf. Comput. Vis. pp. 186-204 (2022) +35. Shi, M., Lu, H., Feng, C., Liu, C., Cao, Z.: Represent, compare, and learn: A similarity-aware framework for class-agnostic counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 9529–9538 (2022) +36. Shi, Z., Sun, Y., Zhang, M.: Training-free object counting with prompts. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 323-331 (2024) +37. Song, S., Wan, J., Yang, Z., Tang, J., Cheng, W., Bai, X., Yao, C.: Vision-language pre-training for boosting scene text detectors. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15681-15691 (2022) +38. Sun, G., An, Z., Liu, Y., Liu, C., Sakaridis, C., Fan, D., Van Gool, L.: Indiscernible object counting in underwater scenes. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 13791-13801 (2023) +39. Tian, C., Zhang, X., Liang, X., Li, B., Sun, Y., Zhang, S.: Knowledge distillation with fast CNN for license plate detection. IEEE Trans. Intell. Transp. Syst. (2023) +40. Tyagi, A.K., Mohapatra, C., Das, P., Makharia, G., Mehra, L., AP, P., Mausam: DeGPR: Deep guided posterior regularization for multi-class cell detection and counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 23913-23923 (2023) +41. Dukic, N., Lukezic, A., Zavrtanik, V., Kristan, M.: A low-shot object counting network with iterative prototype adaptation. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 18872-18881 (2023) +42. Wang, Z., Xiao, L., Cao, Z., Lu, H.: Vision transformer off-the-shelf: A surprising baseline for few-shot class-agnostic counting. In: Proc. AAAI Conf. Artif. Intell. pp. 5832-5840 (2024) +43. Xie, D., Liu, L., Zhang, S., Tian, J.: A unified multi-modal structure for retrieving tracked vehicles through natural language descriptions. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops. pp. 5418-5426 (2023) +44. Xiong, Z., Chai, L., Liu, W., Liu, Y., Ren, S., He, S.: Glance to count: Learning to rank with anchors for weakly-supervised crowd counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 342-351 (2024) +45. Xu, J., Le, H., Nguyen, V., Ranjan, V., Samaras, D.: Zero-shot object counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15548-15557 (2023) +46. Yang, S., Su, H., Hsu, W.H., Chen, W.: Class-agnostic few-shot object counting. In: Proc. 
IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 869-877 (2021) +47. You, Z., Yang, K., Luo, W., Lu, X., Cui, L., Le, X.: Few-shot object counting with similarity-aware feature enhancement. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 6304-6313 (2023) +48. Zhang, Z., Liu, K., Gao, F., Li, X., Wang, G.: Vision-based vehicle detecting and counting for traffic flow analysis. In: Proc. IEEE Int. Joint Conf. Neural Networks. pp. 2267-2273 (2016) +49. Zheng, Y., Wu, J., Qin, Y., Zhang, F., Cui, L.: Zero-shot instance segmentation. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 2593-2602 (2021) +50. Zhu, H., Yuan, J., Zhong, X., Liao, L., Wang, Z.: Find gold in sand: Fine-grained similarity mining for domain-adaptive crowd counting. IEEE Trans. Multimedia 26, 3842-3855 (2024) +51. Zhu, H., Yuan, J., Zhong, X., Yang, Z., Wang, Z., He, S.: DAOT: Domain-agnostically aligned optimal transport for domain-adaptive crowd counting. In: Proc. ACM Multimedia. pp. 4319-4329 (2023) \ No newline at end of file diff --git a/2024/Zero-shot Object Counting with Good Exemplars/images.zip b/2024/Zero-shot Object Counting with Good Exemplars/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..996a9e710bc459da6bb9a559bef295371206e99f --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:475b026bafa3026004c59ce86a7bdb0ecf8b82ae868f19a98673c5e2b3715798 +size 683640 diff --git a/2024/Zero-shot Object Counting with Good Exemplars/layout.json b/2024/Zero-shot Object Counting with Good Exemplars/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4d7f99042ce8d8311f16d546baaea8a1c7402b20 --- /dev/null +++ b/2024/Zero-shot Object Counting with Good Exemplars/layout.json @@ -0,0 +1,10024 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 133, + 112, + 479, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 112, + 479, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 112, + 479, + 129 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "spans": [ + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": "Huilin Zhu" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{1,2,3,\\dagger}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", Jingling Yuan" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{1,2,\\dagger}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", Zhengwei Yang" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{4,\\dagger}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", Yu Guo" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{3,5}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", Zheng Wang" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", Xian Zhong" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 
+ ], + "type": "inline_equation", + "content": "^{1,2,6(\\text{四})}" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "text", + "content": ", and Shengfeng He" + }, + { + "bbox": [ + 147, + 149, + 467, + 175 + ], + "type": "inline_equation", + "content": "^{3(\\text{四})}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 138, + 183, + 475, + 217 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 138, + 183, + 474, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 183, + 474, + 196 + ], + "spans": [ + { + "bbox": [ + 138, + 183, + 474, + 196 + ], + "type": "text", + "content": "1 Sanya Science and Education Innovation Park, Wuhan University of Technology" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 196, + 475, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 196, + 475, + 217 + ], + "spans": [ + { + "bbox": [ + 138, + 196, + 475, + 217 + ], + "type": "text", + "content": "2 Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Artificial Intelligence, Wuhan University of Technology" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 262, + 218, + 351, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 262, + 218, + 351, + 228 + ], + "spans": [ + { + "bbox": [ + 262, + 218, + 351, + 228 + ], + "type": "text", + "content": "zhongx@whut.edu.cn" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 134, + 228, + 478, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 228, + 478, + 250 + ], + "spans": [ + { + "bbox": [ + 134, + 228, + 478, + 250 + ], + "type": "text", + "content": "3 School of Computing and Information Systems, Singapore Management University shengfenghe@smu.edu.sg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 192, + 250, + 422, + 283 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 205, + 250, + 407, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 250, + 407, + 261 + ], + "spans": [ + { + "bbox": [ + 205, + 250, + 407, + 261 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 205, + 250, + 407, + 261 + ], + "type": "text", + "content": " School of Computer Science, Wuhan University" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 192, + 261, + 422, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 261, + 422, + 272 + ], + "spans": [ + { + "bbox": [ + 192, + 261, + 422, + 272 + ], + "type": "text", + "content": "5 School of Navigation, Wuhan University of Technology" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 205, + 272, + 408, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 272, + 408, + 283 + ], + "spans": [ + { + "bbox": [ + 205, + 272, + 408, + 283 + ], + "type": "inline_equation", + "content": "^{6}" + }, + { + "bbox": [ + 205, + 272, + 408, + 283 + ], + "type": "text", + "content": " ROSE@EEE, Nanyang Technological University" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 267, + 283, + 350, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 283, + 350, + 293 + ], + "spans": [ + { + "bbox": [ + 267, + 283, + 350, + 293 + ], + "type": "text", + "content": "Equal Contribution" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 218, + 294, + 394, + 304 + ], + "type": "text", + "angle": 0, 
+ "lines": [ + { + "bbox": [ + 218, + 294, + 394, + 304 + ], + "spans": [ + { + "bbox": [ + 218, + 294, + 394, + 304 + ], + "type": "text", + "content": "https://github.com/HopooLinZ/VA-Count" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 159, + 330, + 455, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 330, + 455, + 529 + ], + "spans": [ + { + "bbox": [ + 159, + 330, + 455, + 529 + ], + "type": "text", + "content": "Abstract. Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations. However, a critical challenge in current ZOC methods lies in their inability to identify high-quality exemplars effectively. This deficiency hampers scalability across diverse classes and undermines the development of strong visual associations between the identified classes and image content. To this end, we propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification. The EEM utilizes advanced vision-language pre-taining models to discover potential exemplars, ensuring the framework's adaptability to various classes. Meanwhile, the NSM employs contrastive learning to differentiate between optimal and suboptimal exemplar pairs, reducing the negative effects of erroneous exemplars. VA-Count demonstrates its effectiveness and scalability in zero-shot contexts with superior performance on two object counting datasets." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 546, + 230, + 558 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 546, + 230, + 558 + ], + "spans": [ + { + "bbox": [ + 132, + 546, + 230, + 558 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "content": "In visual monitoring applications, object counting plays a critical role in analyzing images or videos. Traditional methods focus on high precision within predefined object categories, such as crowds [4, 23], vehicles, and cells [1, 34, 39, 40, 44]. Yet, these methods are limited to specific categories, lacking the flexibility to adapt to new, unseen classes. To address these challenges, class-agnostic methods have been developed for scenarios with unseen classes. These methods, including few-shot, reference-free, and zero-shot object counting [12, 32, 35, 46, 47], provide varying levels of independence from predefined object classes." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 116, + 474, + 293 + ], + "blocks": [ + { + "bbox": [ + 134, + 116, + 474, + 293 + ], + "lines": [ + { + "bbox": [ + 134, + 116, + 474, + 293 + ], + "spans": [ + { + "bbox": [ + 134, + 116, + 474, + 293 + ], + "type": "image", + "image_path": "5711ecdb9fded11199d37d21250d794eee6570aa10a0f84b2e75684181b3e47e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 303, + 483, + 381 + ], + "lines": [ + { + "bbox": [ + 130, + 303, + 483, + 381 + ], + "spans": [ + { + "bbox": [ + 130, + 303, + 483, + 381 + ], + "type": "text", + "content": "Fig. 1: Illustration of class-agnostic object counting methods. (a) Few-shot uses limited annotations for counting. (b) Reference-free quantifies objects without annotations. (c) Zero-shot counts specific classes without annotations, further divided into: (c1) Image-text association, leveraging direct image-text correlations. (c2) Class-related exemplar search, using prototypes to link classes with images. (c3) Our method introduces a detection-driven exemplar discovery to harmonize text with visual representations, distinguishing it from prior methods." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 411, + 482, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 411, + 482, + 568 + ], + "spans": [ + { + "bbox": [ + 130, + 411, + 482, + 568 + ], + "type": "text", + "content": "In this context, different strategies are adopted for object counting under varying constraints, as illustrated in Fig. 1. Few-shot counting methods [29,46,47], depicted in Fig. 1(a), method the task as a matching problem, using a small number of annotated bounding boxes to identify and count objects throughout the image. While effective, this method requires fine-tuning with annotations from novel classes, limiting its scalability in real-world surveillance settings due to the sparse availability of annotated bounding boxes. To circumvent the limitations of bounding box annotations, reference-free counting methods are developed [10,19,32,41], as shown in Fig. 1(b). These methods aim to ascertain the total number of objects in an image without relying on specific cues. Nevertheless, the lack of specificity in counting categories makes these methods prone to errors induced by background noise, as they indiscriminately count all visible objects, leading to a lack of control in the counting process." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "content": "In pursuit of more scalable and realistic counting solutions, zero-shot methods [3, 45, 49], illustrated in Fig. 1(c), are introduced. These techniques are designed to count objects from specified classes within an image without prior annotations for those classes, addressing the limitations of both few-shot and reference-free methods by providing enhanced specificity and scalability. These methods can be categorized into two streams. The initial method [13, 14] leans on image-text alignment to comprehend object-related correlations without needing physical exemplars. 
This method enhances scalability for unidentified classes but" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 224 + ], + "type": "text", + "content": "struggles with adequately representing image details for target classes, especially those with atypical shapes, as demonstrated in Fig. 1(c1). Conversely, the second method [45] concentrates on identifying objects through the discovery of class-relevant exemplars. This is achieved by creating pseudo labels that assess the resemblance between image patches and class-generated prototypes. Nevertheless, this method's reliance on arbitrary patch selection hampers its ability to accurately outline entire objects. Additionally, the absence of direct text-image engagement restricts its scalability, tethered to the pre-defined categories present in the training dataset, as illustrated in Fig. 1(c2)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 224, + 482, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 224, + 482, + 368 + ], + "spans": [ + { + "bbox": [ + 130, + 224, + 482, + 368 + ], + "type": "text", + "content": "As shown in Fig. 1(c3), we introduce the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count aims to create a robust link between specific object categories and their corresponding visual representations, ensuring adaptability to various classes. This framework is anchored by three core principles. First, it prioritizes flexibility and scalability, enabling adaptation to novel classes beyond its initial parameters. Second, it enhances precision in identifying exemplary objects, strengthening the connection between visual depictions and their categories. Third, it devises strategies to reduce the effects of localization errors on counting precision. Building on these principles, VA-Count integrates an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 368, + 482, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 368, + 482, + 488 + ], + "spans": [ + { + "bbox": [ + 130, + 368, + 482, + 488 + ], + "type": "text", + "content": "In detail, the EEM expands VA-Count's capacity to handle various classes through the integration of Vision-Language Pretaining (VLP) models, such as Grounding DINO [20]. These VLP models, trained on extensive datasets, excel in identifying a wide range of classes by defining specific categories. 
In the context of ZOC, it is essential to select exemplars that each contain precisely one object from among the potential bounding boxes that might encompass varying object quantities. To this end, we deploy a binary filter aimed at rigorously refining the set of candidate exemplars, excluding those that fail to comply with the single-object requirement. This filtration step is pivotal for ensuring the precision and consistency necessary for ZOC." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 488, + 482, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 488, + 482, + 619 + ], + "spans": [ + { + "bbox": [ + 130, + 488, + 482, + 619 + ], + "type": "text", + "content": "Moreover, even when potential exemplars accurately represent single objects, the unintentional inclusion of exemplars not pertaining to the target category poses a persistent problem. This misalignment introduces uncertainty into the learning process that associates exemplars with images. To counteract this issue, the NSM module operates as a safeguard by identifying negative exemplars, which are unrelated to the intended category. Contrasting with the EEM, which focuses on selecting ideal samples to foster visual connections with images, the NSM employs samples from irrelevant classes to build these associations, utilizing contrastive learning to differentiate between them. This method of contrastive learning acts as a rectifying mechanism, markedly improving the accuracy and efficiency of the associative learning framework." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 146, + 620, + 345, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 620, + 345, + 632 + ], + "spans": [ + { + "bbox": [ + 146, + 620, + 345, + 632 + ], + "type": "text", + "content": "In summary, our contributions are threefold:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 641, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 641, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 138, + 641, + 482, + 666 + ], + "type": "text", + "content": "- We introduce a Visual Association-based Zero-shot Object Counting framework, which facilitates high-quality exemplar identification for any class" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 147, + 116, + 480, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 116, + 480, + 140 + ], + "spans": [ + { + "bbox": [ + 147, + 116, + 480, + 140 + ], + "type": "text", + "content": "without needing annotated examples and forges robust visual connections between objects and images." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 140, + 481, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 140, + 481, + 187 + ], + "spans": [ + { + "bbox": [ + 138, + 140, + 481, + 187 + ], + "type": "text", + "content": "- We propose an exemplar enhancement model leveraging the universal class-agnostic detection capabilities of the Vision-Language Pretaining model for precise exemplar selection, and a Noise Suppression Module to minimize the adverse effects of incorrect samples in visual associative learning." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 189, + 481, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 189, + 481, + 224 + ], + "spans": [ + { + "bbox": [ + 138, + 189, + 481, + 224 + ], + "type": "text", + "content": "- Extensive experiments conducted on two object counting datasets demonstrate the state-of-the-art accuracy and generalizability of VA-Count, underscoring its notable scalability." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 243, + 237, + 255 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 243, + 237, + 255 + ], + "spans": [ + { + "bbox": [ + 132, + 243, + 237, + 255 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 270, + 317, + 282 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 270, + 317, + 282 + ], + "spans": [ + { + "bbox": [ + 132, + 270, + 317, + 282 + ], + "type": "text", + "content": "2.1 Class-Specific Object Counting" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 290, + 482, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 290, + 482, + 399 + ], + "spans": [ + { + "bbox": [ + 130, + 290, + 482, + 399 + ], + "type": "text", + "content": "Object counting plays a crucial role in public safety, public administration, and the liberation of human labor. Currently, class-specific object counting [22,32, 35,46,47] is the predominant method, which entails identifying specific object categories (such as humans [21,24,31,50,51], vehicles [28,48], fishes [38], cells [40], etc.) leveraging object detection or density estimation and counting accordingly. While these methods show excellence within close-set scenarios with a fixed number of categories, transferring them to arbitrary categories poses challenges. Introducing novel categories necessitates retraining or fine-tuning a counting model with new data, which limits their applicability in real scenarios." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 417, + 323, + 430 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 417, + 323, + 430 + ], + "spans": [ + { + "bbox": [ + 132, + 417, + 323, + 430 + ], + "type": "text", + "content": "2.2 Class-Agnostic Object Counting" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 437, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 437, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 437, + 482, + 666 + ], + "type": "text", + "content": "Class-agnostic object counting [8, 26, 29, 36, 42] is proposed for scenarios with less data, which can be divided into few-shot and zero-shot depending on the annotation usage. Specifically, GMN [26] initially frames the class-agnostic counting task as a matching task, leading to FamNet [33], which implements ROI Pooling for broad applicability across FSC-147. 
As multi-class datasets emerged, the focus shifts towards few-shot methods, where LOCA [41] enhances feature representation and exemplar adaptation; and CounTR [19] utilizes transformers for scalable counting with a two-stage training model. BMNet [?] innovates with a bilinear matching network for refined object similarity assessments. In the realm of zero-shot methods, which are categorized into two types, methods like ZSC [45] leverage textual inputs to generate prototypes and filter image patches, thus reducing the need for extensive labeling, albeit with fixed generators that limit scalability. CLIP-Count [13] employs CLIP to encode text and images separately, establishing semantic associations crucial for intuitive counting. VL-Count [14] takes this further by enhancing CLIP's text-image association learning specifically for object counting. Additionally, PseCo [12] introduces a SAM-based multi-task framework that achieves segmentation, dot mapping, and detection on counting data, offering broad application prospects but also necessitating greater computational resources." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 136, + 116, + 479, + 238 + ], + "blocks": [ + { + "bbox": [ + 136, + 116, + 479, + 238 + ], + "lines": [ + { + "bbox": [ + 136, + 116, + 479, + 238 + ], + "spans": [ + { + "bbox": [ + 136, + 116, + 479, + 238 + ], + "type": "image", + "image_path": "7deda26ca26686abed708e110485281bf583700896371b1c67045eaac55f7beb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 258, + 482, + 326 + ], + "lines": [ + { + "bbox": [ + 130, + 258, + 482, + 326 + ], + "spans": [ + { + "bbox": [ + 130, + 258, + 482, + 326 + ], + "type": "text", + "content": "Fig. 2: Overview of the proposed method. Proposed method focuses on two main elements: the Exemplar Enhancement Module (EEM) for improving exemplar quality through a patch selection integrated with Grounding DINO [20], and the Noise Suppression Module (NSM) that distinguishes between positive and negative class samples using density maps. It employs a Contrastive Loss function to refine the precision in identifying target class objects from others in an image." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 349, + 336, + 361 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 349, + 336, + 361 + ], + "spans": [ + { + "bbox": [ + 132, + 349, + 336, + 361 + ], + "type": "text", + "content": "2.3 Vision-Language Pretaining Model" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 370, + 482, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 370, + 482, + 538 + ], + "spans": [ + { + "bbox": [ + 130, + 370, + 482, + 538 + ], + "type": "text", + "content": "In recent years, Vision-Language Pretaining (VLP) methods have proven pivotal in enhancing scene understanding and representation learning capabilities. Their adaptability makes them applicable across a wide range of downstream tasks [2,5-7,9,18,27,37,43]. CLIP [30] segregates vision and language features, aligning them through contrastive learning. BLIP [17] introduces a multimodal mixture of encoders and decoders to align different modalities. Building upon this, BLIP2 [16] combines specialized vision and language models to enhance multimodal understanding capabilities through bootstrapping. Grounding DINO [20] incorporates language into close-set detection, improving generalization for open-set detection. The Segment Anything Model (SAM) [15] is based on a prompt-based segmentation task, allowing flexible prompts for zero-shot capabilities across diverse tasks. VLP models, known for their robust multimodal comprehension and scene understanding, significantly advance deep learning and facilitate learning of unknown classes." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 557, + 261, + 571 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 557, + 261, + 571 + ], + "spans": [ + { + "bbox": [ + 132, + 557, + 261, + 571 + ], + "type": "text", + "content": "3 Proposed Method" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 584, + 256, + 594 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 584, + 256, + 594 + ], + "spans": [ + { + "bbox": [ + 132, + 584, + 256, + 594 + ], + "type": "text", + "content": "3.1 Formula Definition" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": "As shown in Fig. 2, we introduce a Visual Association-based Zero-shot Object Counting framework (VA-Count) focusing on zero-shot, class-agnostic object counting. 
The categories among the training set " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{train}}" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": ", validation set " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{val}}" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": ", and testing set " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{test}}" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " are distinguished, ensuring no overlap among them (" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "C_{\\mathrm{train}} \\cap C_{\\mathrm{val}} \\cap C_{\\mathrm{test}} = \\emptyset" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": "). VA-Count generates density maps " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " from input images " + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 605, + 482, + 666 + ], + "type": "text", + "content": " for" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 132, + 129, + 482, + 398 + ], + "blocks": [ + { + "bbox": [ + 132, + 115, + 454, + 128 + ], + "lines": [ + { + "bbox": [ + 132, + 115, + 454, + 128 + ], + "spans": [ + { + "bbox": [ + 132, + 115, + 454, + 128 + ], + "type": "text", + "content": "Algorithm 1 Grounding DINO-Guided Exemplar Enhancement Module" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "lines": [ + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "spans": [ + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": "1: I: Input image \n2: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "T^p" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Positive text label (\\{specific class\\}), " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "T^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Negative text label (\"object\") \n3: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B^p" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Bounding boxes for positive samples, " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + 
], + "type": "inline_equation", + "content": "S^p" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Logits for positive samples \n4: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Bounding boxes for negative samples, " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "S^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Logits for negative samples \n5: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\tau_l" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Logits threshold, " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\tau_{\\mathrm{iou}}" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": IoU threshold \n6: M(\\cdot): Single Object Classifier \n7: Input: I, " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "T^p" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "T^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " \n8: Output: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{O}^p = \\{(B^p, S^p)\\}" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Positive outputs, " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{O}^n = \\{(B^n, S^n)\\}" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ": Negative outputs \n9: Grounding DINO Process: \n10: F ← ExtractFeatures(I) \n11: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "S^p, B^p \\gets \\text{Detect}(F, T^p)" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ", filter by " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\tau_l" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": "; and " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "S^n, B^n \\gets \\text{Detect}(F, T^n)" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ", filter by " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\tau_l" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " \n12: Dedduplication and Filtering: \n13: Initialize " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B_{\\text{filtered}}^n, B_{\\text{new}}^p, B_{\\text{new}}^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " \n14: for " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "b^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " in " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 
+ ], + "type": "text", + "content": " do ▷ Remove duplicates \n15: if " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "b^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " is unique in " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " with IoU < " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\tau_{\\mathrm{iou}}" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " then \n16: " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "B_{\\text{filtered}}^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": ".append" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "(b^n)" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " \n17: end if \n18: end for \n19: for all " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "b \\in B^p \\cup B_{\\text{filtered}}^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " do ▷ Single object filter \n20: if " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "M(b)" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " is true then \n21: Add " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "b" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " to the appropriate new set \n22: end if \n23: end for \n24: Update " + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "inline_equation", + "content": "\\mathcal{O}^p, \\mathcal{O}^n" + }, + { + "bbox": [ + 132, + 129, + 482, + 398 + ], + "type": "text", + "content": " with new sets" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "code_body" + } + ], + "index": 3, + "sub_type": "algorithm" + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "spans": [ + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "content": "any given class " + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "content": ", and counts objects using these density maps. Specifically, VA-Count utilizes pseudo-exemplars " + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "inline_equation", + "content": "E^p" + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "content": " to enhance image-text associations, acting as a bridge to establish robust visual correlations between " + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "inline_equation", + "content": "E^p" + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "content": " and the images " + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 428, + 483, + 501 + ], + "type": "text", + "content": ". To extract exemplars from images, we propose the use of two key modules: the Exemplar Enhancement Module (EEM) (cf. Sec. 
3.2) and the Noise Suppression Module (NSM) (cf. Sec. 3.3)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "spans": [ + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": "To alleviate the noise introduced by objects belonging to other classes on the target objects within images, the EEM and NSM are simultaneously used to obtain positive exemplars " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "B^{p}" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " and negative exemplars " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "B^{n}" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": ". The EEM consists of Grounding DINO " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "G(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " and a filtering module " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\varPhi(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": ". There are different filtering modules for positive and negative samples " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\varPhi^{p}(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\varPhi^{n}(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " respectively. " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\varPhi^{p}(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " is a binary classifier, while " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\varPhi^{n}(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " consists of a binary classifier and a deduplication module. The two kinds of pseudo-exemplars and images are then fed into the Counter " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\Gamma(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " simultaneously for correlation learning. " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\Gamma(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " comprises an image encoder, correlation module, and decoder. 
The optimization goal of this paper is as follows, where " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "\\mu(\\cdot)" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " denotes the similarity, and " + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "inline_equation", + "content": "D^{p}, D^{n}, D^{g}" + }, + { + "bbox": [ + 130, + 504, + 483, + 636 + ], + "type": "text", + "content": " represent the density maps for positive, negative, and ground truth respectively:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 190, + 652, + 482, + 667 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 652, + 482, + 667 + ], + "spans": [ + { + "bbox": [ + 190, + 652, + 482, + 667 + ], + "type": "interline_equation", + "content": "D ^ {p} = \\Gamma \\left(\\Phi^ {p} \\left(G \\left(I, T ^ {p}\\right)\\right)\\right), \\quad D ^ {n} = \\Gamma \\left(\\Phi^ {n} \\left(G \\left(I, T ^ {n}\\right)\\right)\\right), \\tag {1}", + "image_path": "c46175ac4596ce6fb3241c89906c036434991b4671ce066b7e331db299cad8ca.jpg" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 235, + 125, + 482, + 159 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 235, + 125, + 482, + 159 + ], + "spans": [ + { + "bbox": [ + 235, + 125, + 482, + 159 + ], + "type": "interline_equation", + "content": "\\text {O b j e c t i v e} = \\left\\{ \\begin{array}{l} \\max \\mu \\left(D ^ {p}, D ^ {g}\\right), \\\\ \\min \\mu \\left(D ^ {n}, D ^ {g}\\right). \\end{array} \\right. \\tag {2}", + "image_path": "9153266103424ddf32510dc44b06292384dd781d52cf484dcac044cde89a6d0d.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 172, + 324, + 185 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 172, + 324, + 185 + ], + "spans": [ + { + "bbox": [ + 132, + 172, + 324, + 185 + ], + "type": "text", + "content": "3.2 Exemplar Enhancement Module" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 191, + 482, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 191, + 482, + 289 + ], + "spans": [ + { + "bbox": [ + 130, + 191, + 482, + 289 + ], + "type": "text", + "content": "We introduce an Exemplar Enhancement Module (EEM) for detecting objects within images and refining the detected objects as target exemplars. The workflow of the EEM is outlined in Algorithm 1. The EEM ensures VA-Count's scalability to arbitrary classes by incorporating Vision-Language Pretaining (VLP) models (e.g., Grounding DINO [20]) for potential exemplar discovery, renowned for its efficiency in feature extraction and precision in object localization. 
Furthermore, the EEM involves meticulously discovering and refining potential exemplars to enhance the quality of positive and negative exemplars for precise object counting." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": "Grounding DINO-Guided Box Selection. Given the training set input image " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "I_{i}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": ", accompanied by predefined sets of positive text labels " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "T_{i}^{p} = \\{C_{i}\\}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " and negative text labels " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "T_{i}^{n} = \\text{\"object\"}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "C_i" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " represents the specified target class for the input image and " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "T_{i}^{n}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " is fixed as \"object\". These labels correspond to the target objects and the noise objects, respectively. Taking positive exemplar discovery as an example, Grounding DINO assigns logits value " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "S_{i}^{p} = \\{s_{i,j}\\}_{j=0}^{m}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " to all candidate bounding boxes " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "B_{i}^{p} = \\{b_{i,j}\\}_{j=0}^{m}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " based on " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "T_{i}^{p}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " denotes the number of candidate boxes within the image. 
For the " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": "-th box in the " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": "-th image, " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "s_{i,j}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " represents the likelihood that " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "b_{i,j}" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " belongs to the specified class text " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "C_i" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": ". The output of positive candidate boxes " + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "inline_equation", + "content": "\\mathcal{O}^p" + }, + { + "bbox": [ + 130, + 289, + 483, + 407 + ], + "type": "text", + "content": " can be formulated as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 225, + 416, + 482, + 432 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 416, + 482, + 432 + ], + "spans": [ + { + "bbox": [ + 225, + 416, + 482, + 432 + ], + "type": "interline_equation", + "content": "\\mathcal {O} ^ {p} = \\{G (I _ {i}, T _ {i} ^ {p}) \\} _ {i = 0} ^ {k} = \\{(B _ {i} ^ {p}, \\mathcal {S} _ {i} ^ {p}) \\} _ {i = 0} ^ {k}, \\tag {3}", + "image_path": "246737daefbde252ba3483faae1d43d835ead2c6b0547fac04e276298dc8ebaf.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 438, + 387, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 438, + 387, + 450 + ], + "spans": [ + { + "bbox": [ + 130, + 438, + 387, + 450 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 438, + 387, + 450 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 438, + 387, + 450 + ], + "type": "text", + "content": " denotes the number of images in the training set." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 451, + 482, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 451, + 482, + 510 + ], + "spans": [ + { + "bbox": [ + 130, + 451, + 482, + 510 + ], + "type": "text", + "content": "Negative Samples and Dedduplication. To minimize the impact of irrelevant classes on the counting accuracy of the target object, we adopt a filtering method for negative samples. Initially, we obtain all candidate bounding boxes for objects within each image. Similar to Eq. 
(3), the negative candidate boxes " + }, + { + "bbox": [ + 130, + 451, + 482, + 510 + ], + "type": "inline_equation", + "content": "\\mathcal{O}^n" + }, + { + "bbox": [ + 130, + 451, + 482, + 510 + ], + "type": "text", + "content": " without filtering can be formulated as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 221, + 518, + 482, + 534 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 221, + 518, + 482, + 534 + ], + "spans": [ + { + "bbox": [ + 221, + 518, + 482, + 534 + ], + "type": "interline_equation", + "content": "\\mathcal {O} ^ {n} = \\left\\{G \\left(I _ {i}, T _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k} = \\left\\{\\left(B _ {i} ^ {n}, \\mathcal {S} _ {i} ^ {n}\\right) \\right\\} _ {i = 0} ^ {k}, \\tag {4}", + "image_path": "56c5449266234b5a2c6abb81cb34335b3b6aae08b5ddb8fd9ff6f1908eff7460.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "spans": [ + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "text", + "content": "where for each image " + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "inline_equation", + "content": "I_{i}" + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "text", + "content": ", the term " + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "inline_equation", + "content": "T_{i}^{n} =" + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "text", + "content": " \"object\" is employed to identify and generate all bounding boxes " + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "inline_equation", + "content": "B^{n}" + }, + { + "bbox": [ + 130, + 541, + 482, + 577 + ], + "type": "text", + "content": " within that image. This method guarantees the detection of bounding boxes for all objects present in the image." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "spans": [ + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": "Then, for each image " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "I_{i}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": ", we assess each bounding box " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "b^{n}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": " from the negative candidate boxes " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "B^n" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": ", and each " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "b^{n}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": " is evaluated to determine its uniqueness in relation to the boxes within " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "B^{p}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": ". 
Specifically, a bounding box is deemed unique if its overlap with any box in " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "B^{p}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": " is minimal, based on the Intersection over Union (IoU) threshold " + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "inline_equation", + "content": "\\tau_{\\mathrm{iou}}" + }, + { + "bbox": [ + 130, + 578, + 482, + 637 + ], + "type": "text", + "content": ", which can be formulated as:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 250, + 646, + 482, + 669 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 646, + 482, + 669 + ], + "spans": [ + { + "bbox": [ + 250, + 646, + 482, + 669 + ], + "type": "interline_equation", + "content": "\\operatorname {I o U} \\left(B ^ {p}, B ^ {n}\\right) = \\frac {B ^ {p} \\cap B ^ {n}}{B ^ {p} \\cup B ^ {n}}, \\tag {5}", + "image_path": "6e575d8ba3c3408d8c139910dd34949c97f18df5daf9bf3bf5fb4b8e7497140a.jpg" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 481, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "B^p \\cap B^n" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "B^p \\cup B^n" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " denotes the intersection and union between positive " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "B^p" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " and negative " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "B^n" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " boxes. Unique negative boxes " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "b^n" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " are then included in the final set " + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "inline_equation", + "content": "B_{\\text{filtered}}^n" + }, + { + "bbox": [ + 130, + 116, + 479, + 152 + ], + "type": "text", + "content": " of negative exemplars." 
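The deduplication step above (Eq. (5) together with lines 14-18 of Algorithm 1) reduces to keeping only those boxes from the generic "object" query whose overlap with every class-specific box stays below the IoU threshold. The following is a minimal sketch of that step, assuming boxes are (x1, y1, x2, y2) tuples; the helper names and the tau_iou default of 0.5 (the value quoted in the implementation details later on) are illustrative, not the authors' released code.

# Sketch of the IoU-based deduplication of negative boxes (Eq. 5, Algorithm 1).
# Box format and function names are assumptions for illustration only.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_negatives(b_pos: List[Box], b_neg: List[Box], tau_iou: float = 0.5) -> List[Box]:
    """Keep negative boxes whose overlap with every positive box is below tau_iou."""
    return [bn for bn in b_neg if all(iou(bp, bn) < tau_iou for bp in b_pos)]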
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 152, + 480, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 152, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 130, + 152, + 480, + 248 + ], + "type": "text", + "content": "Single Object Exemplar Filtering. While DINO excels at identifying targets for arbitrary classes, each candidate box does not always contain a single object because boxes encompassing multiple objects may carry higher confidence levels than boxes of single objects. To ensure the integrity of the visual connections established with images, it's imperative to select exemplars that exclusively contain a single object. To achieve this, we treat singular discrimination as a binary classification task, using the binary classifier " + }, + { + "bbox": [ + 130, + 152, + 480, + 248 + ], + "type": "inline_equation", + "content": "\\delta(\\cdot)" + }, + { + "bbox": [ + 130, + 152, + 480, + 248 + ], + "type": "text", + "content": " to refine candidate bounding boxes, ensuring each exemplar contains a single object." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "spans": [ + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "text", + "content": "As shown in Fig. 3, " + }, + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "inline_equation", + "content": "\\delta(\\cdot)" + }, + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "text", + "content": " leverages a frozen Clip-vit backbone, integrated with a trainable Feed-Forward Network (FFN) for binary classification tasks. Training data is meticulously curated, consisting of samples of single and multiple objects. The labeled single-object samples are the exemplars in the training sets, and the labeled multi-object samples consist of randomly cropped patches and the entire image. To ensure that the class-agnostic counting is maintained, the training data is split for training and evaluation with disjoint samples, ensuring robust exemplar assessment. The classification results for positive candidate boxes " + }, + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "inline_equation", + "content": "b^{p} \\in B^{p}" + }, + { + "bbox": [ + 130, + 248, + 299, + 475 + ], + "type": "text", + "content": " can be formulated as:" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 308, + 272, + 476, + 421 + ], + "blocks": [ + { + "bbox": [ + 308, + 272, + 476, + 421 + ], + "lines": [ + { + "bbox": [ + 308, + 272, + 476, + 421 + ], + "spans": [ + { + "bbox": [ + 308, + 272, + 476, + 421 + ], + "type": "image", + "image_path": "1d2a5d26f6d67a0f1c228b374e846fa3da98af34ff3c22f4d036d8bd4fce9f35.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 304, + 428, + 482, + 473 + ], + "lines": [ + { + "bbox": [ + 304, + 428, + 482, + 473 + ], + "spans": [ + { + "bbox": [ + 304, + 428, + 482, + 473 + ], + "type": "text", + "content": "Fig. 3: Illustration of the single object exemplar filtering with a frozen Clip-vit encoder and a trainable FFN to distinguish single from multiple objects." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 146, + 485, + 299, + 498 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 485, + 299, + 498 + ], + "spans": [ + { + "bbox": [ + 146, + 485, + 299, + 498 + ], + "type": "interline_equation", + "content": "\\delta \\left(b ^ {p}\\right) = \\operatorname {F F N} \\left(\\operatorname {C l i p - v i t} \\left(b ^ {p}\\right)\\right), \\tag {6}", + "image_path": "e40ba82eef587fd70abe261b85c9a6a0b453ad44a0b5663dadd771b2908b4305.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "spans": [ + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "text", + "content": "and the filtered set " + }, + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "inline_equation", + "content": "B_{\\mathrm{new}}" + }, + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "text", + "content": " contains bounding boxes " + }, + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "inline_equation", + "content": "b^{p}" + }, + { + "bbox": [ + 130, + 508, + 480, + 532 + ], + "type": "text", + "content": " that are conditioned on the classification results, which can be formulated as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 238, + 542, + 482, + 556 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 542, + 482, + 556 + ], + "spans": [ + { + "bbox": [ + 238, + 542, + 482, + 556 + ], + "type": "interline_equation", + "content": "B _ {\\text {n e w}} ^ {p} \\leftarrow B _ {\\text {n e w}} ^ {p} \\cup \\{b | \\delta (b ^ {p}) = 1 \\}, \\tag {7}", + "image_path": "6dfabc19d677a54c5bf18c755461fedc17b216812b571b91aef9a7f44bc96d01.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "spans": [ + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "content": "where the symbol " + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "inline_equation", + "content": "\\leftarrow" + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "content": " signifies the update operation for the set " + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "inline_equation", + "content": "B_{\\mathrm{new}}^p" + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "content": ", and the set builder notation " + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "inline_equation", + "content": "\\{b|\\delta(b^p) = 1\\}" + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "content": " represents the collection of bounding boxes for which " + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "inline_equation", + "content": "\\delta(b^p)" + }, + { + "bbox": [ + 130, + 565, + 480, + 601 + ], + "type": "text", + "content": " predicts a positive outcome." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 131, + 620, + 294, + 632 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 620, + 294, + 632 + ], + "spans": [ + { + "bbox": [ + 131, + 620, + 294, + 632 + ], + "type": "text", + "content": "3.3 Noise Suppression Module" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "content": "In the context of the EEM, text-image alignment is redefined as object-image alignment by identifying positive " + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "inline_equation", + "content": "B^{p}" + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "content": " and negative " + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "inline_equation", + "content": "B^{n}" + }, + { + "bbox": [ + 130, + 641, + 480, + 665 + ], + "type": "text", + "content": " exemplars. We delves" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "content": "into generating positive and negative density maps and alleviating the noise introduced by the negative exemplars." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": "Initially, for each image " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "I_{i}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": ", we select the top three patches with the highest " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "S^p" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " from the positive candidate boxes " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "B_{\\mathrm{new}}^p" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " as positive exemplars " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "E^{p} = \\{b_{i}^{p}\\}_{i = 1}^{k}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " and the top three patches with the highest " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "S^n" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " from the negative candidate boxes " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "B_{\\mathrm{filtered}}^n" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " as negative exemplars " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "E^n = \\{b_i^n\\}_{i = 1}^k" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": ". Following CounTR [19], we build the Counter " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "\\Gamma (\\cdot)" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " with feature interaction to fuse information from both image encoders. 
Specifically, we merge encoder outputs by using image features as queries and the linear projections of sample features as keys and values, ensuring dimension consistency with image features, in accordance with the self-similarity principle in counting, which can be formulated as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 197, + 252, + 482, + 266 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 252, + 482, + 266 + ], + "spans": [ + { + "bbox": [ + 197, + 252, + 482, + 266 + ], + "type": "interline_equation", + "content": "\\boldsymbol {F} _ {\\text {f u s e}} = \\Gamma_ {\\text {f u s e}} \\left(\\boldsymbol {F} _ {\\text {q u e r y}}, \\boldsymbol {W} ^ {k} \\boldsymbol {F} _ {\\text {k e y}}, \\boldsymbol {W} ^ {v} \\boldsymbol {F} _ {\\text {v a l u e}}\\right) \\in \\mathbb {R} ^ {M \\times D}, \\tag {8}", + "image_path": "851f2593330211bc0a261144f7f6b3b81bf27e388e065fc2a29cf1df9d2b3364.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "spans": [ + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "\\pmb{F}" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " denotes the feature representations, " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "\\pmb{W}^k" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "\\pmb{W}^v" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " are the learnable weights for keys and values from " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "\\{E^p,E^n\\}" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " denotes the number of tokens, " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " is the feature dimensionality, and " + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "inline_equation", + "content": "\\mathbb{R}^{M\\times D}" + }, + { + "bbox": [ + 130, + 270, + 480, + 330 + ], + "type": "text", + "content": " the space of the feature matrix. The decoder outputs the density heatmap after up-sampling the fused features to the input image's dimensions:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 210, + 335, + 482, + 349 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 210, + 335, + 482, + 349 + ], + "spans": [ + { + "bbox": [ + 210, + 335, + 482, + 349 + ], + "type": "interline_equation", + "content": "D _ {i} ^ {n} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {n}\\right), \\quad D _ {i} ^ {p} = \\Gamma_ {\\text {d e c o d e}} \\left(\\boldsymbol {F} _ {\\text {f u s e}} ^ {p}\\right). 
\\tag {9}", + "image_path": "ffa92046636c7180eca00d627e1cb4d55d6555466dfc2ad5f145ee7433861055.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 353, + 482, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 353, + 482, + 460 + ], + "spans": [ + { + "bbox": [ + 130, + 353, + 482, + 460 + ], + "type": "text", + "content": "Contrastive Learning and Loss Functions. The objective of the NSM in VA-Count is to reduce the impact of noise in images on counting performance while ensuring the accuracy of density map predictions. To achieve this, a contrastive loss " + }, + { + "bbox": [ + 130, + 353, + 482, + 460 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_C" + }, + { + "bbox": [ + 130, + 353, + 482, + 460 + ], + "type": "text", + "content": " is proposed, using specified class density maps as positive samples and non-specified class density maps as negative samples. This involves maximizing the similarity between positive density maps and the ground-truth density maps and minimizing the similarity between negative density maps and the ground-truth density maps, as detailed in Eq. (10). To guide density map generation, we use the loss method from CounTR [19]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "spans": [ + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": "The density loss " + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_D" + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": " is calculated as the mean squared error between each pixel of the density map " + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "inline_equation", + "content": "D_i^p" + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": " generated for positive samples and the ground-truth density map " + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "inline_equation", + "content": "D_i^g" + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": ", as shown in Eq. (11). " + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 130, + 460, + 482, + 508 + ], + "type": "text", + "content": " respectively denote the height and width of the density map." 
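Taken together, Eqs. (10)-(12) amount to a two-term objective: a contrastive term that favours the positive density map over the negative one with respect to the ground truth, plus a per-pixel MSE term on the positive map. The sketch below follows those equations directly; reading the similarity mu(.) as cosine similarity over flattened maps is an assumption, as are the function names.

# Sketch of the NSM losses (Eq. 10-12) for density maps shaped (B, H, W) or (B, 1, H, W).
import torch
import torch.nn.functional as F

def contrastive_loss(d_pos, d_gt, d_neg):
    """Eq. (10): -log( exp(sim(Dp,Dg)) / (exp(sim(Dp,Dg)) + exp(sim(Dn,Dg))) )."""
    sim_pg = F.cosine_similarity(d_pos.flatten(1), d_gt.flatten(1), dim=1)   # (B,)
    sim_ng = F.cosine_similarity(d_neg.flatten(1), d_gt.flatten(1), dim=1)   # (B,)
    logits = torch.stack([sim_pg, sim_ng], dim=1)                            # (B, 2)
    targets = torch.zeros(len(d_pos), dtype=torch.long, device=d_pos.device) # positive is class 0
    return F.cross_entropy(logits, targets)

def density_loss(d_pos, d_gt):
    """Eq. (11): mean squared error per pixel between predicted and GT density maps."""
    return F.mse_loss(d_pos, d_gt)

def total_loss(d_pos, d_gt, d_neg):
    """Eq. (12): L_total = L_C + L_D."""
    return contrastive_loss(d_pos, d_gt, d_neg) + density_loss(d_pos, d_gt)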
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 172, + 513, + 482, + 538 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 172, + 513, + 482, + 538 + ], + "spans": [ + { + "bbox": [ + 172, + 513, + 482, + 538 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {C} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = - \\log \\frac {\\exp \\sin \\left(D ^ {p} , D ^ {g}\\right)}{\\exp \\sin \\left(D ^ {p} , D ^ {g}\\right) + \\exp \\sin \\left(D ^ {n} , D ^ {g}\\right)}, \\tag {10}", + "image_path": "a842904c500f37a9109bf0ea7086c9186287c1ea88e57939d90528ed1b73fa6a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 225, + 545, + 482, + 567 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 545, + 482, + 567 + ], + "spans": [ + { + "bbox": [ + 225, + 545, + 482, + 567 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {D} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}\\right) = \\frac {1}{H W} \\sum \\left\\| D _ {i} ^ {p} - D _ {i} ^ {g} \\right\\| _ {2} ^ {2}, \\tag {11}", + "image_path": "57b3b41ee0ff43b5709c22cf10cd16449ac14b47d3a5c92a08fb93e065e9bf04.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 238, + 578, + 481, + 590 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 578, + 481, + 590 + ], + "spans": [ + { + "bbox": [ + 238, + 578, + 481, + 590 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\text {t o t a l}} \\left(D _ {i} ^ {p}, D _ {i} ^ {g}, D _ {i} ^ {n}\\right) = \\mathcal {L} _ {C} + \\mathcal {L} _ {D}. \\tag {12}", + "image_path": "99214b965d2253cef9412aa8e10188e58cd20b3f32802494392ff7a32e6273f1.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 131, + 604, + 277, + 617 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 604, + 277, + 617 + ], + "spans": [ + { + "bbox": [ + 131, + 604, + 277, + 617 + ], + "type": "text", + "content": "4 Experimental Result" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 131, + 625, + 349, + 637 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 625, + 349, + 637 + ], + "spans": [ + { + "bbox": [ + 131, + 625, + 349, + 637 + ], + "type": "text", + "content": "4.1 Datasets and Implementation Details" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": "Datasets. FSC-147 [10] dataset is tailored for class-agnostic counting with 6,135 images and 147 classes. 
Unique for its non-overlapping class subsets, it" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 448, + 103 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 139 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 139 + ], + "type": "text", + "content": "provides class labels and dot annotations for zero-shot counting using textual prompts." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 479, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 479, + 163 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 479, + 163 + ], + "type": "text", + "content": "CARPK [11] dataset offers a bird's-eye view of 89,777 cars in 1,448 parking lot images, testing the method's cross-dataset transferability and adaptability." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 164, + 480, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 164, + 480, + 209 + ], + "spans": [ + { + "bbox": [ + 130, + 164, + 480, + 209 + ], + "type": "text", + "content": "Evaluation Metrics. Following previous class-agnostic object counting methods [29], the evaluation metrics employed are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). MAE is widely used to assess model accuracy, while RMSE evaluates model robustness." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "spans": [ + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "content": "Exemplar Enhancement Module uses Grounding DINO" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "inline_equation", + "content": "^7" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "content": " for bounding box proposals, setting the threshold " + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "inline_equation", + "content": "\\tau_{l}" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "content": " to 0.02. For negative sample filtering, the IoU threshold " + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "inline_equation", + "content": "\\tau_{\\mathrm{iou}}" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "content": " is set to 0.5. The single object classifier employs CLIP ViT-B/16" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "inline_equation", + "content": "^8" + }, + { + "bbox": [ + 130, + 212, + 480, + 269 + ], + "type": "text", + "content": " as its backbone, with an FFN comprising two linear layers, trained over 100 epochs at a learning rate of e-4. 
The dataset is partitioned in a 7:3 ratio" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 272, + 480, + 305 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 272, + 480, + 305 + ], + "spans": [ + { + "bbox": [ + 130, + 272, + 480, + 305 + ], + "type": "text", + "content": "Noise Suppression Module follows CounTR's [19] two-stage training: MAE pretraining and AdamW [25]-optimized fine-tuning. It is trained on FSC-147 with a learning rate of " + }, + { + "bbox": [ + 130, + 272, + 480, + 305 + ], + "type": "inline_equation", + "content": "10^{-5}" + }, + { + "bbox": [ + 130, + 272, + 480, + 305 + ], + "type": "text", + "content": ", batch size of 8, on an NVIDIA RTX L40 GPU." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 131, + 327, + 356, + 339 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 327, + 356, + 339 + ], + "spans": [ + { + "bbox": [ + 131, + 327, + 356, + 339 + ], + "type": "text", + "content": "4.2 Comparison with the State-of-the-Arts" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 347, + 480, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 347, + 480, + 393 + ], + "spans": [ + { + "bbox": [ + 130, + 347, + 480, + 393 + ], + "type": "text", + "content": "For the performance evaluation of our method, it is benchmarked against a variety of state-of-the-art few-shot and zero-shot counting methods on FSC-147. Additionally, we evaluate our method in comparison with class-specific counting models on CARPK." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 396, + 480, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 396, + 480, + 491 + ], + "spans": [ + { + "bbox": [ + 130, + 396, + 480, + 491 + ], + "type": "text", + "content": "Quantitative Result on FSC-147. We evaluate the effectiveness of VA-Count on FSC-147, comparing it with state-of-the-art counting methods as detailed in Tab. 1. Our method surpasses the exemplar-discovery method ZSC [45], demonstrating that the exemplars found by VA-Count are of higher quality. VA-Count achieves the best performance in MAE and second in RMSE, validating our method's effectiveness. Despite being second in RMSE, it still outperforms ZSC. In comparison with CLIP-Count [13], VA-Count, due to some noise introduction, has a few inferior samples but, overall, surpasses CLIP-Count in performance." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 491, + 480, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 491, + 480, + 586 + ], + "spans": [ + { + "bbox": [ + 130, + 491, + 480, + 586 + ], + "type": "text", + "content": "Quantitative Result on CARPK. In Tab. 2, VA-Count's cross-domain and non-cross-domain performance on CARPK are compared with previous methods. In the zero-shot group, VA-Count achieves the best performance, particularly with its cross-domain performance methoding that of the few-shot group, demonstrating its outstanding transferability. It is worth noting that employing " + }, + { + "bbox": [ + 130, + 491, + 480, + 586 + ], + "type": "inline_equation", + "content": "\\varPhi(\\cdot)" + }, + { + "bbox": [ + 130, + 491, + 480, + 586 + ], + "type": "text", + "content": " significantly reduces errors compared to directly using the Grounding DINO [20] method. In the absence of any training data, VA-Count outperforms FamNet [33] in the cross-domain group." 
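For reference, the MAE and RMSE figures used in these comparisons can be computed from per-image counts, where a predicted count is conventionally the sum of the predicted density map. The snippet below is a generic sketch of that protocol rather than the authors' evaluation script; the variable and function names are illustrative.

# Sketch of the MAE / RMSE evaluation protocol for density-map counting.
import numpy as np

def count_from_density(density_map: np.ndarray) -> float:
    """Predicted count = sum over the predicted density map."""
    return float(density_map.sum())

def mae_rmse(pred_counts, gt_counts):
    """Mean Absolute Error and Root Mean Square Error over per-image counts."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    err = pred - gt
    return float(np.abs(err).mean()), float(np.sqrt((err ** 2).mean()))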
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 587, + 479, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 587, + 479, + 634 + ], + "spans": [ + { + "bbox": [ + 130, + 587, + 479, + 634 + ], + "type": "text", + "content": "Ablation Study. We conduct both quantitative and qualitative analyses on the contributions of each component in our proposed VA-Count, which includes the Grounding-DINO candidate box extraction and filtering module. The quantitative outcomes are presented in Tab. 3. Using only Grounding DINO method" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 642, + 361, + 653 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 642, + 361, + 653 + ], + "spans": [ + { + "bbox": [ + 133, + 642, + 361, + 653 + ], + "type": "text", + "content": "7 https://github.com/IDEA-Research/GroundingDINO" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 134, + 653, + 279, + 665 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 653, + 279, + 665 + ], + "spans": [ + { + "bbox": [ + 134, + 653, + 279, + 665 + ], + "type": "text", + "content": "8 https://github.com/openai/CLIP" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 168, + 481, + 411 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 482, + 159 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 482, + 159 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 482, + 159 + ], + "type": "text", + "content": "Table 1: Quantitative results of our VA-Count and other state-of-the-art competitors on FSC-147. F-S, R-F, and Z-S are abbreviated for Few-shot, Reference-free, and Zero-shot settings. Best results for each scheme and the second-best results at the zero-shot setting are highlighted in bold and underline." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 168, + 481, + 411 + ], + "lines": [ + { + "bbox": [ + 133, + 168, + 481, + 411 + ], + "spans": [ + { + "bbox": [ + 133, + 168, + 481, + 411 + ], + "type": "table", + "html": "
SchemeMethodVenueShotVal SetTest SetAvg
MAERMSEMAERMSEMAERMSE
F-SFamNet [33]CVPR'21324.3270.9422.56101.5423.4486.24
CFOCNet [46]WACV'21321.1961.4122.10112.7121.6587.06
CounTR [19]BMVC'22313.1349.8311.9591.2312.5470.53
LOCA [41]ICCV'23310.2432.5610.9756.9710.6144.77
SAM [36]WACV'243--19.95132.1619.95132.16
PseCo [12]CVPR'24315.3168.3413.05112.8614.1890.60
CACViT [42]AAAI'24310.6337.959.1348.969.8843.46
FamNet [33]CVPR'21126.0577.0126.76110.9526.4193.98
R-FFamNet [33]CVPR'21032.1598.7532.27131.4632.21115.11
RepRPN-C [32]ACCV'22029.2498.1126.66129.1127.95113.61
CounTR [19]BMVC'22018.0771.8414.71106.8716.3989.36
RCC [10]CVPR'23017.4958.8117.12104.5317.3181.67
LOCA [41]ICCV'23017.4354.9616.22103.9616.8379.46
Z-SZSC [45]CVPR'23026.9388.6322.09115.1724.51101.90
CLIP-Count [13]MM'23018.7961.1817.78106.6218.28583.90
PseCo [12]CVPR'24023.90100.3316.58129.7720.24115.05
VA-CountOurs017.8773.2217.88129.3117.87101.26
", + "image_path": "bc800a4b540758ef1feb6691023a357ec2f832914e186e0cfa16f7c02cd017e8.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 434, + 482, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 434, + 482, + 518 + ], + "spans": [ + { + "bbox": [ + 132, + 434, + 482, + 518 + ], + "type": "text", + "content": "(first row) achieves an error of 52.82 without training, which, although not as accurate as regression-based methods, ensures the detection of relevant objects. Performance improves slightly after adding a single-object classification filter (second row). With training based on " + }, + { + "bbox": [ + 132, + 434, + 482, + 518 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_D" + }, + { + "bbox": [ + 132, + 434, + 482, + 518 + ], + "type": "text", + "content": ", it already meets counting requirements. In Tab. 2, we compare using Grounding DINO alone and with a single-object classification filter on CARPK (last three rows). Our binary classifier significantly improves performance, reducing MAE and RMSE by about 10." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 536, + 262, + 548 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 536, + 262, + 548 + ], + "spans": [ + { + "bbox": [ + 133, + 536, + 262, + 548 + ], + "type": "text", + "content": "4.3 Qualitative Analysis" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 558, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 558, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 558, + 482, + 665 + ], + "type": "text", + "content": "Analysis of the zero-shot performance. To further ensure the effectiveness of the proposed VA-Count framework, we visualize qualitative results in Fig. 4. We provide a side-by-side comparison of the proposed VA-Count against the few-shot counting method [19]. VA-Count achieves a remarkable resemblance to the ground truth, showcasing the method's nuanced understanding of object boundaries and densities and being less affected by the background noise. Specifically, the first row shows there exists a golden egg drowned by white eggs. The few-shot method struggled with this nuanced differentiation, failing to recognize the golden egg distinctly. 
In the second row, strawberries near flowers also confound the few-shot" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 245, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 245, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 157, + 478, + 301 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": "Table 2: Quantitative results of our VA-Count and other state-of-the-art competitors on CARPK. " + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "inline_equation", + "content": "\\varPhi(\\cdot)" + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": " denotes the single-object classification filter. C and F denote CARPK and FSC-147, respectively." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 157, + 478, + 301 + ], + "lines": [ + { + "bbox": [ + 133, + 157, + 478, + 301 + ], + "spans": [ + { + "bbox": [ + 133, + 157, + 478, + 301 + ], + "type": "table", + "html": "
MethodsVenueShotC → CF → C
MAERMSEMAERMSE
FamNet [33]CVPR'21318.1933.6628.8444.47
GMN [26]CVPR'2137.489.90--
BMNet+ [35]CVPR'2235.767.8310.4413.77
CounTR [19]BMVC'2235.757.45--
RCC [10]CVPR'2309.2111.3321.3826.61
CLIP-Count [13]MM'230--11.9616.61
Grounding DINO [20]arXiv'24029.7231.6029.7231.60
Grounding DINO + Φ(·)Ours018.5421.7118.5421.71
VA-CountOurs08.7510.3010.6313.20
", + "image_path": "1fcf8d1de462b532980f2f158bcfeed57a47bc7acb0fe35ab00b3046f4f87284.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 143, + 366, + 468, + 451 + ], + "blocks": [ + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "lines": [ + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "text", + "content": "Table 3: Ablation study on each component's contribution to the final results on FSC-147. We demonstrate the effectiveness of two parts of our framework and two types of loss: " + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "inline_equation", + "content": "G(\\cdot)" + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "text", + "content": " for Grounding DINO, " + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "inline_equation", + "content": "\\varPhi(\\cdot)" + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "text", + "content": " for the single-object filtering section, the density loss " + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_D" + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "text", + "content": ", and the contrastive loss " + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_C" + }, + { + "bbox": [ + 130, + 312, + 482, + 357 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 366, + 468, + 451 + ], + "lines": [ + { + "bbox": [ + 143, + 366, + 468, + 451 + ], + "spans": [ + { + "bbox": [ + 143, + 366, + 468, + 451 + ], + "type": "table", + "html": "
G(·)φ(·)LDLCVal SetTest Set
MAERMSEMAERMSE
52.82134.4954.48159.30
52.12135.2954.27159.76
19.6373.9418.93116.65
17.8773.2217.88129.31
", + "image_path": "85e4542d8aae4b32c2a561826655ca94e7b491d7244e2d3da03e6fab81696126.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 473, + 482, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 473, + 482, + 593 + ], + "spans": [ + { + "bbox": [ + 130, + 473, + 482, + 593 + ], + "type": "text", + "content": "method. These examples emphasize VA-Count's superior ability to identify and differentiate between objects with minor differences. The third row presents a challenging scenario with dense keys partially occluded by hands. This situation tests the model's ability to count tiny, closely situated objects under partial occlusion, showcasing VA-Count's advanced capability to accurately identify and count such challenging objects, which is significantly better than the few-shot method. These results highlight the impact of exemplar selection and the incorporation of negative patches in VA-Count, significantly enhancing its object counting and localization capabilities, and showcasing its innovation in zero-shot object counting." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "content": "Analysis of Positive and Negative Exemplars. To make our experiment more straightforward, we also conduct a qualitative analysis of the patch selection. As shown in Fig. 5 and Fig. 6, we illustrate selected positive and negative patches for various categories under a zero-shot setting. Taking a closer look at the positive patches for categories such as crab cakes and green peas, the results show a high degree of accuracy in the model's ability to isolate and highlight the regions" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 101 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 136, + 118, + 479, + 262 + ], + "blocks": [ + { + "bbox": [ + 136, + 118, + 479, + 262 + ], + "lines": [ + { + "bbox": [ + 136, + 118, + 479, + 262 + ], + "spans": [ + { + "bbox": [ + 136, + 118, + 479, + 262 + ], + "type": "image", + "image_path": "1c22ff0b32e2acf775447d31e4fee0243f2bb657543e369a64bc9e81e7b23d7f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 277, + 483, + 301 + ], + "lines": [ + { + "bbox": [ + 132, + 277, + 483, + 301 + ], + "spans": [ + { + "bbox": [ + 132, + 277, + 483, + 301 + ], + "type": "text", + "content": "Fig. 4: Illustration of heatmaps compared with few-shot method [19] on FSC-147. Predicted density map is overlaid on the original RGB image. 
(Best viewed zoomed in)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 136, + 317, + 479, + 469 + ], + "blocks": [ + { + "bbox": [ + 136, + 317, + 479, + 469 + ], + "lines": [ + { + "bbox": [ + 136, + 317, + 479, + 469 + ], + "spans": [ + { + "bbox": [ + 136, + 317, + 479, + 469 + ], + "type": "image", + "image_path": "0d246025080979b318b5e0ba1f9fec8f92f20ee187876a5bfe402eea4ee12e6f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 479, + 482, + 491 + ], + "lines": [ + { + "bbox": [ + 132, + 479, + 482, + 491 + ], + "spans": [ + { + "bbox": [ + 132, + 479, + 482, + 491 + ], + "type": "text", + "content": "Fig. 5: Illustration of the positive (Pos.) and negative (Neg.) exemplars on FSC-147." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 519, + 482, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 519, + 482, + 615 + ], + "spans": [ + { + "bbox": [ + 130, + 519, + 482, + 615 + ], + "type": "text", + "content": "containing the target objects. This precision underscores the effectiveness of the VA-Count framework in discerning relevant features amidst complex backgrounds, affirming its robustness in exemplar discovery. Negative patches, especially from categories like strawberries and crab cakes, highlight the model's challenges with visually similar or overlapping areas not in the target category, underscoring the need for improved discriminative abilities. This analysis underscores our paper's impact on zero-shot object counting and the importance of refining visual learning and exemplar selection for future advancements." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 617, + 483, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 483, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 483, + 666 + ], + "type": "text", + "content": "Effectiveness of the object exemplar filter. The effectiveness of the object exemplar filter is further evaluated by comparing visualized grounding results with and without the filter. Fig. 7 illustrates this comparison for the category of cars on CARPK. 
Images without the filter show multiple cars within a single" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 116, + 218, + 175 + ], + "blocks": [ + { + "bbox": [ + 134, + 116, + 218, + 175 + ], + "lines": [ + { + "bbox": [ + 134, + 116, + 218, + 175 + ], + "spans": [ + { + "bbox": [ + 134, + 116, + 218, + 175 + ], + "type": "image", + "image_path": "334ca83d047f24e24263af78e29ddb2f2bdca53ef285741f6b906daddddca24f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 219, + 133, + 231, + 141 + ], + "lines": [ + { + "bbox": [ + 219, + 133, + 231, + 141 + ], + "spans": [ + { + "bbox": [ + 219, + 133, + 231, + 141 + ], + "type": "text", + "content": "Pos." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 234, + 124, + 304, + 176 + ], + "blocks": [ + { + "bbox": [ + 234, + 124, + 304, + 176 + ], + "lines": [ + { + "bbox": [ + 234, + 124, + 304, + 176 + ], + "spans": [ + { + "bbox": [ + 234, + 124, + 304, + 176 + ], + "type": "image", + "image_path": "64e0ddca09e4fc9b8f2d65940b5aa0dee89f59057a1abdfa9c71184970ca651e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 131, + 186, + 480, + 209 + ], + "lines": [ + { + "bbox": [ + 131, + 186, + 480, + 209 + ], + "spans": [ + { + "bbox": [ + 131, + 186, + 480, + 209 + ], + "type": "text", + "content": "Fig. 6: Illustration of the final positive (Pos.) and negative (Neg.) exemplars for images on CARPK." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 306, + 116, + 394, + 175 + ], + "blocks": [ + { + "bbox": [ + 306, + 116, + 394, + 175 + ], + "lines": [ + { + "bbox": [ + 306, + 116, + 394, + 175 + ], + "spans": [ + { + "bbox": [ + 306, + 116, + 394, + 175 + ], + "type": "image", + "image_path": "b9507ee3d3fee208ef7dbd4e764d2797cea354905eb16a6fa4668b87b7d3320c.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 394, + 133, + 408, + 141 + ], + "lines": [ + { + "bbox": [ + 394, + 133, + 408, + 141 + ], + "spans": [ + { + "bbox": [ + 394, + 133, + 408, + 141 + ], + "type": "text", + "content": "Pos." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 410, + 124, + 479, + 175 + ], + "blocks": [ + { + "bbox": [ + 410, + 124, + 479, + 175 + ], + "lines": [ + { + "bbox": [ + 410, + 124, + 479, + 175 + ], + "spans": [ + { + "bbox": [ + 410, + 124, + 479, + 175 + ], + "type": "image", + "image_path": "764fe6604ec99ed48edb1936e1211935050102981caf9145a9ffbaf6ef4139d3.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 136, + 225, + 250, + 310 + ], + "blocks": [ + { + "bbox": [ + 136, + 225, + 250, + 310 + ], + "lines": [ + { + "bbox": [ + 136, + 225, + 250, + 310 + ], + "spans": [ + { + "bbox": [ + 136, + 225, + 250, + 310 + ], + "type": "image", + "image_path": "9eba14f7ed8cb60de40d8c79a2ece9b8e62a41c3f0bca7c9f06b48782abf4ed4.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 131, + 319, + 480, + 341 + ], + "lines": [ + { + "bbox": [ + 131, + 319, + 480, + 341 + ], + "spans": [ + { + "bbox": [ + 131, + 319, + 480, + 341 + ], + "type": "text", + "content": "Fig. 7: Illustration of candidate boxes before and after exemplar filter for images on CARPK." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 251, + 225, + 364, + 308 + ], + "blocks": [ + { + "bbox": [ + 251, + 225, + 364, + 308 + ], + "lines": [ + { + "bbox": [ + 251, + 225, + 364, + 308 + ], + "spans": [ + { + "bbox": [ + 251, + 225, + 364, + 308 + ], + "type": "image", + "image_path": "ef39bb910e527418d63f3ccf849fa6cfb8c1d02b9a605299bdd35f823d0adf06.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 365, + 244, + 478, + 308 + ], + "blocks": [ + { + "bbox": [ + 365, + 244, + 478, + 308 + ], + "lines": [ + { + "bbox": [ + 365, + 244, + 478, + 308 + ], + "spans": [ + { + "bbox": [ + 365, + 244, + 478, + 308 + ], + "type": "image", + "image_path": "6ea60153dbf21e8f0398f553ecb771c32942188b4fd53e3ba739bb3726f61544.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 368, + 482, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 368, + 482, + 453 + ], + "spans": [ + { + "bbox": [ + 130, + 368, + 482, + 453 + ], + "type": "text", + "content": "bounding box, indicating Grounding DINO's [20] inability to isolate individual objects effectively. Conversely, images with the filter applied demonstrate a significant improvement, with bounding boxes accurately encompassing single cars. This clear distinction highlights the binary classifier's crucial role in ensuring precise object counting by enforcing the single-object criterion within each exemplar, validating the filter's contribution to enhancing the model's accuracy and reliability in VA-Count framework." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 478, + 220, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 478, + 220, + 491 + ], + "spans": [ + { + "bbox": [ + 132, + 478, + 220, + 491 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 510, + 483, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 483, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 483, + 666 + ], + "type": "text", + "content": "This paper addresses the challenges in class-agnostic object counting by introducing the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count effectively balances the need for scalability across arbitrary classes with the establishment of robust visual connections, overcoming the limitations of existing Zero-shot Object Counting (ZOC) methods. VA-Count comprises an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM), which are dedicated to refining exemplar identification and mitigating adverse impacts, respectively. The EEM utilizes advanced Vision-Language Pre-taining models like Grounding DINO for scalable exemplar discovery, while the NSM mitigates the impact of erroneous exemplars through contrastive learning. VA-Count shows promise in zero-shot counting, performing well on three datasets and offering precise visual associations and scalability. In the future, we will explore and better utilize advanced visual language models." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "text", + "content": "H. Zhu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 114, + 240, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 114, + 240, + 129 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 240, + 129 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 213 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 213 + ], + "type": "text", + "content": "This work was supported in part by the National Natural Science Foundation of China under Grant 62271361, the Sanya Yazhou Bay Science and Technology City Administration scientific research project under Grant 2022KF0021, the Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2023B1515020097, and the National Research Foundation Singapore under the AI Singapore Programme under Grant AISG3-GV-2023-011." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 232, + 198, + 244 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 232, + 198, + 244 + ], + "spans": [ + { + "bbox": [ + 132, + 232, + 198, + 244 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 257, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 138, + 257, + 481, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 257, + 481, + 280 + ], + "spans": [ + { + "bbox": [ + 138, + 257, + 481, + 280 + ], + "type": "text", + "content": "1. Arteta, C., Lempitsky, V.S., Zisserman, A.: Counting in the wild. In: Proc. Eur. Conf. Comput. Vis. pp. 483-498 (2016)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 281, + 481, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 281, + 481, + 312 + ], + "spans": [ + { + "bbox": [ + 138, + 281, + 481, + 312 + ], + "type": "text", + "content": "2. Bai, Y., Cao, M., Gao, D., Cao, Z., Chen, C., Fan, Z., Nie, L., Zhang, M.: RaSa: Relation and sensitivity aware representation learning for text-based person search. In: Proc. Int. Joint Conf. Artif. Intell. pp. 555-563 (2023)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 313, + 481, + 334 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 313, + 481, + 334 + ], + "spans": [ + { + "bbox": [ + 138, + 313, + 481, + 334 + ], + "type": "text", + "content": "3. Bansal, A., Sikka, K., Sharma, G., Chellappa, R., Divakaran, A.: Zero-shot object detection. In: Proc. Eur. Conf. Comput. Vis. pp. 397-414 (2018)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 335, + 481, + 367 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 335, + 481, + 367 + ], + "spans": [ + { + "bbox": [ + 138, + 335, + 481, + 367 + ], + "type": "text", + "content": "4. Chai, L., Liu, Y., Liu, W., Han, G., He, S.: CrowdGAN: Identity-free interactive crowd video generation and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 44(6), 2856-2871 (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 368, + 481, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 368, + 481, + 400 + ], + "spans": [ + { + "bbox": [ + 138, + 368, + 481, + 400 + ], + "type": "text", + "content": "5. Chen, C., Ye, M., Jiang, D.: Towards modality-agnostic person re-identification with descriptive query. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15128-15137 (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 401, + 481, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 401, + 481, + 434 + ], + "spans": [ + { + "bbox": [ + 138, + 401, + 481, + 434 + ], + "type": "text", + "content": "6. Dou, Z., Kamath, A., Gan, Z., Zhang, P., Wang, J., Li, L., Liu, Z., Liu, C., LeCun, Y., Peng, N., Gao, J., Wang, L.: Coarse-to-fine vision-language pre-training with fusion in the backbone. In: Adv. Neural Inf. Process. Syst. pp. 32942-32956 (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "spans": [ + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "type": "text", + "content": "7. 
Du, Y., Wei, F., Zhang, Z., Shi, M., Gao, Y., Li, G.: Learning to prompt for open-vocabulary object detection with vision-language model. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 14084-14093 (2022)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 468, + 481, + 489 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 468, + 481, + 489 + ], + "spans": [ + { + "bbox": [ + 138, + 468, + 481, + 489 + ], + "type": "text", + "content": "8. Gong, S., Zhang, S., Yang, J., Dai, D., Schiele, B.: Class-agnostic object counting robust to intraclass diversity. In: Proc. Eur. Conf. Comput. Vis. pp. 388-403 (2022)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "spans": [ + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "type": "text", + "content": "9. He, S., Chen, W., Wang, K., Luo, H., Wang, F., Jiang, W., Ding, H.: Region generation and assessment network for occluded person re-identification. IEEE Trans. Inf. Forensics Secur. 19, 120–132 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "spans": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "text", + "content": "0. Hobley, M., Prisacariu, V.: Learning to count anything: Reference-less class-agnostic counting with weak supervision. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (2023)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 555, + 481, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 555, + 481, + 587 + ], + "spans": [ + { + "bbox": [ + 138, + 555, + 481, + 587 + ], + "type": "text", + "content": "1. Hsieh, M., Lin, Y., Hsu, W.H.: Drone-based object counting by spatially regularized regional proposal network. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 4165-4173 (2017)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 588, + 481, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 588, + 481, + 610 + ], + "spans": [ + { + "bbox": [ + 138, + 588, + 481, + 610 + ], + "type": "text", + "content": "2. Huang, Z., Dai, M., Zhang, Y., Zhang, J., Shan, H.: Point, segment and count: A generalized framework for object counting. arXiv:2311.12386 (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "type": "text", + "content": "3. Jiang, R., Liu, L., Chen, C.: CLIP-Count: Towards text-guided zero-shot object counting. In: Proc. ACM Multimedia. pp. 4535-4545 (2023)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 633, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 633, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 633, + 481, + 665 + ], + "type": "text", + "content": "4. Kang, S., Moon, W., Kim, E., Heo, J.: VLCounter: Text-aware visual representation for zero-shot object counting. In: Proc. AAAI Conf. Artif. Intell. pp. 
2714-2722 (2024)" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 482, + 666 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 133, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 482, + 149 + ], + "type": "text", + "content": "15. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W., Dollár, P., Girshick, R.B.: Segment anything. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 3992-4003 (2023)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 150, + 482, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 150, + 482, + 183 + ], + "spans": [ + { + "bbox": [ + 133, + 150, + 482, + 183 + ], + "type": "text", + "content": "16. Li, J., Li, D., Savarese, S., Hoi, S.C.H.: BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In: Proc. Int. Conf. Mach. Learn. pp. 19730-19742 (2023)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 183, + 482, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 482, + 215 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 482, + 215 + ], + "type": "text", + "content": "17. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Proc. Int. Conf. Mach. Learn. pp. 12888-12900 (2022)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "type": "text", + "content": "18. Li, S., Sun, L., Li, Q.: CLIP-ReID: Exploiting vision-language model for image re-identification without concrete text labels. In: Proc. AAAI Conf. Artif. Intell. pp. 1405-1413 (2023)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 249, + 482, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 482, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 482, + 270 + ], + "type": "text", + "content": "19. Liu, C., Zhong, Y., Zisserman, A., Xie, W.: CounTR: Transformer-based generalised visual counting. In: Proc. Brit. Mach. Vis. Conf. p. 370 (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 271, + 482, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 271, + 482, + 303 + ], + "spans": [ + { + "bbox": [ + 132, + 271, + 482, + 303 + ], + "type": "text", + "content": "20. 
Liu, S., Zeng, Z., Ren, T., Li, F., Zhang, H., Yang, J., Li, C., Yang, J., Su, H., Zhu, J., Zhang, L.: Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection. arXiv:2303.05499 (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 304, + 482, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 482, + 336 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 482, + 336 + ], + "type": "text", + "content": "21. Liu, X., Yang, J., Ding, W., Wang, T., Wang, Z., Xiong, J.: Adaptive mixture regression network with local counting map for crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 241-257 (2020)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 336, + 482, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 336, + 482, + 369 + ], + "spans": [ + { + "bbox": [ + 132, + 336, + 482, + 369 + ], + "type": "text", + "content": "22. Liu, Y., Ren, S., Chai, L., Wu, H., Xu, D., Qin, J., He, S.: Reducing spatial labeling redundancy for active semi-supervised crowd counting. IEEE Trans. Pattern Anal. Mach. Intell. 45(7), 9248-9255 (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 369, + 482, + 391 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 369, + 482, + 391 + ], + "spans": [ + { + "bbox": [ + 132, + 369, + 482, + 391 + ], + "type": "text", + "content": "23. Liu, Y., Wen, Q., Chen, H., Liu, W., Qin, J., Han, G., He, S.: Crowd counting via cross-stage refinement networks. IEEE Trans. Image Process. 29, 6800-6812 (2020)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 392, + 482, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 392, + 482, + 423 + ], + "spans": [ + { + "bbox": [ + 132, + 392, + 482, + 423 + ], + "type": "text", + "content": "24. Liu, Y., Xu, D., Ren, S., Wu, H., Cai, H., He, S.: Fine-grained domain adaptive crowd counting via point-derived segmentation. In: Proc. IEEE Int. Conf. Multimedia Expo. pp. 2363-2368 (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 424, + 482, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 424, + 482, + 445 + ], + "spans": [ + { + "bbox": [ + 132, + 424, + 482, + 445 + ], + "type": "text", + "content": "25. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: Proc. Int. Conf. Learn. Represent. (2019)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 446, + 482, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 446, + 482, + 468 + ], + "spans": [ + { + "bbox": [ + 132, + 446, + 482, + 468 + ], + "type": "text", + "content": "26. Lu, E., Xie, W., Zisserman, A.: Class-agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 669-684 (2019)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 468, + 482, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 468, + 482, + 500 + ], + "spans": [ + { + "bbox": [ + 132, + 468, + 482, + 500 + ], + "type": "text", + "content": "27. Ming, Y., Cai, Z., Gu, J., Sun, Y., Li, W., Li, Y.: Delving into out-of-distribution detection with vision-language representations. In: Adv. Neural Inf. Process. Syst. pp. 
35087-35102 (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 501, + 482, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 501, + 482, + 533 + ], + "spans": [ + { + "bbox": [ + 132, + 501, + 482, + 533 + ], + "type": "text", + "content": "28. Mundhenk, T.N., Konjevod, G., Sakla, W.A., Boakye, K.: A large contextual dataset for classification, detection and counting of cars with deep learning. In: Proc. Eur. Conf. Comput. Vis. pp. 785-800 (2016)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 534, + 482, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 534, + 482, + 555 + ], + "spans": [ + { + "bbox": [ + 132, + 534, + 482, + 555 + ], + "type": "text", + "content": "29. Nguyen, T., Pham, C., Nguyen, K., Hoai, M.: Few-shot object counting and detection. In: Proc. Eur. Conf. Comput. Vis. pp. 348-365 (2022)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 555, + 482, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 555, + 482, + 599 + ], + "spans": [ + { + "bbox": [ + 132, + 555, + 482, + 599 + ], + "type": "text", + "content": "30. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Proc. Int. Conf. Mach. Learn. pp. 8748-8763 (2021)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 600, + 482, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 600, + 482, + 621 + ], + "spans": [ + { + "bbox": [ + 132, + 600, + 482, + 621 + ], + "type": "text", + "content": "31. Ranjan, V., Le, H.M., Hoai, M.: Iterative crowd counting. In: Proc. Eur. Conf. Comput. Vis. pp. 278-293 (2018)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 622, + 482, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 622, + 482, + 643 + ], + "spans": [ + { + "bbox": [ + 132, + 622, + 482, + 643 + ], + "type": "text", + "content": "32. Ranjan, V., Nguyen, M.H.: Exemplar free class agnostic counting. In: Proc. Asian Conf. Comput. Vis. pp. 71-87 (2022)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 644, + 482, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 644, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 132, + 644, + 482, + 666 + ], + "type": "text", + "content": "33. Ranjan, V., Sharma, U., Nguyen, T., Hoai, M.: Learning to count everything. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 3394-3403 (2021)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 220, + 100 + ], + "type": "text", + "content": "H. Zhu et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 657 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "text", + "content": "34. Sam, D.B., Agarwalla, A., Joseph, J., Sindagi, V.A., Babu, R.V., Patel, V.M.: Completely self-supervised crowd counting via distribution matching. In: Proc. Eur. Conf. Comput. Vis. pp. 186-204 (2022)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 149, + 482, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 149, + 482, + 182 + ], + "spans": [ + { + "bbox": [ + 130, + 149, + 482, + 182 + ], + "type": "text", + "content": "35. Shi, M., Lu, H., Feng, C., Liu, C., Cao, Z.: Represent, compare, and learn: A similarity-aware framework for class-agnostic counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 9529–9538 (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 182, + 482, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 182, + 482, + 203 + ], + "spans": [ + { + "bbox": [ + 130, + 182, + 482, + 203 + ], + "type": "text", + "content": "36. Shi, Z., Sun, Y., Zhang, M.: Training-free object counting with prompts. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 323-331 (2024)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 203, + 482, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 203, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 203, + 482, + 236 + ], + "type": "text", + "content": "37. Song, S., Wan, J., Yang, Z., Tang, J., Cheng, W., Bai, X., Yao, C.: Vision-language pre-training for boosting scene text detectors. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15681-15691 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 236, + 482, + 268 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 236, + 482, + 268 + ], + "spans": [ + { + "bbox": [ + 130, + 236, + 482, + 268 + ], + "type": "text", + "content": "38. Sun, G., An, Z., Liu, Y., Liu, C., Sakaridis, C., Fan, D., Van Gool, L.: Indiscernible object counting in underwater scenes. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 13791-13801 (2023)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 268, + 482, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 268, + 482, + 289 + ], + "spans": [ + { + "bbox": [ + 130, + 268, + 482, + 289 + ], + "type": "text", + "content": "39. Tian, C., Zhang, X., Liang, X., Li, B., Sun, Y., Zhang, S.: Knowledge distillation with fast CNN for license plate detection. IEEE Trans. Intell. Transp. Syst. (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 289, + 482, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 482, + 333 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 482, + 333 + ], + "type": "text", + "content": "40. Tyagi, A.K., Mohapatra, C., Das, P., Makharia, G., Mehra, L., AP, P., Mausam: DeGPR: Deep guided posterior regularization for multi-class cell detection and counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 
23913-23923 (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 333, + 482, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 333, + 482, + 365 + ], + "spans": [ + { + "bbox": [ + 130, + 333, + 482, + 365 + ], + "type": "text", + "content": "41. Dukic, N., Lukezic, A., Zavrtanik, V., Kristan, M.: A low-shot object counting network with iterative prototype adaptation. In: Proc. IEEE/CVF Int. Conf. Comput. Vis. pp. 18872-18881 (2023)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 365, + 482, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 365, + 482, + 398 + ], + "spans": [ + { + "bbox": [ + 130, + 365, + 482, + 398 + ], + "type": "text", + "content": "42. Wang, Z., Xiao, L., Cao, Z., Lu, H.: Vision transformer off-the-shelf: A surprising baseline for few-shot class-agnostic counting. In: Proc. AAAI Conf. Artif. Intell. pp. 5832-5840 (2024)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 398, + 482, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 398, + 482, + 430 + ], + "spans": [ + { + "bbox": [ + 130, + 398, + 482, + 430 + ], + "type": "text", + "content": "43. Xie, D., Liu, L., Zhang, S., Tian, J.: A unified multi-modal structure for retrieving tracked vehicles through natural language descriptions. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops. pp. 5418-5426 (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 430, + 482, + 463 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 430, + 482, + 463 + ], + "spans": [ + { + "bbox": [ + 130, + 430, + 482, + 463 + ], + "type": "text", + "content": "44. Xiong, Z., Chai, L., Liu, W., Liu, Y., Ren, S., He, S.: Glance to count: Learning to rank with anchors for weakly-supervised crowd counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 342-351 (2024)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 463, + 482, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 463, + 482, + 484 + ], + "spans": [ + { + "bbox": [ + 130, + 463, + 482, + 484 + ], + "type": "text", + "content": "45. Xu, J., Le, H., Nguyen, V., Ranjan, V., Samaras, D.: Zero-shot object counting. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 15548-15557 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 130, + 484, + 482, + 506 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 484, + 482, + 506 + ], + "spans": [ + { + "bbox": [ + 130, + 484, + 482, + 506 + ], + "type": "text", + "content": "46. Yang, S., Su, H., Hsu, W.H., Chen, W.: Class-agnostic few-shot object counting. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 869-877 (2021)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 506, + 482, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 506, + 482, + 538 + ], + "spans": [ + { + "bbox": [ + 130, + 506, + 482, + 538 + ], + "type": "text", + "content": "47. You, Z., Yang, K., Luo, W., Lu, X., Cui, L., Le, X.: Few-shot object counting with similarity-aware feature enhancement. In: Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. pp. 
6304-6313 (2023)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 130, + 538, + 482, + 570 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 538, + 482, + 570 + ], + "spans": [ + { + "bbox": [ + 130, + 538, + 482, + 570 + ], + "type": "text", + "content": "48. Zhang, Z., Liu, K., Gao, F., Li, X., Wang, G.: Vision-based vehicle detecting and counting for traffic flow analysis. In: Proc. IEEE Int. Joint Conf. Neural Networks. pp. 2267-2273 (2016)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 571, + 482, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 571, + 482, + 592 + ], + "spans": [ + { + "bbox": [ + 130, + 571, + 482, + 592 + ], + "type": "text", + "content": "49. Zheng, Y., Wu, J., Qin, Y., Zhang, F., Cui, L.: Zero-shot instance segmentation. In: Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. pp. 2593-2602 (2021)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 130, + 592, + 482, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 592, + 482, + 624 + ], + "spans": [ + { + "bbox": [ + 130, + 592, + 482, + 624 + ], + "type": "text", + "content": "50. Zhu, H., Yuan, J., Zhong, X., Liao, L., Wang, Z.: Find gold in sand: Fine-grained similarity mining for domain-adaptive crowd counting. IEEE Trans. Multimedia 26, 3842-3855 (2024)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 130, + 624, + 482, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 624, + 482, + 657 + ], + "spans": [ + { + "bbox": [ + 130, + 624, + 482, + 657 + ], + "type": "text", + "content": "51. Zhu, H., Yuan, J., Zhong, X., Yang, Z., Wang, Z., He, S.: DAOT: Domain-agnostically aligned optimal transport for domain-adaptive crowd counting. In: Proc. ACM Multimedia. pp. 
4319-4329 (2023)" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 244, + 91, + 447, + 102 + ], + "type": "text", + "content": "Zero-shot Object Counting with Good Exemplars" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_content_list.json b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d911efa2f356d8eee84addf8904375f6ad4aada8 --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_content_list.json @@ -0,0 +1,1532 @@ +[ + { + "type": "text", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "text_level": 1, + "bbox": [ + 233, + 138, + 769, + 186 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Soyeong Kwon*, Taegyeong Lee*, and Taehwan Kim", + "bbox": [ + 310, + 212, + 691, + 227 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Artificial Intelligence Graduate School, UNIST {soyoung17, taegyeonglee, taehwankim}@unist.ac.kr", + "bbox": [ + 310, + 239, + 691, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Text-guided image editing and generation methods have diverse real-world applications. However, text-guided infinite image synthesis faces several challenges. First, there is a lack of text-image paired datasets with high-resolution and contextual diversity. Second, expanding images based on text requires global coherence and rich local context understanding. Previous studies have mainly focused on limited categories, such as natural landscapes, and also required to train on high-resolution images with paired text. To address these challenges, we propose a novel approach utilizing Large Language Models (LLMs) for both global coherence and local context understanding, without any high-resolution text-image paired training dataset. We train the diffusion model to expand an image conditioned on global and local captions generated from the LLM and visual feature. At the inference stage, given an image and a global caption, we use the LLM to generate a next local caption to expand the input image. Then, we expand the image using the global caption, generated local caption and the visual feature to consider global consistency and spatial local context. In experiments, our model outperforms the baselines both quantitatively and qualitatively. 
Furthermore, our model demonstrates the capability of text-guided arbitrary-sized image generation in zero-shot manner with LLM guidance.", + "bbox": [ + 261, + 304, + 738, + 580 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Image outpainting $\\cdot$ Large language models (LLMs) $\\cdot$ Diffusion models", + "bbox": [ + 261, + 594, + 738, + 622 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 648, + 375, + 664 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recently the field of image generation has witnessed a significant advancement in synthesizing high-resolution images from text inputs. However, the existing studies [6,13,14,19] face difficulties in generating arbitrary-size image from text with diverse context because of the following challenges. Firstly, there is a lack of high-resolution text-image paired datasets with diverse contexts. Several high-resolution images [24] may not include rich context since most of them are online shopping product photos or individual portraits. Secondly, it is not just about repetitive expansion; it is essential to expand image depicting rich content based on given text description, while maintaining visual consistency [14]. Most prior", + "bbox": [ + 212, + 679, + 787, + 816 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Equal contributions (alphabetically ordered by last name.)", + "bbox": [ + 217, + 824, + 624, + 840 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "research [4,13,14] has focused on datasets [4,30] within limited categories, such as natural landscapes. Nevertheless, in the real world, it is desirable to depict the detailed surroundings beyond a given image, guided by textual descriptions, while ensuring visual consistency with the overall context. Therefore, unlike prior image outpainting models [4,7,11-14,25] that focus on limited datasets or unconditional image outpainting, we address this issue in a zero-shot manner by shifting the image autoregressively based on diverse contexts utilizing Large Language Models (LLMs).", + "bbox": [ + 212, + 146, + 787, + 267 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recent research [1,9,26,28] has demonstrated that LLMs can perform multimodal tasks, while understanding the visual content as text descriptions. Furthermore, as illustrated in Figure 1, we empirically find that LLMs are able to describe (and thus imagine) the scene beyond the image in text, using only the image captions. This shows that, with the LLMs, image captioning datasets can encompass diverse contexts extending beyond its resolution.", + "bbox": [ + 212, + 267, + 787, + 358 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "By leveraging the capabilities of the LLMs, we propose a novel approach that can expand an image to arbitrary size without the need for high-resolution, text-image paired datasets. Our model leverages the LLMs to incorporate global contextual information and uses a diffusion model to generate high-quality and coherent images across various contexts.", + "bbox": [ + 212, + 358, + 787, + 434 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address the lack of high-resolution text-image paired datasets with rich contexts, we utilize the LLMs to generate the captions that describe scenes beyond the image from the existing datasets [10, 15, 21]. We take a two-step process. 
As depicted in Figure 1 (a), first, we generate imaginary local captions outside of the image from the annotated caption of existing text-image paired datasets. Each of the generated captions describes details about individual unfolding scenes. Next, as shown in Figure 1 (b), we summarize the annotated caption and the generated local captions to create a global caption that describes the surroundings of the image for global and local context consistency.", + "bbox": [ + 212, + 434, + 787, + 571 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The global image caption describes the entire image beyond the local image, while the local captions provide semantic details for filling in the local masked image. We input these captions into our proposed diffusion model [22] as a textual condition to fill in the local masked image while maintaining the global context consistency as illustrated in Figure 2.", + "bbox": [ + 212, + 571, + 787, + 646 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In order to expand images guided by text while considering both global and local contexts, as illustrated in Figure 2, we train our model using global and local captions as textual conditions and CLIP [20] visual features as visual condition, with the local masked image serving as input. We make four local masked images by masking the top, bottom, left, and right sections. During inference, we expand the image gradually, by shifting patch by patch with LLM guidance. We input a generated local image into the LLM and it generates a next local caption in an autoregressive manner for expanding the image.", + "bbox": [ + 212, + 646, + 787, + 767 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Experimental results show that our model outperforms the baselines, demonstrating the ability to arbitrarily expand images in a zero-shot manner with text and generate realistic high-resolution images with rich context.", + "bbox": [ + 212, + 768, + 787, + 813 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our contributions are as follows:", + "bbox": [ + 238, + 816, + 568, + 830 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- To the best of our knowledge, we are first to propose zero-shot text-guided infinite image synthesis without training on high resolution image. We introduce a novel approach with LLM guidance for zero-shot text-guided image outpainting.", + "bbox": [ + 223, + 146, + 785, + 207 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We can expand images preserving visual consistency by shifting local masked images in an autoregressive manner. Additionally, we can generate arbitrary-sized images that incorporate diverse contexts with global consistency by conditioning on the global caption and the local caption generated with LLM effectively.", + "bbox": [ + 225, + 209, + 785, + 284 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- In experimental results, our model outperforms baselines in both quantitative and qualitative evaluations. 
These results show the potential of our model for real-world applications.", + "bbox": [ + 225, + 286, + 785, + 333 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 364, + 387, + 382 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Image Inpainting. Text-guided image inpainting, which involves filling in a portion of an image based on input text, is closely related to text-guided image outpainting [4]. Existing image inpainting methods [2, 5, 17, 18, 22, 29] include models based on GANs and diffusion-based methods. Recently, various works [2, 8, 18, 22] have focused on enhancing inpainting capabilities across general domains with diffusion models. Stable Diffusion Inpainting [22], Blended-Latent Diffusion [2] and PowerPaint [31] involve taking an image and a mask as input and then filling in the image based on the text. These studies effectively edit the masked portions of given images from text, understanding the content well.", + "bbox": [ + 212, + 398, + 787, + 547 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Image Outpainting. There are various studies [4, 7, 11, 14, 25, 27] aimed at infinitely expanding images. InfinityGAN [14], a GAN-based model, proposes a method for generating arbitrarily sized images unconditionally. This approach is trained on landscape image dataset aiming to capture both local and global consistency while generate realistic arbitrarily sized images without repetitive patterns. Additionally, InOut [4], which uses GAN inversion for image outpainting, avoids the need of sequential outpainting. While previous models [4, 12-14] have attempted to address the challenging task of image outpainting, the lack of high-resolution text-image paired dataset still leads these methods to focus on limited categories, such as natural landscapes.", + "bbox": [ + 212, + 550, + 787, + 702 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Text-guided Image Outpainting. The task of arbitrarily extending images from text is more challenging than unconditional image outpainting due to the scarcity of datasets and the difficulty of maintaining global and local consistency. Nuwa-Infinity [13] successfully performs text-guided image outpainting in an autoregressive manner. However, due to the lack of high-resolution datasets containing rich content, Nuwa-Infinity, like previous studies [4, 12, 14], performs text-guided image outpainting on limited datasets [4, 30] such as nature landscapes. To the best of our knowledge, we are the first to arbitrarily expand images from general text using LLM and diffusion model in a zero-shot manner.", + "bbox": [ + 212, + 704, + 787, + 839 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/9678735c1a6c9b7a4afc25f2c1dfb9773f96a111562880d55a9e52bd58e01cc6.jpg", + "image_caption": [ + "Fig. 1: Global caption generation with LLM for training. To address the lack of text-image paired datasets with high resolution images that have rich context, we generate our global caption from local image captions using the LLM." 
+ ], + "image_footnote": [], + "bbox": [ + 225, + 146, + 776, + 242 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/37b2566e8eea584991ba74b5310581449fc6ecab2caa0102683f05dac350b78f.jpg", + "image_caption": [ + "Fig. 2: Model architecture. We fine-tune the diffusion model [22] using local masked image as input, conditioned on the $W$ vector. Green boxes are trainable networks. Blue boxes are frozen networks." + ], + "image_footnote": [], + "bbox": [ + 225, + 305, + 777, + 424 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 214, + 501, + 330, + 517 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In the training stage, we train our model conditioned on a global caption, local caption, and visual features. In the inference stage, we expand the given image conditioned on the global caption, generated local caption and the visual feature. Through this approach, our model is able to perform the text-guided image outpainting task without high-resolution text-image paired datasets.", + "bbox": [ + 212, + 521, + 787, + 598 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Global Caption Generation for Training", + "text_level": 1, + "bbox": [ + 214, + 618, + 591, + 635 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To train the model without a high-resolution text-image paired dataset, we generate imaginary global captions describing the expanded image based on the local captions using the LLM in training step. We consider a $512 \\times 512$ resolution image as a local image, and an annotated caption of the image as a local caption. We generate a global caption that depicts diverse contexts from the annotated caption by leveraging the LLM. To generate a global caption, we follow two steps. Firstly, using an annotated caption as a local caption, we create imaginary local captions that describe the surroundings of the given image by using the LLM. As seen in Figure 1, in the stage (a), we input an annotated caption, \"A boy and a girl playing on the beach.\", to the LLM with the instruction, \"Imagine caption for what happen outside of these caption without sound\". Then the LLM generates several local captions following the content of the given caption, such as \"A loving couple meanders along the sandy shores of the beach, basking in the serene", + "bbox": [ + 212, + 643, + 787, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/dc7f78deedd0875f0bb0581dea4be1f8012fa2131f33861f7dce0963846b6cc5.jpg", + "image_caption": [ + "GT: Two bicycles are standing behind two people sitting on the grass near a body of water." + ], + "image_footnote": [], + "bbox": [ + 264, + 156, + 429, + 297 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/be35b199f4c2d7b55118f56a674e0a3236d5e06ff7f2783bfd01273d26ad12a0.jpg", + "image_caption": [ + "Fig.3:Masked image generation. We mask the images in four directions: top,bottom,left,and right.", + "Fig. 4: Local caption generation during inference. Using the input image and the instruction, the LLM generates an imaginary local caption." 
+ ], + "image_footnote": [], + "bbox": [ + 527, + 143, + 777, + 309 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "ambiance.\" These generated local captions depict various local contexts within the expanded image by imagining the scene outside of the given local image. Next, in the stage (b), we create a global caption by summarizing the annotated caption and the generated local captions. Using the instruction, \"Summarize the captions\", we generate a global caption, \"A beach scene with a couple strolling, playful children and a dog, people exploring shops, and two kids enjoying the sand.\"", + "bbox": [ + 212, + 398, + 787, + 502 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The global caption summarizes an annotated caption and a variety of imaginary local captions, thereby acquiring the global context of the image that is expanded from the local image. Also we empirically found that this two-step process can generate a global caption with more rich contents for the given local image by leveraging the LLM.", + "bbox": [ + 212, + 503, + 787, + 580 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Training Pipeline", + "text_level": 1, + "bbox": [ + 214, + 603, + 405, + 618 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To expand images from general text, we fine-tune a pre-trained Stable Diffusion model [22]. As shown in Figure 3, first, we take local masked images $M_{l}$ , each masked on the top, bottom, left, and right.", + "bbox": [ + 212, + 628, + 785, + 672 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To maintain spatial information and global visual consistency of the images generated thus far, we input a generated global image $G_{i}$ to the CLIP [20] vision encoder to extract visual feature $E_{i}$ . Since there is no high-resolution image available in the training step, we use an unmasked area of the local masked image $M_{l}$ as the generated global image $G_{i}$ . Also, as shown in Figure 2 and Equation 1, we concatenate the embeddings $E_{g}$ of global caption $P_{g}$ with embeddings $E_{l}$ of local captions $P_{l}$ . Then we extract the fused textual feature by compressing the concatenated vector through a Multi-Layer Perceptron (MLP) composed of two linear layers. As we fine-tune our model conditioned on the compressed textual feature, our model can reflect both global and local contexts when generating images.", + "bbox": [ + 212, + 674, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture.", + "bbox": [ + 261, + 147, + 750, + 157 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/023979a56c831df7177a4566ef3346f14dae50534e1120601ea10999df8d4253.jpg", + "image_caption": [ + "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture.", + "Fig. 5: Inference Pipeline. We expand the local image autoregressively by conditioning on the global caption, local caption generated by the LLM and the visual feature. The figure image is generated with a 16-step process $(4608 \\times 512)$ . 
The red box is a local masked image, and the blue box is an expanded global image that is input into the CLIP image encoder." + ], + "image_footnote": [], + "bbox": [ + 230, + 157, + 776, + 476 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nE_{t} = \mathrm{MLP}\left(E_{g}, E_{l}\right), \quad W = \mathrm{Concat}\left(E_{i}, E_{t}\right) \tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 343, + 595, + 785, + 613 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To consider both textual and visual information effectively, we expand the cross-attention dimension of the U-Net in the pre-trained Stable Diffusion model [22]. After matching the dimension of the visual feature $E_{i}$ ( $77 \times 768$ ) with the textual feature $E_{t}$ ( $77 \times 768$ ), we concatenate them to create the $W$ vector ( $154 \times 768$ ). Then we apply it as cross-attention to the U-Net. We train our model end-to-end using MSE loss, following Stable Diffusion [22]. We provide details in the supplementary material.", + "bbox": [ + 212, + 616, + 787, + 722 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Through this method, we train our model to expand the given local image to represent various contexts while maintaining visual consistency, by conditioning on the global caption, local caption, and visual features.", + "bbox": [ + 212, + 723, + 787, + 767 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 Inference Pipeline", + "text_level": 1, + "bbox": [ + 214, + 787, + 413, + 803 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We perform inference as shown in Figure 5. First, a local image and a global caption are given as input. We then apply a mask to the image in the direction of the", + "bbox": [ + 212, + 809, + 785, + 839 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "desired expansion to expand this image. Then, we generate an imaginary local caption with the LLM to fill in the local masked image. Figure 4 illustrates the process of generating an imaginary local caption. We input a local image and the instruction \"Create a short sentence outside of the given image to expand this image to the left.\" into the LLM to generate the local caption. By providing the expanding direction with the instruction, the LLM can effectively imagine the local caption which describes the scene surrounding the given local image.", + "bbox": [ + 212, + 146, + 787, + 252 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Next, we shift the local masked image autoregressively. To expand a local image that incorporates the details of the local caption while considering the global semantic context, we use both the global and local captions as the text condition. After extracting the embeddings of these captions, we concatenate the vectors. Then we input the vector into the MLP layer. By compressing the vector, we extract the textual feature from global and local captions, $E_{t}$ ( $77 \times 768$ ). Additionally, to maintain visual consistency and understand the spatial information of the previously generated image, we use the CLIP image embedding of the generated global image as the visual feature, $E_{i}$ ( $77 \times 768$ ).
Then we create a conditioning vector, $W$ ( $154 \\times 768$ ) by concatenating both textual and visual features. Our model expands an image with each step conditioning on the vector, $W$ , with an expanded cross-attention dimension ( $154 \\times 768$ ). This enables us to generate an output image by considering on the textual and visual features. Also we can arbitrarily extend the input local image in an autoregressive manner while maintaining global coherence and local consistency.", + "bbox": [ + 212, + 252, + 789, + 479 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4 Experiment", + "text_level": 1, + "bbox": [ + 214, + 500, + 366, + 517 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 Experimental Setup", + "text_level": 1, + "bbox": [ + 214, + 530, + 428, + 545 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Implementation detail. We use 100,000 text-image pairs from the MS-COCO [15] dataset. We construct global captions on MS-COCO [15] using GPT 3.5 [3] following the Section 3.1. We fine-tune Stable Diffusion 1.5 [22] for 25 epochs with a batch size of 20, using two NVIDIA A100 GPUs. We use LLAVA 1.6 [16] to generate the local captions during the inference. We provide the training dataset examples to the supplementary material.", + "bbox": [ + 212, + 551, + 807, + 643 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Baselines. Since we focus on text-guided infinite image synthesis in zero-shot manner, it is challenging to select the baseline models. For example, previous models [4, 12-14], such as InfinityGAN [14] performs the unconditional image outpainting and NuWA-Infinity [13] is mainly focused on the limited categories such as natural landscapes. Also as NuWA-Infinity [13] require high resolution training dataset and do not provide the official code, we cannot compare with it. Therefore, we compare our model with the text-guided inpainting models such as SD Inpainting model [22], Blended Latent Diffusion [2] and PowerPaint [31] which can be applied to text-guided image outpainting, and for which pre-trained models are available. We use only global caption as the text condition for the baselines with the same masking setting as ours.", + "bbox": [ + 212, + 643, + 787, + 809 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Evaluation Datasets. To evaluate the text-guided image outpainting performance, we utilize image captioning datasets, MS-COCO [15], Flickr 8k [10] and", + "bbox": [ + 214, + 809, + 787, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/73ecaf89c45671c73e8f7854cc8598f2e4ae35cc7fd52f1abcdd6788bd9c8dd2.jpg", + "table_caption": [ + "Table 1: Quantitative evaluations with baselines. $\\times 4$ corresponds to the image being expanded four times, and $\\times 8$ corresponds to the image being expanded eight times." + ], + "table_footnote": [], + "table_body": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
SD Inp [22]14.3127.4111.0328.3714.5327.628.5527.416.2528.378.8827.62
BLD [2]11.8827.7310.7828.8212.7927.966.3927.736.8628.828.1127.96
PP [31]12.9127.429.7528.379.8827.637.3727.426.0128.377.1527.63
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "bbox": [ + 217, + 184, + 782, + 267 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "UIUC Pascal [21], which are text-image paired datasets with various context. We randomly use 1,000 text-image pair samples for our evaluation on each datasets. We divided dataset into four equal parts, each comprising $25\\%$ of the data, and applied masking as shown in Figure 3: top, bottom, left, and right. To generate a global caption, we use GPT-3.5 [3] based on the annotated caption, as described in Section 3.1.", + "bbox": [ + 212, + 273, + 785, + 364 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluation Metrics. We compare our model with the baselines using CLIP-SIM [20] (average CLIP similarity between entire expanded image and global caption), and Inception score (IS) [23] as evaluation metrics. We are unable to use FID and KID evaluation metrics because we do not have the ground truth images for the extended images.", + "bbox": [ + 212, + 364, + 785, + 441 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2 Quantitative Result", + "text_level": 1, + "bbox": [ + 215, + 459, + 426, + 474 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To evaluate the performance of our model, we compare our model with SD Inpainting model (SD Inp) [22], Blended Latent Diffusion (BLD) [2] and PowerPaint (PP) [31] on three datasets [10, 15, 21].", + "bbox": [ + 212, + 481, + 785, + 527 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Image Extension $\\times 4$ experiment. We expand the image four times, and the resolution of the expanded image is $1536 \\times 512$ or $512 \\times 1536$ . As shown in Table 1, our model outperforms the baselines [2,22,31] in terms of IS [23] and CLIPSIM [20]. Since our model expands an image conditioned on a local caption generated by LLM, which represents the details within a global caption, the expanded image is faithful to the global caption while preserving its contextual coherence. However, the baseline models repetitively expand images and do not contain the rich context beyond the global caption.", + "bbox": [ + 212, + 527, + 785, + 648 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Image Extension $\\times 8$ experiment. We expand the image eight times, and the resolution of the expanded image is $2560 \\times 512$ or $512 \\times 2560$ . As shown in Table 1, our model shows better performance than the baseline models in IS [23] and CLIPSIM [20]. These results show that our model can maintain visual quality and global coherence while generating images with a more diverse context as it extends more images.", + "bbox": [ + 212, + 648, + 785, + 739 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3 Qualitative Analysis", + "text_level": 1, + "bbox": [ + 215, + 757, + 429, + 773 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We qualitatively analyze the generated results of our model and baselines, specifically focusing on the aspects, \"text matching\", \"image quality\", and \"global coherence\". 
Also we provide more generated samples with larger resolutions in the supplementary material.", + "bbox": [ + 212, + 779, + 785, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/78414fa6bbf36391ba4dbfa8b074a2abf3de4f19f3c4834a0ea043de94ff5972.jpg", + "image_caption": [ + "Fig. 6: Comparison of generated image results. We expand the image eight times. The expanded image has a resolution of $512 \\times 2560$ or $2560 \\times 512$ . The red box is the given local image. We provide more samples in the supplementary material." + ], + "image_footnote": [], + "bbox": [ + 220, + 146, + 485, + 579 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/5e1ccdf156c814f162c2817fc1d442b37a9ccd00c2dd371dbf49244a89c2a82a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 148, + 767, + 580 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(i) Text Matching. It is important for the expanded image to follow the context of the given global caption without repetitive patterns. According to Figure 6 (e), our model generates objects that match the content of the global caption, such as \"traffic lights\", \"wires\" and \"building\" in a harmonious manner. It extends into one consistent image that matches the global caption. However, the baselines either reflect only partial objects mentioned in the global caption or fail to match the expanded overall image with the global caption by generating repetitive images. These results show that our model can generate an expanded image maintaining global visual consistency while successfully capturing the textual context of the global caption, compared to our baselines.", + "(ii) Image Quality. As shown in Figure 6, when expanding the image, our model shows the ability to generate clear objects in the intended direction of expansion. In contrast, the baselines [2, 22, 31] often generate blurred or indis" + ], + "bbox": [ + 212, + 642, + 787, + 839 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 284, + 114, + 732, + 128 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/18b7fd86ed104159e1b6ad503a4a84e79e76ffacbf6d59c725a92544fbc70cab.jpg", + "table_caption": [ + "Table 2: Human evaluation with baselines. Each cell lists the winning percentage of our model versus baselines. TM is \"text matching\". IQ is \"image quality\". GC is \"global coherence\". We report only our winning percentages and omit LOSS and TIE due to space." + ], + "table_footnote": [], + "table_body": "
MethodExpand × 4
MS-COCOFlickrPascal
TMIQGCTMIQGCTMIQGC
SD Inp [22]65.0071.2075.4063.0063.4075.2063.4062.2074.20
BLD [2]71.6073.0078.4071.4070.8077.0073.2069.8076.40
PP [31]71.2074.4075.0078.1073.9073.0073.8068.0070.20
MethodExpand × 8
MS-COCOFlickrPascal
TMIQGCTMIQGCTMIQGC
SD Inp [22]70.4075.2077.8069.2069.4078.4068.2068.8076.20
BLD [2]74.6077.0080.2076.1077.3080.9075.9073.4079.10
PP [31]76.4076.2074.0078.4075.0072.0075.8076.2075.20
", + "bbox": [ + 305, + 210, + 689, + 351 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "tinct objects. For instance, as depicted in Figure 6 (a), the image expanded by SD Inp [22] shows variations in the human form with each expansion, and the shapes of objects are not clear. Also, in the case of BLD [2], the objects of expanded image have distinct colors, but shapes such as bicycles and human in the image remain indistinct. These results show that our model exhibits better image quality compared to existing models when expanding images.", + "bbox": [ + 212, + 369, + 787, + 462 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "(iii) Global Coherence. When expanding images, it is crucial to maintain the overall visual consistency of the entire image and avoid the repetitive patterns. According to Figure 6, our model expands the images exhibiting overall harmony while encompassing a variety of content. However, in the case of the baselines, repetitive patterns are present, and it fails to maintain the overall positioning or global consistency of the image. In the Figure 6 (d), our model maintains overall harmony and generates objects reflecting the expansion of the image. However, the baselines repetitively generate \"tennis players\" or \"audiences\" without maintaining the positioning or global consistency of the expanded image. These results demonstrate that our model better reflects global consistency and overall harmony compared to the baselines when expanding images.", + "bbox": [ + 212, + 462, + 787, + 628 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.4 Human Evaluation", + "text_level": 1, + "bbox": [ + 214, + 640, + 418, + 654 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Because the evaluation metrics may not perfectly measure the performance of our model, we conduct a human evaluation on Amazon Mechanical Turk (AMT). For human evaluation, we randomly sample 100 generated images from each of MS-COCO [15], Flickr 8k [10], and Pascal [21] test sets, in total 300 samples. We conduct three surveys with 5 participants to compare our model with the baselines in the aspect of the text matching (TM), image quality (IQ) and global coherence (GC).", + "bbox": [ + 212, + 672, + 787, + 777 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Image Extension $\\times 4$ experiment. Table 2 shows the results of human evaluation on image expansion $\\times 4$ . participants significantly preferred our model in terms of text matching and image quality. From a global coherence aspect, our model outperformed the baselines by a large margin. These results demonstrate", + "bbox": [ + 212, + 779, + 787, + 840 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/1bd89895a68a486b2a5004c90f56a921223579661078b724a79f3508c517d5bb.jpg", + "table_caption": [ + "Table 3: Quantitative evaluations with ablation models. $\\times 4$ corresponds to the image being expanded four times, and $\\times 8$ corresponds to the image being expanded eight times." + ], + "table_footnote": [], + "table_body": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
w/o All14.6727.4010.9028.3710.6627.628.3727.426.0428.377.1427.62
w/o CLIP14.2627.5310.8028.7013.5527.748.0327.537.0628.708.3727.74
w/o LLM14.8327.4310.4428.3913.8227.639.0427.436.5928.398.8427.63
w/o GC15.5227.4211.0228.3710.5127.629.4727.426.5028.377.2727.62
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "bbox": [ + 245, + 198, + 750, + 290 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/93d57ca9ee0b10927d5fc4b79bd0e14868dc6241781c782aaf4d1c59e34ddda3.jpg", + "table_caption": [ + "Table 4: Quantitative evaluations with baselines with the LLM. We compare with baselines with local captions generated by the LLM instead of global captions." + ], + "table_footnote": [], + "table_body": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
SDInp w/ LLM [22]13.7427.7011.0128.7713.6827.888.5927.707.1928.778.7927.88
BLD w/ LLM [2]15.7227.418.8328.6110.0627.649.4727.414.9928.616.7527.64
PP w/ LLM [31]12.6527.428.7028.378.5027.637.4727.424.9828.375.6627.63
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "bbox": [ + 217, + 337, + 777, + 420 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "that our model reflects text alignment, image quality and visual consistency much better than the baselines.", + "bbox": [ + 212, + 441, + 784, + 470 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Image Extension $\\times 8$ experiment. Table 2 shows the results of human evaluation on image expansion $\\times 8$ : similar to the human evaluation of image extension $\\times 4$ , participants significantly preferred our model by a substantial margin. Furthermore, the number of participants who preferred our model was higher in extension $\\times 8$ than in extension $\\times 4$ . These results indicate that as images are expanded, our model show better performance than the baseline in all aspects.", + "bbox": [ + 212, + 473, + 787, + 564 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.5 Ablation Study", + "text_level": 1, + "bbox": [ + 214, + 590, + 390, + 606 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "To explore the impact of the proposed components, we conduct an ablation study with different models. Also we provide the human evaluation results in the supplementary material, which show that our model is preferred than ablated models. All experimental settings are the same as in Section 4.1 and Section 4.4.", + "bbox": [ + 212, + 619, + 787, + 681 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Effect of the LLM guidance and CLIP visual feature. To see the effect of the LLM guidance and CLIP visual feature, we compare our model with the w/o all model which generates an image with only a global caption. In Figure 7, the w/o all model simply reflects the keywords of the global caption, while failing to maintain global consistency and diverse context. This indicates that the w/o all model expands an image repetitively that depicts the same content without considering the overall structure. As shown in Table 3, our model outperforms the w/o all model in both IS [23] and CLIPSIM [20]. This indicates that our model can expand image better than the w/o all model in aspect of image quality and text faithfulness.", + "bbox": [ + 212, + 705, + 787, + 857 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/f177fbf09a5aecbd121fa9a122761322b5a00ded4cc682457c311f9fc66592c0.jpg", + "image_caption": [ + "Fig. 7: Comparison of generated image results between our ablation models. We expand the image eight times. The expanded image has a resolution of $512 \\times 2560$ or $2560 \\times 512$ . The red box is the given local image." + ], + "image_footnote": [], + "bbox": [ + 218, + 145, + 495, + 585 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/862953f928cfd9119bacf372a474ddef5f1225f74f9d8a30fb21a17f6cca2352.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 145, + 784, + 585 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Effect of the local caption with LLM guidance. We compare our model with the w/o LLM model which generates an image with a global caption and the CLIP visual feature. 
In Figure 7, the w/o LLM model fails to incorporate content beyond the global caption since it is conditioned only on the global caption as a textual condition. Also, the extended image does not appear as a single image but rather as a collage of the images. For example, in Figure 7 (d), our model expands the image by imagining the full view of the \"baseball stadium with spectators\" whereas the w/o LLM model extends the image by repeating the \"baseball game\" image. In Table 3, our model outperforms the w/o LLM model in both IS [23] and CLIPSIM [20]. This shows that our model can expand image with better quality and text faithfulness comparing to the w/o LLM model.", + "bbox": [ + 212, + 672, + 787, + 854 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 126 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/e809381daca1a35b3b8673d04f5259ec5ff678380877a2a438b4cc5cece62590.jpg", + "table_caption": [ + "Table 5: Quantitative evaluations with different architectures on MS-COCO dataset. The All in MLP model gets all conditions through cross-attention using a compressed vector by the MLP $(77\\times 768)$ . The All in cross-attention model gets all conditions directly through cross-attention $(231\\times 768)$ . Our model gets the textual condition, a vector compressed by the MLP, and the visual condition through cross-attention $(154\\times 768)$ ." + ], + "table_footnote": [], + "table_body": "
Expand × 4Expand × 8
ISCLIPISCLIP
All in MLP15.5727.519.1127.51
All in cross attention15.0227.429.7527.42
Ours16.0527.949.9727.94
", + "bbox": [ + 217, + 306, + 498, + 364 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/5425224a7a6ed8b72d5410c38bd1e46dee4a3dd506b9be88d1959b54e5f825b3.jpg", + "image_caption": [ + "Fig. 8: Qualitative evaluations with different architectures The red box is the given local image." + ], + "image_footnote": [], + "bbox": [ + 524, + 145, + 777, + 349 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effect of the CLIP visual feature. We compare our model with the w/o CLIP model which generates an image with a global caption and a local caption generated with the LLM. In Figure 7, comparing with our model, the w/o CLIP model often generates images with slightly lower image quality and global consistency, as it does not consider the visual feature of the overall expanded image. Figure 7 shows that the w/o CLIP model is unable to enhance the image while maintaining visual coherence. In Table 3, our model outperforms the w/o CLIP model in terms of the IS. This demonstrates that the CLIP visual feature helps the model to generate an image with better image quality. Also for CLIPSIM [20], even though the w/o CLIP model is conditioned on both global and local captions, our model generates an image that closely matches with the global caption.", + "bbox": [ + 212, + 417, + 787, + 599 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effect of the global caption. We compare our model with the w/o GC model which generates an image with a local caption generated with the LLM and CLIP visual feature. Figure 7 shows that, in comparison to our model, the w/o GC model generates images that do not maintain global consistency well. Also, since it does not consider the global context of the expanded image, the expanded images fail to maintain overall harmony. In Table 3, our model outperforms the w/o GC model in terms of IS and CLIPSIM. This demonstrates that the our model can generate images that maintain global consistency by effectively reflecting the global caption.", + "bbox": [ + 212, + 604, + 787, + 742 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effect of mask ratio. To explore various masking behaviors, we train our model on the dataset with a masking ratio of 3:1. As shown in Figure 8 (c), we found that although we can generate more content at once, it becomes more challenging to maintain global consistency when the provided(unmasked) input content gets smaller. This result demonstrates that our mask ratio is effective.", + "bbox": [ + 212, + 763, + 787, + 839 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Effect of LLM guidance for baselines. Our proposed method can effectively expand an image using both the LLM and the diffusion model. To explore its effectiveness, we compare our model with the baselines using local captions generated by the LLM instead of global captions. Table 4 shows that our model outperforms the baselines with the LLM. 
These results demonstrate the effectiveness of our architecture for this task, enhanced by the guidance of the LLM.", + "bbox": [ + 212, + 146, + 787, + 238 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "4.6 Exploring Other Model Architectures", + "text_level": 1, + "bbox": [ + 215, + 270, + 571, + 287 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We explore the effect of our model architecture by comparing it with two alternative model architectures: 1) In the all-in MLP model, we compress the global caption, local caption and CLIP visual feature by the MLP layer into a single vector $(77 \times 768)$, and then the model generates an image conditioned on the vector. 2) In the all-in cross-attention model, we concatenate the global caption, local caption and CLIP visual feature $(231 \times 768)$, and then the model generates an image conditioned on the concatenated vector through the expanded U-Net.", + "bbox": [ + 212, + 306, + 787, + 411 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In Figure 8 (a), the all-in MLP model produces images with blurred edges and indistinct objects, likely due to difficulty in representing both textual and visual features. Figure 8 (b) shows the all-in cross-attention model generating repetitive \"berry\" images, possibly influenced by textual content. In Figure 8 (c), our model achieves semantic and visual consistency with both global and local captions.", + "bbox": [ + 212, + 414, + 787, + 503 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In Table 5, our model performs better than the all-in MLP and all-in cross-attention models in both IS [23] and CLIPSIM [20]. This shows that our model architecture can reflect the content of text and visual features effectively.", + "bbox": [ + 212, + 506, + 787, + 551 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion and Limitation", + "text_level": 1, + "bbox": [ + 215, + 585, + 509, + 603 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In this work, we propose a novel zero-shot text-guided image outpainting model by addressing the two main challenges: 1) the lack of high-resolution text-image paired datasets that have rich context; 2) preserving global coherence and understanding the context. In contrast to prior research, which generates images in limited categories, we leverage the LLMs to imagine the scene outside the given image. During inference, we utilize LLMs to generate imaginary prompts to expand images. This allows us to expand the image to arbitrary size with diverse contexts. Additionally, by conditioning on the visual context, we can maintain global consistency and spatial local context. The experimental results demonstrate that our model can extend images arbitrarily in a zero-shot manner, and it offers promising opportunities for text-guided image outpainting approaches. Our model has a limitation in that it relies on a pre-trained text-to-image model, but the generated images can contain rich visual content.
For future work, we will expand to image outpainting through stories or other modalities, such as sound.", + "bbox": [ + 212, + 628, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 217, + 143, + 401, + 162 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00608, Artificial intelligence research about multi-modal interactions for empathetic conversations with humans & No.RS-2020-II201336, Artificial Intelligence graduate school support(UNIST)) and the National Research Foundation of Korea(NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00219959).", + "bbox": [ + 212, + 176, + 787, + 282 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 306, + 321, + 321 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al.: Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems 35, 23716-23736 (2022)", + "2. Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18208-18218 (2022)", + "3. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in neural information processing systems 33, 1877-1901 (2020)", + "4. Cheng, Y.C., Lin, C.H., Lee, H.Y., Ren, J., Tulyakov, S., Yang, M.H.: Inout: Diverse image outpainting via gan inversion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11431-11440 (2022)", + "5. Demir, U., Unal, G.: Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422 (2018)", + "6. Ding, Z., Zhang, M., Wu, J., Tu, Z.: Patched denoising diffusion models for high-resolution image synthesis. In: The Twelfth International Conference on Learning Representations (2023)", + "7. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: Proceedings of the seventh IEEE international conference on computer vision. vol. 2, pp. 1033-1038. IEEE (1999)", + "8. Esser, P., Rombach, R., Blattmann, A., Ommer, B.: Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in neural information processing systems 34, 3518-3532 (2021)", + "9. Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.: From images to textual prompts: Zero-shot visual question answering with frozen large language models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10867-10877 (2023)", + "10. Hodosh, M., Young, P., Hockenmaier, J.: Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47, 853-899 (2013)", + "11. 
Kopf, J., Kienzle, W., Drucker, S., Kang, S.B.: Quality prediction for image completion. ACM Transactions on Graphics (ToG) 31(6), 1-8 (2012)", + "12. Li, Z., Wang, Q., Snavely, N., Kanazawa, A.: Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In: European Conference on Computer Vision. pp. 515-534. Springer (2022)" + ], + "bbox": [ + 218, + 339, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 785, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "13. Liang, J., Wu, C., Hu, X., Gan, Z., Wang, J., Wang, L., Liu, Z., Fang, Y., Duan, N.: Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. Advances in Neural Information Processing Systems 35, 15420-15432 (2022)", + "14. Lin, C.H., Lee, H.Y., Cheng, Y.C., Tulyakov, S., Yang, M.H.: Infinitygan: Towards infinite-pixel image synthesis. arXiv preprint arXiv:2104.03963 (2021)", + "15. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. pp. 740-755. Springer (2014)", + "16. Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. Advances in neural information processing systems 36 (2024)", + "17. Liu, H., Wan, Z., Huang, W., Song, Y., Han, X., Liao, J.: Pd-gan: Probabilistic diverse gan for image inpainting. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 9371-9381 (2021)", + "18. Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021)", + "19. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)", + "20. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)", + "21. Rashtchian, C., Young, P., Hodosh, M., Hockenmaier, J.: Collecting image annotations using amazon's mechanical turk. In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon's Mechanical Turk. pp. 139-147 (2010)", + "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10684-10695 (2022)", + "23. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. Advances in neural information processing systems 29 (2016)", + "24. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. 
Advances in Neural Information Processing Systems 35, 25278-25294 (2022)", + "25. Sivic, J., Kaneva, B., Torralba, A., Avidan, S., Freeman, W.T.: Creating and exploring a large photorealistic virtual space. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. pp. 1-8. IEEE (2008)", + "26. Tsimpoukelli, M., Menick, J.L., Cabi, S., Eslami, S.M.A., Vinyals, O., Hill, F.: Multimodal few-shot learning with frozen language models. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems. vol. 34, pp. 200-212. Curran Associates, Inc. (2021), https://proceedings.neurips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cbf7d500438b875-Paper.pdf" + ], + "bbox": [ + 215, + 146, + 785, + 840 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Kwon et al.", + "bbox": [ + 271, + 114, + 349, + 127 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "27. Wang, M., Lai, Y.K., Liang, Y., Martin, R.R., Hu, S.M.: Biggerpicture: data-driven image extrapolation using graph matching. ACM Transactions on Graphics 33(6) (2014)", + "28. Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., Wang, L.: An empirical study of gpt-3 for few-shot knowledge-based vqa. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 3081-3089 (2022)", + "29. Yildirim, A.B., Pehlivan, H., Bilecen, B.B., Dundar, A.: Diverse inpainting and editing with gan inversion. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 23120-23130 (2023)", + "30. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1452-1464 (2017)", + "31. Zhuang, J., Zeng, Y., Liu, W., Yuan, C., Chen, K.: A task is worth one word: Learning with task prompts for high-quality versatile image inpainting. 
arXiv preprint arXiv:2312.03594 (2023)" + ], + "bbox": [ + 215, + 146, + 787, + 354 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 116, + 785, + 126 + ], + "page_idx": 16 + } +] \ No newline at end of file diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_model.json b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c1ddbea14e051b24c2163e8575196863dedcfea1 --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_model.json @@ -0,0 +1,1972 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.235, + 0.14, + 0.77, + 0.187 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.213, + 0.692, + 0.228 + ], + "angle": 0, + "content": "Soyeong Kwon*, Taegyeong Lee*, and Taehwan Kim" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.24, + 0.692, + 0.269 + ], + "angle": 0, + "content": "Artificial Intelligence Graduate School, UNIST {soyoung17, taegyeonglee, taehwankim}@unist.ac.kr" + }, + { + "type": "text", + "bbox": [ + 0.263, + 0.305, + 0.74, + 0.581 + ], + "angle": 0, + "content": "Abstract. Text-guided image editing and generation methods have diverse real-world applications. However, text-guided infinite image synthesis faces several challenges. First, there is a lack of text-image paired datasets with high-resolution and contextual diversity. Second, expanding images based on text requires global coherence and rich local context understanding. Previous studies have mainly focused on limited categories, such as natural landscapes, and also required to train on high-resolution images with paired text. To address these challenges, we propose a novel approach utilizing Large Language Models (LLMs) for both global coherence and local context understanding, without any high-resolution text-image paired training dataset. We train the diffusion model to expand an image conditioned on global and local captions generated from the LLM and visual feature. At the inference stage, given an image and a global caption, we use the LLM to generate a next local caption to expand the input image. Then, we expand the image using the global caption, generated local caption and the visual feature to consider global consistency and spatial local context. In experiments, our model outperforms the baselines both quantitatively and qualitatively. Furthermore, our model demonstrates the capability of text-guided arbitrary-sized image generation in zero-shot manner with LLM guidance." + }, + { + "type": "text", + "bbox": [ + 0.263, + 0.595, + 0.74, + 0.623 + ], + "angle": 0, + "content": "Keywords: Image outpainting \\(\\cdot\\) Large language models (LLMs) \\(\\cdot\\) Diffusion models" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.649, + 0.377, + 0.665 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.68, + 0.788, + 0.817 + ], + "angle": 0, + "content": "Recently the field of image generation has witnessed a significant advancement in synthesizing high-resolution images from text inputs. 
However, the existing studies [6,13,14,19] face difficulties in generating arbitrary-size image from text with diverse context because of the following challenges. Firstly, there is a lack of high-resolution text-image paired datasets with diverse contexts. Several high-resolution images [24] may not include rich context since most of them are online shopping product photos or individual portraits. Secondly, it is not just about repetitive expansion; it is essential to expand image depicting rich content based on given text description, while maintaining visual consistency [14]. Most prior" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.825, + 0.625, + 0.841 + ], + "angle": 0, + "content": "* Equal contributions (alphabetically ordered by last name.)" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.268 + ], + "angle": 0, + "content": "research [4,13,14] has focused on datasets [4,30] within limited categories, such as natural landscapes. Nevertheless, in the real world, it is desirable to depict the detailed surroundings beyond a given image, guided by textual descriptions, while ensuring visual consistency with the overall context. Therefore, unlike prior image outpainting models [4,7,11-14,25] that focus on limited datasets or unconditional image outpainting, we address this issue in a zero-shot manner by shifting the image autoregressively based on diverse contexts utilizing Large Language Models (LLMs)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.268, + 0.788, + 0.359 + ], + "angle": 0, + "content": "Recent research [1,9,26,28] has demonstrated that LLMs can perform multimodal tasks, while understanding the visual content as text descriptions. Furthermore, as illustrated in Figure 1, we empirically find that LLMs are able to describe (and thus imagine) the scene beyond the image in text, using only the image captions. This shows that, with the LLMs, image captioning datasets can encompass diverse contexts extending beyond its resolution." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.359, + 0.788, + 0.435 + ], + "angle": 0, + "content": "By leveraging the capabilities of the LLMs, we propose a novel approach that can expand an image to arbitrary size without the need for high-resolution, text-image paired datasets. Our model leverages the LLMs to incorporate global contextual information and uses a diffusion model to generate high-quality and coherent images across various contexts." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.435, + 0.788, + 0.572 + ], + "angle": 0, + "content": "To address the lack of high-resolution text-image paired datasets with rich contexts, we utilize the LLMs to generate the captions that describe scenes beyond the image from the existing datasets [10, 15, 21]. We take a two-step process. As depicted in Figure 1 (a), first, we generate imaginary local captions outside of the image from the annotated caption of existing text-image paired datasets. Each of the generated captions describes details about individual unfolding scenes. Next, as shown in Figure 1 (b), we summarize the annotated caption and the generated local captions to create a global caption that describes the surroundings of the image for global and local context consistency." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.572, + 0.788, + 0.647 + ], + "angle": 0, + "content": "The global image caption describes the entire image beyond the local image, while the local captions provide semantic details for filling in the local masked image. We input these captions into our proposed diffusion model [22] as a textual condition to fill in the local masked image while maintaining the global context consistency as illustrated in Figure 2." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.647, + 0.788, + 0.768 + ], + "angle": 0, + "content": "In order to expand images guided by text while considering both global and local contexts, as illustrated in Figure 2, we train our model using global and local captions as textual conditions and CLIP [20] visual features as visual condition, with the local masked image serving as input. We make four local masked images by masking the top, bottom, left, and right sections. During inference, we expand the image gradually, by shifting patch by patch with LLM guidance. We input a generated local image into the LLM and it generates a next local caption in an autoregressive manner for expanding the image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.769, + 0.788, + 0.814 + ], + "angle": 0, + "content": "Experimental results show that our model outperforms the baselines, demonstrating the ability to arbitrarily expand images in a zero-shot manner with text and generate realistic high-resolution images with rich context." + }, + { + "type": "text", + "bbox": [ + 0.24, + 0.818, + 0.569, + 0.832 + ], + "angle": 0, + "content": "In summary, our contributions are as follows:" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.787, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.147, + 0.786, + 0.208 + ], + "angle": 0, + "content": "- To the best of our knowledge, we are first to propose zero-shot text-guided infinite image synthesis without training on high resolution image. We introduce a novel approach with LLM guidance for zero-shot text-guided image outpainting." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.21, + 0.787, + 0.285 + ], + "angle": 0, + "content": "- We can expand images preserving visual consistency by shifting local masked images in an autoregressive manner. Additionally, we can generate arbitrary-sized images that incorporate diverse contexts with global consistency by conditioning on the global caption and the local caption generated with LLM effectively." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.287, + 0.787, + 0.334 + ], + "angle": 0, + "content": "- In experimental results, our model outperforms baselines in both quantitative and qualitative evaluations. These results show the potential of our model for real-world applications." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.365, + 0.388, + 0.383 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.399, + 0.788, + 0.548 + ], + "angle": 0, + "content": "Image Inpainting. Text-guided image inpainting, which involves filling in a portion of an image based on input text, is closely related to text-guided image outpainting [4]. Existing image inpainting methods [2, 5, 17, 18, 22, 29] include models based on GANs and diffusion-based methods. 
Recently, various works [2, 8, 18, 22] have focused on enhancing inpainting capabilities across general domains with diffusion models. Stable Diffusion Inpainting [22], Blended-Latent Diffusion [2] and PowerPaint [31] involve taking an image and a mask as input and then filling in the image based on the text. These studies effectively edit the masked portions of given images from text, understanding the content well." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.551, + 0.788, + 0.703 + ], + "angle": 0, + "content": "Image Outpainting. There are various studies [4, 7, 11, 14, 25, 27] aimed at infinitely expanding images. InfinityGAN [14], a GAN-based model, proposes a method for generating arbitrarily sized images unconditionally. This approach is trained on a landscape image dataset, aiming to capture both local and global consistency while generating realistic arbitrarily sized images without repetitive patterns. Additionally, InOut [4], which uses GAN inversion for image outpainting, avoids the need for sequential outpainting. While previous models [4, 12-14] have attempted to address the challenging task of image outpainting, the lack of high-resolution text-image paired datasets still leads these methods to focus on limited categories, such as natural landscapes." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Text-guided Image Outpainting. The task of arbitrarily extending images from text is more challenging than unconditional image outpainting due to the scarcity of datasets and the difficulty of maintaining global and local consistency. Nuwa-Infinity [13] successfully performs text-guided image outpainting in an autoregressive manner. However, due to the lack of high-resolution datasets containing rich content, Nuwa-Infinity, like previous studies [4, 12, 14], performs text-guided image outpainting on limited datasets [4, 30] such as natural landscapes. To the best of our knowledge, we are the first to arbitrarily expand images from general text using an LLM and a diffusion model in a zero-shot manner." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.147, + 0.777, + 0.243 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.25, + 0.788, + 0.294 + ], + "angle": 0, + "content": "Fig. 1: Global caption generation with LLM for training. To address the lack of text-image paired datasets with high-resolution images that have rich context, we generate our global caption from local image captions using the LLM." + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.306, + 0.779, + 0.425 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.432, + 0.788, + 0.474 + ], + "angle": 0, + "content": "Fig. 2: Model architecture. We fine-tune the diffusion model [22] using local masked image as input, conditioned on the \\(W\\) vector. Green boxes are trainable networks. Blue boxes are frozen networks."
+ }, + { + "type": "title", + "bbox": [ + 0.215, + 0.502, + 0.331, + 0.518 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.522, + 0.788, + 0.599 + ], + "angle": 0, + "content": "In the training stage, we train our model conditioned on a global caption, local caption, and visual features. In the inference stage, we expand the given image conditioned on the global caption, generated local caption and the visual feature. Through this approach, our model is able to perform the text-guided image outpainting task without high-resolution text-image paired datasets." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.619, + 0.593, + 0.636 + ], + "angle": 0, + "content": "3.1 Global Caption Generation for Training" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.789, + 0.842 + ], + "angle": 0, + "content": "To train the model without a high-resolution text-image paired dataset, we generate imaginary global captions describing the expanded image based on the local captions using the LLM in training step. We consider a \\(512 \\times 512\\) resolution image as a local image, and an annotated caption of the image as a local caption. We generate a global caption that depicts diverse contexts from the annotated caption by leveraging the LLM. To generate a global caption, we follow two steps. Firstly, using an annotated caption as a local caption, we create imaginary local captions that describe the surroundings of the given image by using the LLM. As seen in Figure 1, in the stage (a), we input an annotated caption, \"A boy and a girl playing on the beach.\", to the LLM with the instruction, \"Imagine caption for what happen outside of these caption without sound\". Then the LLM generates several local captions following the content of the given caption, such as \"A loving couple meanders along the sandy shores of the beach, basking in the serene" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.266, + 0.157, + 0.43, + 0.299 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.264, + 0.3, + 0.437, + 0.331 + ], + "angle": 0, + "content": "GT: Two bicycles are standing behind two people sitting on the grass near a body of water." + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.346, + 0.486, + 0.388 + ], + "angle": 0, + "content": "Fig.3:Masked image generation. We mask the images in four directions: top,bottom,left,and right." + }, + { + "type": "image", + "bbox": [ + 0.529, + 0.144, + 0.779, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.326, + 0.785, + 0.381 + ], + "angle": 0, + "content": "Fig. 4: Local caption generation during inference. Using the input image and the instruction, the LLM generates an imaginary local caption." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.399, + 0.788, + 0.503 + ], + "angle": 0, + "content": "ambiance.\" These generated local captions depict various local contexts within the expanded image by imagining the scene outside of the given local image. Next, in the stage (b), we create a global caption by summarizing the annotated caption and the generated local captions. 
Using the instruction, \"Summarize the captions\", we generate a global caption, \"A beach scene with a couple strolling, playful children and a dog, people exploring shops, and two kids enjoying the sand.\"" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.505, + 0.788, + 0.581 + ], + "angle": 0, + "content": "The global caption summarizes an annotated caption and a variety of imaginary local captions, thereby acquiring the global context of the image that is expanded from the local image. Also we empirically found that this two-step process can generate a global caption with more rich contents for the given local image by leveraging the LLM." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.604, + 0.406, + 0.619 + ], + "angle": 0, + "content": "3.2 Training Pipeline" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.629, + 0.787, + 0.674 + ], + "angle": 0, + "content": "To expand images from general text, we fine-tune a pre-trained Stable Diffusion model [22]. As shown in Figure 3, first, we take local masked images \\(M_{l}\\), each masked on the top, bottom, left, and right." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.675, + 0.789, + 0.841 + ], + "angle": 0, + "content": "To maintain spatial information and global visual consistency of the images generated thus far, we input a generated global image \\( G_{i} \\) to the CLIP [20] vision encoder to extract visual feature \\( E_{i} \\). Since there is no high-resolution image available in the training step, we use an unmasked area of the local masked image \\( M_{l} \\) as the generated global image \\( G_{i} \\). Also, as shown in Figure 2 and Equation 1, we concatenate the embeddings \\( E_{g} \\) of global caption \\( P_{g} \\) with embeddings \\( E_{l} \\) of local captions \\( P_{l} \\). Then we extract the fused textual feature by compressing the concatenated vector through a Multi-Layer Perceptron (MLP) composed of two linear layers. As we fine-tune our model conditioned on the compressed textual feature, our model can reflect both global and local contexts when generating images." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.148, + 0.751, + 0.159 + ], + "angle": 0, + "content": "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture." + }, + { + "type": "image", + "bbox": [ + 0.231, + 0.159, + 0.777, + 0.477 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.264, + 0.478, + 0.751, + 0.489 + ], + "angle": 0, + "content": "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture." + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.503, + 0.789, + 0.573 + ], + "angle": 0, + "content": "Fig. 5: Inference Pipeline. We expand the local image autoregressively by conditioning on the global caption, local caption generated by the LLM and the visual feature. The figure image is generated with a 16-step process \\((4608 \\times 512)\\). The red box is a local masked image, and the blue box is an expanded global image that is input into the CLIP image encoder." 
+ }, + { + "type": "equation", + "bbox": [ + 0.344, + 0.597, + 0.787, + 0.614 + ], + "angle": 0, + "content": "\\[\nE _ {t} = M L P \\left(E _ {g}, E _ {l}\\right), \\quad W = C o n c a t \\left(E _ {i}, E _ {t}\\right) \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.617, + 0.788, + 0.723 + ], + "angle": 0, + "content": "To consider both textual and visual information effectively, we expand the cross-attention dimension of the U-Net in the pre-trained Stable Diffusion model [2]. After matching the dimension of the visual feature \\( E_{i} \\) (\\( 77 \\times 768 \\)) with the textual feature \\( E_{t} \\) (\\( 77 \\times 768 \\)), we concatenate them to create the \\( W \\) vector (\\( 154 \\times 768 \\)). Then we apply it as cross-attention to the U-Net. We train our model end-to-end using MSE loss, following Stable Diffusion [22]. We provide detail in the supplementary material." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.724, + 0.788, + 0.768 + ], + "angle": 0, + "content": "Through this method, we train our model to expand the given local image to represent various contexts while maintaining visual consistency, by conditioning on the global caption, local caption, and visual features." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.788, + 0.414, + 0.804 + ], + "angle": 0, + "content": "3.3 Inference Pipeline" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.84 + ], + "angle": 0, + "content": "We perform inference as shown in Figure 5. First, a local image and a global caption are inputted. We then apply a mask to the image in the direction of the" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.253 + ], + "angle": 0, + "content": "desired expansion to expand this image. And then, we generate an imaginary local caption with the LLM to fill in the local masked image. Figure 4 illustrates the process of generating an imaginary local caption. We input a local image and the instruction \"Create a short sentence outside of the given image to expand this image to the left.\" into the LLM to generate the local caption. By providing the expanding direction with the instruction, the LLM can effectively imagine the local caption which describes the scene surrounding the given local image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.253, + 0.79, + 0.481 + ], + "angle": 0, + "content": "Next, we shift the local masked image autoregressively. To expand a local image that incorporates the details of the local caption while considering the global semantic context, we use both the global and local captions as text condition. After extracting the embeddings of these captions, we concatenate the vectors. Then we input the vector into the MLP layer. By compressing the vector, we extract the textual feature from global and local captions, \\( E_{t} \\) (\\( 77 \\times 768 \\)). Additionally, to maintain visual consistency and understand the spatial information of the previously generated image, we use the CLIP image embedding of the generated global image as the visual feature, \\( E_{i} \\) (\\( 77 \\times 768 \\)). 
Then we create a conditioning vector, \\( W \\) (\\( 154 \\times 768 \\)) by concatenating both textual and visual features. Our model expands an image with each step conditioning on the vector, \\( W \\), with an expanded cross-attention dimension (\\( 154 \\times 768 \\)). This enables us to generate an output image by considering on the textual and visual features. Also we can arbitrarily extend the input local image in an autoregressive manner while maintaining global coherence and local consistency." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.501, + 0.367, + 0.518 + ], + "angle": 0, + "content": "4 Experiment" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.531, + 0.429, + 0.546 + ], + "angle": 0, + "content": "4.1 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.553, + 0.808, + 0.644 + ], + "angle": 0, + "content": "Implementation detail. We use 100,000 text-image pairs from the MS-COCO [15] dataset. We construct global captions on MS-COCO [15] using GPT 3.5 [3] following the Section 3.1. We fine-tune Stable Diffusion 1.5 [22] for 25 epochs with a batch size of 20, using two NVIDIA A100 GPUs. We use LLAVA 1.6 [16] to generate the local captions during the inference. We provide the training dataset examples to the supplementary material." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.645, + 0.789, + 0.81 + ], + "angle": 0, + "content": "Baselines. Since we focus on text-guided infinite image synthesis in zero-shot manner, it is challenging to select the baseline models. For example, previous models [4, 12-14], such as InfinityGAN [14] performs the unconditional image outpainting and NuWA-Infinity [13] is mainly focused on the limited categories such as natural landscapes. Also as NuWA-Infinity [13] require high resolution training dataset and do not provide the official code, we cannot compare with it. Therefore, we compare our model with the text-guided inpainting models such as SD Inpainting model [22], Blended Latent Diffusion [2] and PowerPaint [31] which can be applied to text-guided image outpainting, and for which pre-trained models are available. We use only global caption as the text condition for the baselines with the same masking setting as ours." + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.81, + 0.789, + 0.842 + ], + "angle": 0, + "content": "Evaluation Datasets. To evaluate the text-guided image outpainting performance, we utilize image captioning datasets, MS-COCO [15], Flickr 8k [10] and" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.145, + 0.788, + 0.185 + ], + "angle": 0, + "content": "Table 1: Quantitative evaluations with baselines. \\(\\times 4\\) corresponds to the image being expanded four times, and \\(\\times 8\\) corresponds to the image being expanded eight times." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.185, + 0.784, + 0.268 + ], + "angle": 0, + "content": "
| Method | Expand ×4 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) | Expand ×8 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) |
|---|---|---|---|---|---|---|
| SD Inp [22] | 14.31 / 27.41 | 11.03 / 28.37 | 14.53 / 27.62 | 8.55 / 27.41 | 6.25 / 28.37 | 8.88 / 27.62 |
| BLD [2] | 11.88 / 27.73 | 10.78 / 28.82 | 12.79 / 27.96 | 6.39 / 27.73 | 6.86 / 28.82 | 8.11 / 27.96 |
| PP [31] | 12.91 / 27.42 | 9.75 / 28.37 | 9.88 / 27.63 | 7.37 / 27.42 | 6.01 / 28.37 | 7.15 / 27.63 |
| Ours | 16.05 / 27.94 | 11.04 / 28.83 | 15.07 / 28.07 | 9.97 / 27.94 | 7.25 / 28.83 | 9.36 / 28.07 |
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.275, + 0.787, + 0.365 + ], + "angle": 0, + "content": "UIUC Pascal [21], which are text-image paired datasets with various context. We randomly use 1,000 text-image pair samples for our evaluation on each datasets. We divided dataset into four equal parts, each comprising \\(25\\%\\) of the data, and applied masking as shown in Figure 3: top, bottom, left, and right. To generate a global caption, we use GPT-3.5 [3] based on the annotated caption, as described in Section 3.1." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.365, + 0.787, + 0.442 + ], + "angle": 0, + "content": "Evaluation Metrics. We compare our model with the baselines using CLIP-SIM [20] (average CLIP similarity between entire expanded image and global caption), and Inception score (IS) [23] as evaluation metrics. We are unable to use FID and KID evaluation metrics because we do not have the ground truth images for the extended images." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.46, + 0.427, + 0.475 + ], + "angle": 0, + "content": "4.2 Quantitative Result" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.482, + 0.787, + 0.528 + ], + "angle": 0, + "content": "To evaluate the performance of our model, we compare our model with SD Inpainting model (SD Inp) [22], Blended Latent Diffusion (BLD) [2] and PowerPaint (PP) [31] on three datasets [10, 15, 21]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.528, + 0.787, + 0.649 + ], + "angle": 0, + "content": "Image Extension \\(\\times 4\\) experiment. We expand the image four times, and the resolution of the expanded image is \\(1536 \\times 512\\) or \\(512 \\times 1536\\). As shown in Table 1, our model outperforms the baselines [2,22,31] in terms of IS [23] and CLIPSIM [20]. Since our model expands an image conditioned on a local caption generated by LLM, which represents the details within a global caption, the expanded image is faithful to the global caption while preserving its contextual coherence. However, the baseline models repetitively expand images and do not contain the rich context beyond the global caption." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.649, + 0.787, + 0.74 + ], + "angle": 0, + "content": "Image Extension \\(\\times 8\\) experiment. We expand the image eight times, and the resolution of the expanded image is \\(2560 \\times 512\\) or \\(512 \\times 2560\\). As shown in Table 1, our model shows better performance than the baseline models in IS [23] and CLIPSIM [20]. These results show that our model can maintain visual quality and global coherence while generating images with a more diverse context as it extends more images." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.758, + 0.431, + 0.774 + ], + "angle": 0, + "content": "4.3 Qualitative Analysis" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.787, + 0.841 + ], + "angle": 0, + "content": "We qualitatively analyze the generated results of our model and baselines, specifically focusing on the aspects, \"text matching\", \"image quality\", and \"global coherence\". Also we provide more generated samples with larger resolutions in the supplementary material." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.285, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "image", + "bbox": [ + 0.221, + 0.147, + 0.486, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.149, + 0.768, + 0.581 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.594, + 0.788, + 0.637 + ], + "angle": 0, + "content": "Fig. 6: Comparison of generated image results. We expand the image eight times. The expanded image has a resolution of \\(512 \\times 2560\\) or \\(2560 \\times 512\\). The red box is the given local image. We provide more samples in the supplementary material." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.643, + 0.789, + 0.794 + ], + "angle": 0, + "content": "(i) Text Matching. It is important for the expanded image to follow the context of the given global caption without repetitive patterns. According to Figure 6 (e), our model generates objects that match the content of the global caption, such as \"traffic lights\", \"wires\" and \"building\" in a harmonious manner. It extends into one consistent image that matches the global caption. However, the baselines either reflect only partial objects mentioned in the global caption or fail to match the expanded overall image with the global caption by generating repetitive images. These results show that our model can generate an expanded image maintaining global visual consistency while successfully capturing the textual context of the global caption, compared to our baselines." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.795, + 0.789, + 0.84 + ], + "angle": 0, + "content": "(ii) Image Quality. As shown in Figure 6, when expanding the image, our model shows the ability to generate clear objects in the intended direction of expansion. In contrast, the baselines [2, 22, 31] often generate blurred or indis" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.643, + 0.789, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.201 + ], + "angle": 0, + "content": "Table 2: Human evaluation with baselines. Each cell lists the winning percentage of our model versus baselines. TM is \"text matching\". IQ is \"image quality\". GC is \"global coherence\". We report only our winning percentages and omit LOSS and TIE due to space." + }, + { + "type": "table", + "bbox": [ + 0.306, + 0.212, + 0.691, + 0.352 + ], + "angle": 0, + "content": "
| Method (Expand ×4) | MS-COCO (TM / IQ / GC) | Flickr (TM / IQ / GC) | Pascal (TM / IQ / GC) |
|---|---|---|---|
| SD Inp [22] | 65.00 / 71.20 / 75.40 | 63.00 / 63.40 / 75.20 | 63.40 / 62.20 / 74.20 |
| BLD [2] | 71.60 / 73.00 / 78.40 | 71.40 / 70.80 / 77.00 | 73.20 / 69.80 / 76.40 |
| PP [31] | 71.20 / 74.40 / 75.00 | 78.10 / 73.90 / 73.00 | 73.80 / 68.00 / 70.20 |

| Method (Expand ×8) | MS-COCO (TM / IQ / GC) | Flickr (TM / IQ / GC) | Pascal (TM / IQ / GC) |
|---|---|---|---|
| SD Inp [22] | 70.40 / 75.20 / 77.80 | 69.20 / 69.40 / 78.40 | 68.20 / 68.80 / 76.20 |
| BLD [2] | 74.60 / 77.00 / 80.20 | 76.10 / 77.30 / 80.90 | 75.90 / 73.40 / 79.10 |
| PP [31] | 76.40 / 76.20 / 74.00 | 78.40 / 75.00 / 72.00 | 75.80 / 76.20 / 75.20 |
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.371, + 0.788, + 0.463 + ], + "angle": 0, + "content": "tinct objects. For instance, as depicted in Figure 6 (a), the image expanded by SD Inp [22] shows variations in the human form with each expansion, and the shapes of objects are not clear. Also, in the case of BLD [2], the objects of expanded image have distinct colors, but shapes such as bicycles and human in the image remain indistinct. These results show that our model exhibits better image quality compared to existing models when expanding images." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.463, + 0.788, + 0.63 + ], + "angle": 0, + "content": "(iii) Global Coherence. When expanding images, it is crucial to maintain the overall visual consistency of the entire image and avoid the repetitive patterns. According to Figure 6, our model expands the images exhibiting overall harmony while encompassing a variety of content. However, in the case of the baselines, repetitive patterns are present, and it fails to maintain the overall positioning or global consistency of the image. In the Figure 6 (d), our model maintains overall harmony and generates objects reflecting the expansion of the image. However, the baselines repetitively generate \"tennis players\" or \"audiences\" without maintaining the positioning or global consistency of the expanded image. These results demonstrate that our model better reflects global consistency and overall harmony compared to the baselines when expanding images." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.641, + 0.419, + 0.655 + ], + "angle": 0, + "content": "4.4 Human Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.673, + 0.788, + 0.779 + ], + "angle": 0, + "content": "Because the evaluation metrics may not perfectly measure the performance of our model, we conduct a human evaluation on Amazon Mechanical Turk (AMT). For human evaluation, we randomly sample 100 generated images from each of MS-COCO [15], Flickr 8k [10], and Pascal [21] test sets, in total 300 samples. We conduct three surveys with 5 participants to compare our model with the baselines in the aspect of the text matching (TM), image quality (IQ) and global coherence (GC)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Image Extension \\(\\times 4\\) experiment. Table 2 shows the results of human evaluation on image expansion \\(\\times 4\\). participants significantly preferred our model in terms of text matching and image quality. From a global coherence aspect, our model outperformed the baselines by a large margin. These results demonstrate" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.187 + ], + "angle": 0, + "content": "Table 3: Quantitative evaluations with ablation models. \\(\\times 4\\) corresponds to the image being expanded four times, and \\(\\times 8\\) corresponds to the image being expanded eight times." + }, + { + "type": "table", + "bbox": [ + 0.246, + 0.199, + 0.75, + 0.291 + ], + "angle": 0, + "content": "
| Method | Expand ×4 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) | Expand ×8 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) |
|---|---|---|---|---|---|---|
| w/o All | 14.67 / 27.40 | 10.90 / 28.37 | 10.66 / 27.62 | 8.37 / 27.42 | 6.04 / 28.37 | 7.14 / 27.62 |
| w/o CLIP | 14.26 / 27.53 | 10.80 / 28.70 | 13.55 / 27.74 | 8.03 / 27.53 | 7.06 / 28.70 | 8.37 / 27.74 |
| w/o LLM | 14.83 / 27.43 | 10.44 / 28.39 | 13.82 / 27.63 | 9.04 / 27.43 | 6.59 / 28.39 | 8.84 / 27.63 |
| w/o GC | 15.52 / 27.42 | 11.02 / 28.37 | 10.51 / 27.62 | 9.47 / 27.42 | 6.50 / 28.37 | 7.27 / 27.62 |
| Ours | 16.05 / 27.94 | 11.04 / 28.83 | 15.07 / 28.07 | 9.97 / 27.94 | 7.25 / 28.83 | 9.36 / 28.07 |
" + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.299, + 0.787, + 0.327 + ], + "angle": 0, + "content": "Table 4: Quantitative evaluations with baselines with the LLM. We compare with baselines with local captions generated by the LLM instead of global captions." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.338, + 0.778, + 0.421 + ], + "angle": 0, + "content": "
| Method | Expand ×4 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) | Expand ×8 MS-COCO (IS / CLIP) | Flickr (IS / CLIP) | Pascal (IS / CLIP) |
|---|---|---|---|---|---|---|
| SD Inp w/ LLM [22] | 13.74 / 27.70 | 11.01 / 28.77 | 13.68 / 27.88 | 8.59 / 27.70 | 7.19 / 28.77 | 8.79 / 27.88 |
| BLD w/ LLM [2] | 15.72 / 27.41 | 8.83 / 28.61 | 10.06 / 27.64 | 9.47 / 27.41 | 4.99 / 28.61 | 6.75 / 27.64 |
| PP w/ LLM [31] | 12.65 / 27.42 | 8.70 / 28.37 | 8.50 / 27.63 | 7.47 / 27.42 | 4.98 / 28.37 | 5.66 / 27.63 |
| Ours | 16.05 / 27.94 | 11.04 / 28.83 | 15.07 / 28.07 | 9.97 / 27.94 | 7.25 / 28.83 | 9.36 / 28.07 |
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.443, + 0.785, + 0.472 + ], + "angle": 0, + "content": "that our model reflects text alignment, image quality and visual consistency much better than the baselines." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.474, + 0.788, + 0.565 + ], + "angle": 0, + "content": "Image Extension \\(\\times 8\\) experiment. Table 2 shows the results of human evaluation on image expansion \\(\\times 8\\): similar to the human evaluation of image extension \\(\\times 4\\), participants significantly preferred our model by a substantial margin. Furthermore, the number of participants who preferred our model was higher in extension \\(\\times 8\\) than in extension \\(\\times 4\\). These results indicate that as images are expanded, our model show better performance than the baseline in all aspects." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.591, + 0.391, + 0.607 + ], + "angle": 0, + "content": "4.5 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.62, + 0.788, + 0.682 + ], + "angle": 0, + "content": "To explore the impact of the proposed components, we conduct an ablation study with different models. Also we provide the human evaluation results in the supplementary material, which show that our model is preferred than ablated models. All experimental settings are the same as in Section 4.1 and Section 4.4." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.707, + 0.789, + 0.858 + ], + "angle": 0, + "content": "Effect of the LLM guidance and CLIP visual feature. To see the effect of the LLM guidance and CLIP visual feature, we compare our model with the w/o all model which generates an image with only a global caption. In Figure 7, the w/o all model simply reflects the keywords of the global caption, while failing to maintain global consistency and diverse context. This indicates that the w/o all model expands an image repetitively that depicts the same content without considering the overall structure. As shown in Table 3, our model outperforms the w/o all model in both IS [23] and CLIPSIM [20]. This indicates that our model can expand image better than the w/o all model in aspect of image quality and text faithfulness." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.127 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.146, + 0.496, + 0.587 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.146, + 0.785, + 0.587 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.599, + 0.787, + 0.642 + ], + "angle": 0, + "content": "Fig. 7: Comparison of generated image results between our ablation models. We expand the image eight times. The expanded image has a resolution of \\(512 \\times 2560\\) or \\(2560 \\times 512\\). The red box is the given local image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.788, + 0.856 + ], + "angle": 0, + "content": "Effect of the local caption with LLM guidance. We compare our model with the w/o LLM model which generates an image with a global caption and the CLIP visual feature. In Figure 7, the w/o LLM model fails to incorporate content beyond the global caption since it is conditioned only on the global caption as a textual condition. 
Also, the extended image does not appear as a single image but rather as a collage of the images. For example, in Figure 7 (d), our model expands the image by imagining the full view of the \"baseball stadium with spectators\" whereas the w/o LLM model extends the image by repeating the \"baseball game\" image. In Table 3, our model outperforms the w/o LLM model in both IS [23] and CLIPSIM [20]. This shows that our model can expand image with better quality and text faithfulness comparing to the w/o LLM model." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.144, + 0.505, + 0.296 + ], + "angle": 0, + "content": "Table 5: Quantitative evaluations with different architectures on MS-COCO dataset. The All in MLP model gets all conditions through cross-attention using a compressed vector by the MLP \\((77\\times 768)\\). The All in cross-attention model gets all conditions directly through cross-attention \\((231\\times 768)\\). Our model gets the textual condition, a vector compressed by the MLP, and the visual condition through cross-attention \\((154\\times 768)\\)." + }, + { + "type": "table", + "bbox": [ + 0.218, + 0.308, + 0.499, + 0.365 + ], + "angle": 0, + "content": "
| Method | Expand ×4 (IS / CLIP) | Expand ×8 (IS / CLIP) |
|---|---|---|
| All in MLP | 15.57 / 27.51 | 9.11 / 27.51 |
| All in cross-attention | 15.02 / 27.42 | 9.75 / 27.42 |
| Ours | 16.05 / 27.94 | 9.97 / 27.94 |
" + }, + { + "type": "image", + "bbox": [ + 0.526, + 0.146, + 0.779, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.521, + 0.352, + 0.782, + 0.394 + ], + "angle": 0, + "content": "Fig. 8: Qualitative evaluations with different architectures The red box is the given local image." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.419, + 0.789, + 0.601 + ], + "angle": 0, + "content": "Effect of the CLIP visual feature. We compare our model with the w/o CLIP model which generates an image with a global caption and a local caption generated with the LLM. In Figure 7, comparing with our model, the w/o CLIP model often generates images with slightly lower image quality and global consistency, as it does not consider the visual feature of the overall expanded image. Figure 7 shows that the w/o CLIP model is unable to enhance the image while maintaining visual coherence. In Table 3, our model outperforms the w/o CLIP model in terms of the IS. This demonstrates that the CLIP visual feature helps the model to generate an image with better image quality. Also for CLIPSIM [20], even though the w/o CLIP model is conditioned on both global and local captions, our model generates an image that closely matches with the global caption." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.605, + 0.789, + 0.743 + ], + "angle": 0, + "content": "Effect of the global caption. We compare our model with the w/o GC model which generates an image with a local caption generated with the LLM and CLIP visual feature. Figure 7 shows that, in comparison to our model, the w/o GC model generates images that do not maintain global consistency well. Also, since it does not consider the global context of the expanded image, the expanded images fail to maintain overall harmony. In Table 3, our model outperforms the w/o GC model in terms of IS and CLIPSIM. This demonstrates that the our model can generate images that maintain global consistency by effectively reflecting the global caption." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.789, + 0.84 + ], + "angle": 0, + "content": "Effect of mask ratio. To explore various masking behaviors, we train our model on the dataset with a masking ratio of 3:1. As shown in Figure 8 (c), we found that although we can generate more content at once, it becomes more challenging to maintain global consistency when the provided(unmasked) input content gets smaller. This result demonstrates that our mask ratio is effective." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.239 + ], + "angle": 0, + "content": "Effect of LLM guidance for baselines. Our proposed method can effectively expand an image using both the LLM and the diffusion model. To explore its effectiveness, we compare our model with the baselines using local captions generated by the LLM instead of global captions. Table 4 shows that our model outperforms the baselines with the LLM. These results demonstrate the effectiveness of our architecture for this task, enhanced by the guidance of the LLM." 
+ }, + { + "type": "title", + "bbox": [ + 0.216, + 0.271, + 0.573, + 0.288 + ], + "angle": 0, + "content": "4.6 Exploring Other Model Architectures" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.307, + 0.788, + 0.412 + ], + "angle": 0, + "content": "We explore the effect of our model architecture by comparing with two alternative model architectures: 1) In the all-in MLP model, we compress the global caption, local caption and CLIP visual feature by the MLP layer, as a compressed vector \\((77 \\times 768)\\) then the model generates an image conditioned on the vector. 2) In the all-in cross attention model, we concatenate the global caption, local caption and CLIP visual feature \\((231 \\times 768)\\) then the model generates an image conditioned on the concatenated vector through the expanded U-Net." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.415, + 0.788, + 0.505 + ], + "angle": 0, + "content": "In Figure 8 (a), the all-in MLP model produces images with blurred edges and indistinct objects, likely due to difficulty in representing both textual and visual features. Figure 8 (b) shows the all-in cross-attention model generating repetitive \"berry\" images, possibly influenced by textual content. In Figure 8 (c), our model achieves semantic and visual consistency with both global and local captions." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.507, + 0.788, + 0.553 + ], + "angle": 0, + "content": "In Table 5, our model performs better than the all-in MLP and all-in cross-attention model in both IS [23] and CLIPSIM [20]. This shows that our model architecture can reflect the content of text and visual features effectively." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.587, + 0.51, + 0.604 + ], + "angle": 0, + "content": "5 Conclusion and Limitation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.629, + 0.788, + 0.841 + ], + "angle": 0, + "content": "In this work, we propose a novel zero-shot text-guided image outpainting model by addressing the two main challenges: 1) the lack of high-resolution text-image paired datasets that have rich context; 2) preserving global coherence and understanding the context. In contrast to prior research, which generates images in limited categories, we leverage the LLMs to imagine the outside scene of the given image. During inference, we utilize LLMs to generate imaginary prompts to expand images. This allows us to expand the image to arbitrary size with diverse contexts. Additionally, by conditioning on the visual context, we can maintain global consistency and spatial local context. The experimental results demonstrate that our model can extend images arbitrarily in a zero-shot manner, and it offers promising opportunities for text-guided image outpainting approaches. Our model has a limitation as it relies on a pre-trained text-to-image model, but the generated images can contain rich visual contents. For future work, we will expand to image outpainting through stories or other modalities, such as sound." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.403, + 0.163 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.284 + ], + "angle": 0, + "content": "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00608, Artificial intelligence research about multi-modal interactions for empathetic conversations with humans & No.RS-2020-II201336, Artificial Intelligence graduate school support(UNIST)) and the National Research Foundation of Korea(NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00219959)." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.308, + 0.323, + 0.323 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.34, + 0.787, + 0.395 + ], + "angle": 0, + "content": "1. Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al.: Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems 35, 23716-23736 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.396, + 0.787, + 0.437 + ], + "angle": 0, + "content": "2. Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18208-18218 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.438, + 0.787, + 0.479 + ], + "angle": 0, + "content": "3. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in neural information processing systems 33, 1877-1901 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.48, + 0.787, + 0.521 + ], + "angle": 0, + "content": "4. Cheng, Y.C., Lin, C.H., Lee, H.Y., Ren, J., Tulyakov, S., Yang, M.H.: Inout: Diverse image outpainting via gan inversion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11431-11440 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.521, + 0.787, + 0.548 + ], + "angle": 0, + "content": "5. Demir, U., Unal, G.: Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.549, + 0.787, + 0.59 + ], + "angle": 0, + "content": "6. Ding, Z., Zhang, M., Wu, J., Tu, Z.: Patched denoising diffusion models for high-resolution image synthesis. In: The Twelfth International Conference on Learning Representations (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.591, + 0.787, + 0.632 + ], + "angle": 0, + "content": "7. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: Proceedings of the seventh IEEE international conference on computer vision. vol. 2, pp. 1033-1038. IEEE (1999)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.633, + 0.787, + 0.674 + ], + "angle": 0, + "content": "8. 
Esser, P., Rombach, R., Blattmann, A., Ommer, B.: Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in neural information processing systems 34, 3518-3532 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.675, + 0.787, + 0.729 + ], + "angle": 0, + "content": "9. Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.: From images to textual prompts: Zero-shot visual question answering with frozen large language models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10867-10877 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.22, + 0.73, + 0.787, + 0.771 + ], + "angle": 0, + "content": "10. Hodosh, M., Young, P., Hockenmaier, J.: Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47, 853-899 (2013)" + }, + { + "type": "ref_text", + "bbox": [ + 0.22, + 0.772, + 0.787, + 0.799 + ], + "angle": 0, + "content": "11. Kopf, J., Kienzle, W., Drucker, S., Kang, S.B.: Quality prediction for image completion. ACM Transactions on Graphics (ToG) 31(6), 1-8 (2012)" + }, + { + "type": "ref_text", + "bbox": [ + 0.22, + 0.8, + 0.787, + 0.84 + ], + "angle": 0, + "content": "12. Li, Z., Wang, Q., Snavely, N., Kanazawa, A.: Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In: European Conference on Computer Vision. pp. 515-534. Springer (2022)" + }, + { + "type": "list", + "bbox": [ + 0.22, + 0.34, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.351, + 0.128 + ], + "angle": 0, + "content": "Kwon et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.787, + 0.204 + ], + "angle": 0, + "content": "13. Liang, J., Wu, C., Hu, X., Gan, Z., Wang, J., Wang, L., Liu, Z., Fang, Y., Duan, N.: Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. Advances in Neural Information Processing Systems 35, 15420-15432 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.205, + 0.787, + 0.233 + ], + "angle": 0, + "content": "14. Lin, C.H., Lee, H.Y., Cheng, Y.C., Tulyakov, S., Yang, M.H.: Infinitygan: Towards infinite-pixel image synthesis. arXiv preprint arXiv:2104.03963 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.234, + 0.787, + 0.289 + ], + "angle": 0, + "content": "15. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. pp. 740-755. Springer (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.29, + 0.787, + 0.318 + ], + "angle": 0, + "content": "16. Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. Advances in neural information processing systems 36 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.319, + 0.787, + 0.36 + ], + "angle": 0, + "content": "17. Liu, H., Wan, Z., Huang, W., Song, Y., Han, X., Liao, J.: Pd-gan: Probabilistic diverse gan for image inpainting. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 
9371-9381 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.361, + 0.787, + 0.403 + ], + "angle": 0, + "content": "18. Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.404, + 0.787, + 0.445 + ], + "angle": 0, + "content": "19. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.446, + 0.787, + 0.502 + ], + "angle": 0, + "content": "20. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.503, + 0.787, + 0.559 + ], + "angle": 0, + "content": "21. Rashtchian, C., Young, P., Hodosh, M., Hockenmaier, J.: Collecting image annotations using amazon's mechanical turk. In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon's Mechanical Turk. pp. 139-147 (2010)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.56, + 0.787, + 0.601 + ], + "angle": 0, + "content": "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10684-10695 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.602, + 0.787, + 0.643 + ], + "angle": 0, + "content": "23. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. Advances in neural information processing systems 29 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.644, + 0.787, + 0.699 + ], + "angle": 0, + "content": "24. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278-25294 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.701, + 0.787, + 0.756 + ], + "angle": 0, + "content": "25. Sivic, J., Kaneva, B., Torralba, A., Avidan, S., Freeman, W.T.: Creating and exploring a large photorealistic virtual space. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. pp. 1-8. IEEE (2008)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.757, + 0.787, + 0.841 + ], + "angle": 0, + "content": "26. Tsimpoukelli, M., Menick, J.L., Cabi, S., Eslami, S.M.A., Vinyals, O., Hill, F.: Multimodal few-shot learning with frozen language models. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems. vol. 34, pp. 200-212. Curran Associates, Inc. 
(2021), https://proceedings.neurips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cbf7d500438b875-Paper.pdf" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.787, + 0.841 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.284, + 0.115, + 0.733, + 0.129 + ], + "angle": 0, + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "27. Wang, M., Lai, Y.K., Liang, Y., Martin, R.R., Hu, S.M.: Biggerpicture: data-driven image extrapolation using graph matching. ACM Transactions on Graphics 33(6) (2014)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.788, + 0.231 + ], + "angle": 0, + "content": "28. Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., Wang, L.: An empirical study of gpt-3 for few-shot knowledge-based vqa. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 3081-3089 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.231, + 0.788, + 0.272 + ], + "angle": 0, + "content": "29. Yildirim, A.B., Pehlivan, H., Bilecen, B.B., Dundar, A.: Diverse inpainting and editing with gan inversion. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 23120-23130 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.273, + 0.788, + 0.314 + ], + "angle": 0, + "content": "30. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1452-1464 (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.788, + 0.356 + ], + "angle": 0, + "content": "31. Zhuang, J., Zeng, Y., Liu, W., Yuan, C., Chen, K.: A task is worth one word: Learning with task prompts for high-quality versatile image inpainting. arXiv preprint arXiv:2312.03594 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.788, + 0.356 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_origin.pdf b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..616c84fbca18be2430ea83c109488d74f5136f2a --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/b7f3f07b-6122-4084-adc4-821e20de6967_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e148d85e62258155b9c622ff90fead12529bc6f724b28db5d8fff63ee0326251 +size 4310535 diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/full.md b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d1c3366c93a5bb5856159642fa01bcc2885a1528 --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/full.md @@ -0,0 +1,240 @@ +# Zero-shot Text-guided Infinite Image Synthesis with LLM guidance + +Soyeong Kwon*, Taegyeong Lee*, and Taehwan Kim + +Artificial Intelligence Graduate School, UNIST {soyoung17, taegyeonglee, taehwankim}@unist.ac.kr + +Abstract. 
Text-guided image editing and generation methods have diverse real-world applications. However, text-guided infinite image synthesis faces several challenges. First, there is a lack of text-image paired datasets with high-resolution and contextual diversity. Second, expanding images based on text requires global coherence and rich local context understanding. Previous studies have mainly focused on limited categories, such as natural landscapes, and also required to train on high-resolution images with paired text. To address these challenges, we propose a novel approach utilizing Large Language Models (LLMs) for both global coherence and local context understanding, without any high-resolution text-image paired training dataset. We train the diffusion model to expand an image conditioned on global and local captions generated from the LLM and visual feature. At the inference stage, given an image and a global caption, we use the LLM to generate a next local caption to expand the input image. Then, we expand the image using the global caption, generated local caption and the visual feature to consider global consistency and spatial local context. In experiments, our model outperforms the baselines both quantitatively and qualitatively. Furthermore, our model demonstrates the capability of text-guided arbitrary-sized image generation in zero-shot manner with LLM guidance. + +Keywords: Image outpainting $\cdot$ Large language models (LLMs) $\cdot$ Diffusion models + +# 1 Introduction + +Recently the field of image generation has witnessed a significant advancement in synthesizing high-resolution images from text inputs. However, the existing studies [6,13,14,19] face difficulties in generating arbitrary-size image from text with diverse context because of the following challenges. Firstly, there is a lack of high-resolution text-image paired datasets with diverse contexts. Several high-resolution images [24] may not include rich context since most of them are online shopping product photos or individual portraits. Secondly, it is not just about repetitive expansion; it is essential to expand image depicting rich content based on given text description, while maintaining visual consistency [14]. Most prior + +research [4,13,14] has focused on datasets [4,30] within limited categories, such as natural landscapes. Nevertheless, in the real world, it is desirable to depict the detailed surroundings beyond a given image, guided by textual descriptions, while ensuring visual consistency with the overall context. Therefore, unlike prior image outpainting models [4,7,11-14,25] that focus on limited datasets or unconditional image outpainting, we address this issue in a zero-shot manner by shifting the image autoregressively based on diverse contexts utilizing Large Language Models (LLMs). + +Recent research [1,9,26,28] has demonstrated that LLMs can perform multimodal tasks, while understanding the visual content as text descriptions. Furthermore, as illustrated in Figure 1, we empirically find that LLMs are able to describe (and thus imagine) the scene beyond the image in text, using only the image captions. This shows that, with the LLMs, image captioning datasets can encompass diverse contexts extending beyond its resolution. + +By leveraging the capabilities of the LLMs, we propose a novel approach that can expand an image to arbitrary size without the need for high-resolution, text-image paired datasets. 
Our model leverages the LLMs to incorporate global contextual information and uses a diffusion model to generate high-quality and coherent images across various contexts. + +To address the lack of high-resolution text-image paired datasets with rich contexts, we utilize the LLMs to generate the captions that describe scenes beyond the image from the existing datasets [10, 15, 21]. We take a two-step process. As depicted in Figure 1 (a), first, we generate imaginary local captions outside of the image from the annotated caption of existing text-image paired datasets. Each of the generated captions describes details about individual unfolding scenes. Next, as shown in Figure 1 (b), we summarize the annotated caption and the generated local captions to create a global caption that describes the surroundings of the image for global and local context consistency. + +The global image caption describes the entire image beyond the local image, while the local captions provide semantic details for filling in the local masked image. We input these captions into our proposed diffusion model [22] as a textual condition to fill in the local masked image while maintaining the global context consistency as illustrated in Figure 2. + +In order to expand images guided by text while considering both global and local contexts, as illustrated in Figure 2, we train our model using global and local captions as textual conditions and CLIP [20] visual features as visual condition, with the local masked image serving as input. We make four local masked images by masking the top, bottom, left, and right sections. During inference, we expand the image gradually, by shifting patch by patch with LLM guidance. We input a generated local image into the LLM and it generates a next local caption in an autoregressive manner for expanding the image. + +Experimental results show that our model outperforms the baselines, demonstrating the ability to arbitrarily expand images in a zero-shot manner with text and generate realistic high-resolution images with rich context. + +In summary, our contributions are as follows: + +- To the best of our knowledge, we are first to propose zero-shot text-guided infinite image synthesis without training on high resolution image. We introduce a novel approach with LLM guidance for zero-shot text-guided image outpainting. + +- We can expand images preserving visual consistency by shifting local masked images in an autoregressive manner. Additionally, we can generate arbitrary-sized images that incorporate diverse contexts with global consistency by conditioning on the global caption and the local caption generated with LLM effectively. + +- In experimental results, our model outperforms baselines in both quantitative and qualitative evaluations. These results show the potential of our model for real-world applications. + +# 2 Related Work + +Image Inpainting. Text-guided image inpainting, which involves filling in a portion of an image based on input text, is closely related to text-guided image outpainting [4]. Existing image inpainting methods [2, 5, 17, 18, 22, 29] include models based on GANs and diffusion-based methods. Recently, various works [2, 8, 18, 22] have focused on enhancing inpainting capabilities across general domains with diffusion models. Stable Diffusion Inpainting [22], Blended-Latent Diffusion [2] and PowerPaint [31] involve taking an image and a mask as input and then filling in the image based on the text. 
These studies effectively edit the masked portions of given images from text, understanding the content well. + +Image Outpainting. There are various studies [4, 7, 11, 14, 25, 27] aimed at infinitely expanding images. InfinityGAN [14], a GAN-based model, proposes a method for generating arbitrarily sized images unconditionally. This approach is trained on landscape image dataset aiming to capture both local and global consistency while generate realistic arbitrarily sized images without repetitive patterns. Additionally, InOut [4], which uses GAN inversion for image outpainting, avoids the need of sequential outpainting. While previous models [4, 12-14] have attempted to address the challenging task of image outpainting, the lack of high-resolution text-image paired dataset still leads these methods to focus on limited categories, such as natural landscapes. + +Text-guided Image Outpainting. The task of arbitrarily extending images from text is more challenging than unconditional image outpainting due to the scarcity of datasets and the difficulty of maintaining global and local consistency. Nuwa-Infinity [13] successfully performs text-guided image outpainting in an autoregressive manner. However, due to the lack of high-resolution datasets containing rich content, Nuwa-Infinity, like previous studies [4, 12, 14], performs text-guided image outpainting on limited datasets [4, 30] such as nature landscapes. To the best of our knowledge, we are the first to arbitrarily expand images from general text using LLM and diffusion model in a zero-shot manner. + +![](images/9678735c1a6c9b7a4afc25f2c1dfb9773f96a111562880d55a9e52bd58e01cc6.jpg) +Fig. 1: Global caption generation with LLM for training. To address the lack of text-image paired datasets with high resolution images that have rich context, we generate our global caption from local image captions using the LLM. + +![](images/37b2566e8eea584991ba74b5310581449fc6ecab2caa0102683f05dac350b78f.jpg) +Fig. 2: Model architecture. We fine-tune the diffusion model [22] using local masked image as input, conditioned on the $W$ vector. Green boxes are trainable networks. Blue boxes are frozen networks. + +# 3 Method + +In the training stage, we train our model conditioned on a global caption, local caption, and visual features. In the inference stage, we expand the given image conditioned on the global caption, generated local caption and the visual feature. Through this approach, our model is able to perform the text-guided image outpainting task without high-resolution text-image paired datasets. + +# 3.1 Global Caption Generation for Training + +To train the model without a high-resolution text-image paired dataset, we generate imaginary global captions describing the expanded image based on the local captions using the LLM in training step. We consider a $512 \times 512$ resolution image as a local image, and an annotated caption of the image as a local caption. We generate a global caption that depicts diverse contexts from the annotated caption by leveraging the LLM. To generate a global caption, we follow two steps. Firstly, using an annotated caption as a local caption, we create imaginary local captions that describe the surroundings of the given image by using the LLM. As seen in Figure 1, in the stage (a), we input an annotated caption, "A boy and a girl playing on the beach.", to the LLM with the instruction, "Imagine caption for what happen outside of these caption without sound". 
Then the LLM generates several local captions following the content of the given caption, such as "A loving couple meanders along the sandy shores of the beach, basking in the serene + +![](images/dc7f78deedd0875f0bb0581dea4be1f8012fa2131f33861f7dce0963846b6cc5.jpg) +GT: Two bicycles are standing behind two people sitting on the grass near a body of water. + +![](images/be35b199f4c2d7b55118f56a674e0a3236d5e06ff7f2783bfd01273d26ad12a0.jpg) +Fig.3:Masked image generation. We mask the images in four directions: top,bottom,left,and right. +Fig. 4: Local caption generation during inference. Using the input image and the instruction, the LLM generates an imaginary local caption. + +ambiance." These generated local captions depict various local contexts within the expanded image by imagining the scene outside of the given local image. Next, in the stage (b), we create a global caption by summarizing the annotated caption and the generated local captions. Using the instruction, "Summarize the captions", we generate a global caption, "A beach scene with a couple strolling, playful children and a dog, people exploring shops, and two kids enjoying the sand." + +The global caption summarizes an annotated caption and a variety of imaginary local captions, thereby acquiring the global context of the image that is expanded from the local image. Also we empirically found that this two-step process can generate a global caption with more rich contents for the given local image by leveraging the LLM. + +# 3.2 Training Pipeline + +To expand images from general text, we fine-tune a pre-trained Stable Diffusion model [22]. As shown in Figure 3, first, we take local masked images $M_{l}$ , each masked on the top, bottom, left, and right. + +To maintain spatial information and global visual consistency of the images generated thus far, we input a generated global image $G_{i}$ to the CLIP [20] vision encoder to extract visual feature $E_{i}$ . Since there is no high-resolution image available in the training step, we use an unmasked area of the local masked image $M_{l}$ as the generated global image $G_{i}$ . Also, as shown in Figure 2 and Equation 1, we concatenate the embeddings $E_{g}$ of global caption $P_{g}$ with embeddings $E_{l}$ of local captions $P_{l}$ . Then we extract the fused textual feature by compressing the concatenated vector through a Multi-Layer Perceptron (MLP) composed of two linear layers. As we fine-tune our model conditioned on the compressed textual feature, our model can reflect both global and local contexts when generating images. + +Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture. + +![](images/023979a56c831df7177a4566ef3346f14dae50534e1120601ea10999df8d4253.jpg) +Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture. +Fig. 5: Inference Pipeline. We expand the local image autoregressively by conditioning on the global caption, local caption generated by the LLM and the visual feature. The figure image is generated with a 16-step process $(4608 \times 512)$ . The red box is a local masked image, and the blue box is an expanded global image that is input into the CLIP image encoder. + +$$ +E _ {t} = M L P \left(E _ {g}, E _ {l}\right), \quad W = C o n c a t \left(E _ {i}, E _ {t}\right) \tag {1} +$$ + +To consider both textual and visual information effectively, we expand the cross-attention dimension of the U-Net in the pre-trained Stable Diffusion model [2]. 
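Equation (1) can be read as the following minimal PyTorch sketch, assuming the global and local caption embeddings are concatenated along the feature dimension before the two-layer MLP; the hidden width and activation are illustrative assumptions rather than the released configuration.

```python
# Minimal sketch of Eq. (1): fuse global/local caption embeddings into E_t, then
# concatenate with the CLIP visual feature E_i to form the conditioning vector W.
import torch
import torch.nn as nn

class CaptionFusion(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        # Two linear layers compressing [E_g ; E_l] (77 x 1536) back to 77 x 768.
        # Hidden width and GELU activation are assumptions, not the paper's exact settings.
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, e_g: torch.Tensor, e_l: torch.Tensor, e_i: torch.Tensor) -> torch.Tensor:
        # e_g, e_l: (B, 77, 768) caption embeddings; e_i: (B, 77, 768) CLIP visual feature.
        e_t = self.mlp(torch.cat([e_g, e_l], dim=-1))  # fused textual feature E_t, (B, 77, 768)
        w = torch.cat([e_i, e_t], dim=1)               # conditioning vector W, (B, 154, 768)
        return w

# Shape check with dummy inputs:
fusion = CaptionFusion()
e_g, e_l, e_i = (torch.randn(1, 77, 768) for _ in range(3))
print(fusion(e_g, e_l, e_i).shape)  # torch.Size([1, 154, 768])
```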
To consider both textual and visual information effectively, we expand the cross-attention dimension of the U-Net in the pre-trained Stable Diffusion model [22]. After matching the dimension of the visual feature $E_{i}$ ($77 \times 768$) with the textual feature $E_{t}$ ($77 \times 768$), we concatenate them to create the $W$ vector ($154 \times 768$). Then we apply it as cross-attention to the U-Net. We train our model end-to-end using MSE loss, following Stable Diffusion [22]. We provide details in the supplementary material. + +Through this method, we train our model to expand the given local image to represent various contexts while maintaining visual consistency, by conditioning on the global caption, local caption, and visual features. + +# 3.3 Inference Pipeline + +We perform inference as shown in Figure 5. First, a local image and a global caption are given as input. We then apply a mask to the image in the direction of the desired expansion. Then, we generate an imaginary local caption with the LLM to fill in the local masked image. Figure 4 illustrates the process of generating an imaginary local caption. We input a local image and the instruction "Create a short sentence outside of the given image to expand this image to the left." into the LLM to generate the local caption. By providing the expansion direction with the instruction, the LLM can effectively imagine a local caption that describes the scene surrounding the given local image. + +Next, we shift the local masked image autoregressively. To expand the local image so that it incorporates the details of the local caption while considering the global semantic context, we use both the global and local captions as text conditions. After extracting the embeddings of these captions, we concatenate them and input the concatenated vector into the MLP layer. By compressing this vector, we extract the textual feature of the global and local captions, $E_{t}$ ($77 \times 768$). Additionally, to maintain visual consistency and understand the spatial information of the previously generated image, we use the CLIP image embedding of the generated global image as the visual feature, $E_{i}$ ($77 \times 768$). Then we create a conditioning vector, $W$ ($154 \times 768$), by concatenating the textual and visual features. Our model expands the image at each step conditioned on the vector $W$ through the expanded cross-attention dimension ($154 \times 768$). This enables us to generate an output image conditioned on both the textual and visual features. We can also arbitrarily extend the input local image in an autoregressive manner while maintaining global coherence and local consistency. + +# 4 Experiment + +# 4.1 Experimental Setup + +Implementation details. We use 100,000 text-image pairs from the MS-COCO [15] dataset. We construct global captions on MS-COCO [15] using GPT-3.5 [3] following Section 3.1. We fine-tune Stable Diffusion 1.5 [22] for 25 epochs with a batch size of 20, using two NVIDIA A100 GPUs. We use LLaVA 1.6 [16] to generate the local captions during inference. We provide training dataset examples in the supplementary material. + +Baselines. Since we focus on text-guided infinite image synthesis in a zero-shot manner, it is challenging to select baseline models. For example, among previous models [4, 12-14], InfinityGAN [14] performs unconditional image outpainting, and NuWA-Infinity [13] mainly focuses on limited categories such as natural landscapes. Moreover, since NuWA-Infinity [13] requires a high-resolution training dataset and does not provide official code, we cannot compare with it.
Therefore, we compare our model with text-guided inpainting models, namely the SD Inpainting model [22], Blended Latent Diffusion [2], and PowerPaint [31], which can be applied to text-guided image outpainting and for which pre-trained models are available. We use only the global caption as the text condition for the baselines, with the same masking setting as ours. + +Evaluation Datasets. To evaluate the text-guided image outpainting performance, we utilize image captioning datasets, MS-COCO [15], Flickr 8k [10] and + +Table 1: Quantitative evaluations with baselines. $\times 4$ corresponds to the image being expanded four times, and $\times 8$ corresponds to the image being expanded eight times. + +
Expand × 4:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| SD Inp [22] | 14.31 | 27.41 | 11.03 | 28.37 | 14.53 | 27.62 |
| BLD [2] | 11.88 | 27.73 | 10.78 | 28.82 | 12.79 | 27.96 |
| PP [31] | 12.91 | 27.42 | 9.75 | 28.37 | 9.88 | 27.63 |
| Ours | 16.05 | 27.94 | 11.04 | 28.83 | 15.07 | 28.07 |

Expand × 8:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| SD Inp [22] | 8.55 | 27.41 | 6.25 | 28.37 | 8.88 | 27.62 |
| BLD [2] | 6.39 | 27.73 | 6.86 | 28.82 | 8.11 | 27.96 |
| PP [31] | 7.37 | 27.42 | 6.01 | 28.37 | 7.15 | 27.63 |
| Ours | 9.97 | 27.94 | 7.25 | 28.83 | 9.36 | 28.07 |
+ +UIUC Pascal [21], which are text-image paired datasets with various contexts. We randomly use 1,000 text-image pair samples for our evaluation on each dataset. We divide each dataset into four equal parts, each comprising $25\%$ of the data, and apply masking as shown in Figure 3: top, bottom, left, and right. To generate a global caption, we use GPT-3.5 [3] based on the annotated caption, as described in Section 3.1. + +Evaluation Metrics. We compare our model with the baselines using CLIP-SIM [20] (the average CLIP similarity between the entire expanded image and the global caption) and Inception Score (IS) [23] as evaluation metrics. We are unable to use the FID and KID metrics because there are no ground-truth images for the extended images. + +# 4.2 Quantitative Results + +To evaluate the performance of our model, we compare it with the SD Inpainting model (SD Inp) [22], Blended Latent Diffusion (BLD) [2], and PowerPaint (PP) [31] on three datasets [10, 15, 21]. + +Image Extension $\times 4$ experiment. We expand the image four times, and the resolution of the expanded image is $1536 \times 512$ or $512 \times 1536$ . As shown in Table 1, our model outperforms the baselines [2, 22, 31] in terms of IS [23] and CLIP-SIM [20]. Since our model expands an image conditioned on a local caption generated by the LLM, which represents the details within the global caption, the expanded image is faithful to the global caption while preserving its contextual coherence. In contrast, the baseline models expand images repetitively and do not capture rich context beyond the global caption. + +Image Extension $\times 8$ experiment. We expand the image eight times, and the resolution of the expanded image is $2560 \times 512$ or $512 \times 2560$ . As shown in Table 1, our model shows better performance than the baseline models in IS [23] and CLIP-SIM [20]. These results show that our model can maintain visual quality and global coherence while generating images with more diverse context as the image is extended further. + +# 4.3 Qualitative Analysis + +We qualitatively analyze the generated results of our model and the baselines, focusing on three aspects: "text matching", "image quality", and "global coherence". We also provide more generated samples with larger resolutions in the supplementary material. + +![](images/78414fa6bbf36391ba4dbfa8b074a2abf3de4f19f3c4834a0ea043de94ff5972.jpg) +Fig. 6: Comparison of generated image results. We expand the image eight times. The expanded image has a resolution of $512 \times 2560$ or $2560 \times 512$ . The red box is the given local image. We provide more samples in the supplementary material. + +![](images/5e1ccdf156c814f162c2817fc1d442b37a9ccd00c2dd371dbf49244a89c2a82a.jpg) + +(i) Text Matching. It is important for the expanded image to follow the context of the given global caption without repetitive patterns. According to Figure 6 (e), our model generates objects that match the content of the global caption, such as "traffic lights", "wires" and "building", in a harmonious manner, extending into one consistent image that matches the global caption. However, the baselines either reflect only some of the objects mentioned in the global caption or fail to match the overall expanded image with the global caption, generating repetitive images. These results show that our model can generate an expanded image that maintains global visual consistency while successfully capturing the textual context of the global caption, compared to the baselines. +(ii) Image Quality.
As shown in Figure 6, when expanding the image, our model shows the ability to generate clear objects in the intended direction of expansion. In contrast, the baselines [2, 22, 31] often generate blurred or indistinct objects. + +Table 2: Human evaluation with baselines. Each cell lists the winning percentage of our model versus baselines. TM is "text matching". IQ is "image quality". GC is "global coherence". We report only our winning percentages and omit LOSS and TIE due to space. + +
Expand × 4:

| Method | MS-COCO TM | MS-COCO IQ | MS-COCO GC | Flickr TM | Flickr IQ | Flickr GC | Pascal TM | Pascal IQ | Pascal GC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SD Inp [22] | 65.00 | 71.20 | 75.40 | 63.00 | 63.40 | 75.20 | 63.40 | 62.20 | 74.20 |
| BLD [2] | 71.60 | 73.00 | 78.40 | 71.40 | 70.80 | 77.00 | 73.20 | 69.80 | 76.40 |
| PP [31] | 71.20 | 74.40 | 75.00 | 78.10 | 73.90 | 73.00 | 73.80 | 68.00 | 70.20 |

Expand × 8:

| Method | MS-COCO TM | MS-COCO IQ | MS-COCO GC | Flickr TM | Flickr IQ | Flickr GC | Pascal TM | Pascal IQ | Pascal GC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SD Inp [22] | 70.40 | 75.20 | 77.80 | 69.20 | 69.40 | 78.40 | 68.20 | 68.80 | 76.20 |
| BLD [2] | 74.60 | 77.00 | 80.20 | 76.10 | 77.30 | 80.90 | 75.90 | 73.40 | 79.10 |
| PP [31] | 76.40 | 76.20 | 74.00 | 78.40 | 75.00 | 72.00 | 75.80 | 76.20 | 75.20 |
+ +For instance, as depicted in Figure 6 (a), the image expanded by SD Inp [22] shows variations in the human form with each expansion, and the shapes of objects are not clear. Also, in the case of BLD [2], the objects in the expanded image have distinct colors, but shapes such as bicycles and humans remain indistinct. These results show that our model exhibits better image quality compared to existing models when expanding images. + +(iii) Global Coherence. When expanding images, it is crucial to maintain the overall visual consistency of the entire image and avoid repetitive patterns. According to Figure 6, our model expands images with overall harmony while encompassing a variety of content. However, in the case of the baselines, repetitive patterns are present, and they fail to maintain the overall positioning or global consistency of the image. In Figure 6 (d), our model maintains overall harmony and generates objects reflecting the expansion of the image. However, the baselines repetitively generate "tennis players" or "audiences" without maintaining the positioning or global consistency of the expanded image. These results demonstrate that our model better reflects global consistency and overall harmony compared to the baselines when expanding images. + +# 4.4 Human Evaluation + +Because the evaluation metrics may not perfectly measure the performance of our model, we conduct a human evaluation on Amazon Mechanical Turk (AMT). For the human evaluation, we randomly sample 100 generated images from each of the MS-COCO [15], Flickr 8k [10], and Pascal [21] test sets, in total 300 samples. We conduct three surveys with 5 participants to compare our model with the baselines in terms of text matching (TM), image quality (IQ), and global coherence (GC). + +Image Extension $\times 4$ experiment. Table 2 shows the results of human evaluation on image expansion $\times 4$ . Participants significantly preferred our model in terms of text matching and image quality. From a global coherence aspect, our model outperformed the baselines by a large margin. These results demonstrate + +Table 3: Quantitative evaluations with ablation models. $\times 4$ corresponds to the image being expanded four times, and $\times 8$ corresponds to the image being expanded eight times. + +
Expand × 4:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| w/o All | 14.67 | 27.40 | 10.90 | 28.37 | 10.66 | 27.62 |
| w/o CLIP | 14.26 | 27.53 | 10.80 | 28.70 | 13.55 | 27.74 |
| w/o LLM | 14.83 | 27.43 | 10.44 | 28.39 | 13.82 | 27.63 |
| w/o GC | 15.52 | 27.42 | 11.02 | 28.37 | 10.51 | 27.62 |
| Ours | 16.05 | 27.94 | 11.04 | 28.83 | 15.07 | 28.07 |

Expand × 8:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| w/o All | 8.37 | 27.42 | 6.04 | 28.37 | 7.14 | 27.62 |
| w/o CLIP | 8.03 | 27.53 | 7.06 | 28.70 | 8.37 | 27.74 |
| w/o LLM | 9.04 | 27.43 | 6.59 | 28.39 | 8.84 | 27.63 |
| w/o GC | 9.47 | 27.42 | 6.50 | 28.37 | 7.27 | 27.62 |
| Ours | 9.97 | 27.94 | 7.25 | 28.83 | 9.36 | 28.07 |
+ +Table 4: Quantitative evaluations with baselines using the LLM. We compare with baselines that use local captions generated by the LLM instead of global captions. + +
Expand × 4:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| SD Inp w/ LLM [22] | 13.74 | 27.70 | 11.01 | 28.77 | 13.68 | 27.88 |
| BLD w/ LLM [2] | 15.72 | 27.41 | 8.83 | 28.61 | 10.06 | 27.64 |
| PP w/ LLM [31] | 12.65 | 27.42 | 8.70 | 28.37 | 8.50 | 27.63 |
| Ours | 16.05 | 27.94 | 11.04 | 28.83 | 15.07 | 28.07 |

Expand × 8:

| Method | MS-COCO IS | MS-COCO CLIP | Flickr IS | Flickr CLIP | Pascal IS | Pascal CLIP |
| --- | --- | --- | --- | --- | --- | --- |
| SD Inp w/ LLM [22] | 8.59 | 27.70 | 7.19 | 28.77 | 8.79 | 27.88 |
| BLD w/ LLM [2] | 9.47 | 27.41 | 4.99 | 28.61 | 6.75 | 27.64 |
| PP w/ LLM [31] | 7.47 | 27.42 | 4.98 | 28.37 | 5.66 | 27.63 |
| Ours | 9.97 | 27.94 | 7.25 | 28.83 | 9.36 | 28.07 |
+ +that our model reflects text alignment, image quality, and visual consistency much better than the baselines. + +Image Extension $\times 8$ experiment. Table 2 shows the results of human evaluation on image expansion $\times 8$ : similar to the human evaluation of image extension $\times 4$ , participants significantly preferred our model by a substantial margin. Furthermore, the proportion of participants who preferred our model was higher in extension $\times 8$ than in extension $\times 4$ . These results indicate that as images are expanded further, our model shows better performance than the baselines in all aspects. + +# 4.5 Ablation Study + +To explore the impact of the proposed components, we conduct an ablation study with different models. We also provide human evaluation results in the supplementary material, which show that our model is preferred over the ablated models. All experimental settings are the same as in Section 4.1 and Section 4.4. + +Effect of the LLM guidance and CLIP visual feature. To see the effect of the LLM guidance and the CLIP visual feature, we compare our model with the w/o All model, which generates an image with only a global caption. In Figure 7, the w/o All model simply reflects the keywords of the global caption while failing to maintain global consistency and diverse context. This indicates that the w/o All model expands the image repetitively, depicting the same content without considering the overall structure. As shown in Table 3, our model outperforms the w/o All model in both IS [23] and CLIP-SIM [20]. This indicates that our model can expand images better than the w/o All model in terms of image quality and text faithfulness. + +![](images/f177fbf09a5aecbd121fa9a122761322b5a00ded4cc682457c311f9fc66592c0.jpg) +Fig. 7: Comparison of generated image results between our ablation models. We expand the image eight times. The expanded image has a resolution of $512 \times 2560$ or $2560 \times 512$ . The red box is the given local image. + +![](images/862953f928cfd9119bacf372a474ddef5f1225f74f9d8a30fb21a17f6cca2352.jpg) + +Effect of the local caption with LLM guidance. We compare our model with the w/o LLM model, which generates an image with a global caption and the CLIP visual feature. In Figure 7, the w/o LLM model fails to incorporate content beyond the global caption since it is conditioned only on the global caption as a textual condition. Also, the extended image does not appear as a single image but rather as a collage of images. For example, in Figure 7 (d), our model expands the image by imagining the full view of the "baseball stadium with spectators", whereas the w/o LLM model extends the image by repeating the "baseball game" image. In Table 3, our model outperforms the w/o LLM model in both IS [23] and CLIP-SIM [20]. This shows that our model can expand images with better quality and text faithfulness compared to the w/o LLM model. + +Table 5: Quantitative evaluations with different architectures on the MS-COCO dataset. The All in MLP model passes all conditions through cross-attention as a single vector compressed by the MLP $(77\times 768)$ . The All in cross-attention model passes all conditions directly through cross-attention $(231\times 768)$ . Our model passes the textual condition, a vector compressed by the MLP, together with the visual condition through cross-attention $(154\times 768)$ . + +
| Method | Expand × 4 IS | Expand × 4 CLIP | Expand × 8 IS | Expand × 8 CLIP |
| --- | --- | --- | --- | --- |
| All in MLP | 15.57 | 27.51 | 9.11 | 27.51 |
| All in cross-attention | 15.02 | 27.42 | 9.75 | 27.42 |
| Ours | 16.05 | 27.94 | 9.97 | 27.94 |
+ +![](images/5425224a7a6ed8b72d5410c38bd1e46dee4a3dd506b9be88d1959b54e5f825b3.jpg) +Fig. 8: Qualitative evaluations with different architectures. The red box is the given local image. + +Effect of the CLIP visual feature. We compare our model with the w/o CLIP model, which generates an image with a global caption and a local caption generated with the LLM. In Figure 7, compared with our model, the w/o CLIP model often generates images with slightly lower image quality and global consistency, as it does not consider the visual feature of the overall expanded image. Figure 7 shows that the w/o CLIP model is unable to enhance the image while maintaining visual coherence. In Table 3, our model outperforms the w/o CLIP model in terms of IS. This demonstrates that the CLIP visual feature helps the model generate an image with better image quality. Also, for CLIP-SIM [20], even though the w/o CLIP model is conditioned on both global and local captions, our model generates an image that more closely matches the global caption. + +Effect of the global caption. We compare our model with the w/o GC model, which generates an image with a local caption generated with the LLM and the CLIP visual feature. Figure 7 shows that, in comparison to our model, the w/o GC model generates images that do not maintain global consistency well. Also, since it does not consider the global context of the expanded image, the expanded images fail to maintain overall harmony. In Table 3, our model outperforms the w/o GC model in terms of IS and CLIP-SIM. This demonstrates that our model can generate images that maintain global consistency by effectively reflecting the global caption. + +Effect of mask ratio. To explore various masking behaviors, we train our model on the dataset with a masking ratio of 3:1. As shown in Figure 8 (c), we found that although we can generate more content at once, it becomes more challenging to maintain global consistency when the provided (unmasked) input content gets smaller. This result demonstrates that our mask ratio is effective. + +Effect of LLM guidance for baselines. Our proposed method can effectively expand an image using both the LLM and the diffusion model. To explore its effectiveness, we compare our model with the baselines using local captions generated by the LLM instead of global captions. Table 4 shows that our model outperforms the baselines with the LLM. These results demonstrate the effectiveness of our architecture for this task, enhanced by the guidance of the LLM. + +# 4.6 Exploring Other Model Architectures + +We explore the effect of our model architecture by comparing it with two alternative architectures: 1) In the all-in MLP model, we compress the global caption, local caption, and CLIP visual feature with the MLP layer into a single compressed vector $(77 \times 768)$ ; the model then generates an image conditioned on this vector. 2) In the all-in cross-attention model, we concatenate the global caption, local caption, and CLIP visual feature $(231 \times 768)$ ; the model then generates an image conditioned on the concatenated vector through the expanded U-Net. + +In Figure 8 (a), the all-in MLP model produces images with blurred edges and indistinct objects, likely due to difficulty in representing both textual and visual features. Figure 8 (b) shows the all-in cross-attention model generating repetitive "berry" images, possibly influenced by textual content. In Figure 8 (c), our model achieves semantic and visual consistency with both global and local captions.
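For clarity, the sketch below contrasts how the three architectures in Table 5 assemble the cross-attention context from the global caption embedding, local caption embedding, and CLIP visual feature (each assumed to be 77×768, following the paper); the MLP widths, activation, and function names are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

dim = 768  # assumed CLIP embedding width; each condition is (batch, 77, dim)
mlp_text = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
mlp_all = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def ours(e_g, e_l, e_i):
    """Fuse captions with the MLP, keep visual tokens separate: (batch, 154, 768)."""
    e_t = mlp_text(torch.cat([e_g, e_l], dim=-1))
    return torch.cat([e_i, e_t], dim=1)

def all_in_mlp(e_g, e_l, e_i):
    """Compress every condition through the MLP into one block: (batch, 77, 768)."""
    return mlp_all(torch.cat([e_g, e_l, e_i], dim=-1))

def all_in_cross_attention(e_g, e_l, e_i):
    """Concatenate every condition as cross-attention tokens: (batch, 231, 768)."""
    return torch.cat([e_g, e_l, e_i], dim=1)

e_g, e_l, e_i = (torch.randn(1, 77, dim) for _ in range(3))
print(ours(e_g, e_l, e_i).shape)                    # torch.Size([1, 154, 768])
print(all_in_mlp(e_g, e_l, e_i).shape)              # torch.Size([1, 77, 768])
print(all_in_cross_attention(e_g, e_l, e_i).shape)  # torch.Size([1, 231, 768])
```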
+ +In Table 5, our model performs better than the all-in MLP and all-in cross-attention models in both IS [23] and CLIP-SIM [20]. This shows that our model architecture can reflect the content of the text and the visual features effectively. + +# 5 Conclusion and Limitation + +In this work, we propose a novel zero-shot text-guided image outpainting model by addressing the two main challenges: 1) the lack of high-resolution text-image paired datasets that have rich context; 2) preserving global coherence and understanding the context. In contrast to prior research, which generates images in limited categories, we leverage LLMs to imagine the scene outside the given image. During inference, we utilize LLMs to generate imaginary prompts to expand images. This allows us to expand the image to an arbitrary size with diverse contexts. Additionally, by conditioning on the visual context, we can maintain global consistency and spatial local context. The experimental results demonstrate that our model can extend images arbitrarily in a zero-shot manner, and it offers promising opportunities for text-guided image outpainting approaches. Our model has a limitation in that it relies on a pre-trained text-to-image model, but the generated images can contain rich visual content. For future work, we will expand to image outpainting through stories or other modalities, such as sound. + +# Acknowledgements + +This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00608, Artificial intelligence research about multi-modal interactions for empathetic conversations with humans & No.RS-2020-II201336, Artificial Intelligence graduate school support (UNIST)) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00219959). + +# References + +1. Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al.: Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems 35, 23716-23736 (2022) +2. Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18208-18218 (2022) +3. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in neural information processing systems 33, 1877-1901 (2020) +4. Cheng, Y.C., Lin, C.H., Lee, H.Y., Ren, J., Tulyakov, S., Yang, M.H.: Inout: Diverse image outpainting via gan inversion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11431-11440 (2022) +5. Demir, U., Unal, G.: Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422 (2018) +6. Ding, Z., Zhang, M., Wu, J., Tu, Z.: Patched denoising diffusion models for high-resolution image synthesis. In: The Twelfth International Conference on Learning Representations (2023) +7. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: Proceedings of the seventh IEEE international conference on computer vision. vol. 2, pp. 1033-1038. IEEE (1999) +8. Esser, P., Rombach, R., Blattmann, A., Ommer, B.: Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis.
Advances in neural information processing systems 34, 3518-3532 (2021) +9. Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.: From images to textual prompts: Zero-shot visual question answering with frozen large language models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10867-10877 (2023) +10. Hodosh, M., Young, P., Hockenmaier, J.: Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47, 853-899 (2013) +11. Kopf, J., Kienzle, W., Drucker, S., Kang, S.B.: Quality prediction for image completion. ACM Transactions on Graphics (ToG) 31(6), 1-8 (2012) +12. Li, Z., Wang, Q., Snavely, N., Kanazawa, A.: Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In: European Conference on Computer Vision. pp. 515-534. Springer (2022) + +13. Liang, J., Wu, C., Hu, X., Gan, Z., Wang, J., Wang, L., Liu, Z., Fang, Y., Duan, N.: Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. Advances in Neural Information Processing Systems 35, 15420-15432 (2022) +14. Lin, C.H., Lee, H.Y., Cheng, Y.C., Tulyakov, S., Yang, M.H.: Infinitygan: Towards infinite-pixel image synthesis. arXiv preprint arXiv:2104.03963 (2021) +15. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. pp. 740-755. Springer (2014) +16. Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. Advances in neural information processing systems 36 (2024) +17. Liu, H., Wan, Z., Huang, W., Song, Y., Han, X., Liao, J.: Pd-gan: Probabilistic diverse gan for image inpainting. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 9371-9381 (2021) +18. Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021) +19. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023) +20. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021) +21. Rashtchian, C., Young, P., Hodosh, M., Hockenmaier, J.: Collecting image annotations using amazon's mechanical turk. In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon's Mechanical Turk. pp. 139-147 (2010) +22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10684-10695 (2022) +23. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. Advances in neural information processing systems 29 (2016) +24. 
Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278-25294 (2022) +25. Sivic, J., Kaneva, B., Torralba, A., Avidan, S., Freeman, W.T.: Creating and exploring a large photorealistic virtual space. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. pp. 1-8. IEEE (2008) +26. Tsimpoukelli, M., Menick, J.L., Cabi, S., Eslami, S.M.A., Vinyals, O., Hill, F.: Multimodal few-shot learning with frozen language models. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems. vol. 34, pp. 200-212. Curran Associates, Inc. (2021), https://proceedings.neurips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cbf7d500438b875-Paper.pdf + +27. Wang, M., Lai, Y.K., Liang, Y., Martin, R.R., Hu, S.M.: Biggerpicture: data-driven image extrapolation using graph matching. ACM Transactions on Graphics 33(6) (2014) +28. Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., Wang, L.: An empirical study of gpt-3 for few-shot knowledge-based vqa. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 3081-3089 (2022) +29. Yildirim, A.B., Pehlivan, H., Bilecen, B.B., Dundar, A.: Diverse inpainting and editing with gan inversion. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 23120-23130 (2023) +30. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1452-1464 (2017) +31. Zhuang, J., Zeng, Y., Liu, W., Yuan, C., Chen, K.: A task is worth one word: Learning with task prompts for high-quality versatile image inpainting. 
arXiv preprint arXiv:2312.03594 (2023) \ No newline at end of file diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/images.zip b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1b822a3019287fadadd223cc9a7f8f7765fb3c5c --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc617cd8694b8671ccc15a6170b61e85c4c7ce691ea70cc5f3dc664c9ae8a566 +size 931452 diff --git a/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/layout.json b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5189af560692bcf83f9bade47d76ddff9f0b3cfe --- /dev/null +++ b/2024/Zero-shot Text-guided Infinite Image Synthesis with LLM guidance/layout.json @@ -0,0 +1,7211 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 143, + 110, + 471, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 110, + 471, + 148 + ], + "spans": [ + { + "bbox": [ + 143, + 110, + 471, + 148 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 190, + 168, + 423, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 168, + 423, + 180 + ], + "spans": [ + { + "bbox": [ + 190, + 168, + 423, + 180 + ], + "type": "text", + "content": "Soyeong Kwon*, Taegyeong Lee*, and Taehwan Kim" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 190, + 190, + 423, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 190, + 423, + 213 + ], + "spans": [ + { + "bbox": [ + 190, + 190, + 423, + 213 + ], + "type": "text", + "content": "Artificial Intelligence Graduate School, UNIST {soyoung17, taegyeonglee, taehwankim}@unist.ac.kr" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 160, + 241, + 452, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 241, + 452, + 460 + ], + "spans": [ + { + "bbox": [ + 160, + 241, + 452, + 460 + ], + "type": "text", + "content": "Abstract. Text-guided image editing and generation methods have diverse real-world applications. However, text-guided infinite image synthesis faces several challenges. First, there is a lack of text-image paired datasets with high-resolution and contextual diversity. Second, expanding images based on text requires global coherence and rich local context understanding. Previous studies have mainly focused on limited categories, such as natural landscapes, and also required to train on high-resolution images with paired text. To address these challenges, we propose a novel approach utilizing Large Language Models (LLMs) for both global coherence and local context understanding, without any high-resolution text-image paired training dataset. We train the diffusion model to expand an image conditioned on global and local captions generated from the LLM and visual feature. At the inference stage, given an image and a global caption, we use the LLM to generate a next local caption to expand the input image. Then, we expand the image using the global caption, generated local caption and the visual feature to consider global consistency and spatial local context. 
In experiments, our model outperforms the baselines both quantitatively and qualitatively. Furthermore, our model demonstrates the capability of text-guided arbitrary-sized image generation in zero-shot manner with LLM guidance." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "spans": [ + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "text", + "content": "Keywords: Image outpainting " + }, + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "text", + "content": " Large language models (LLMs) " + }, + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 471, + 452, + 493 + ], + "type": "text", + "content": " Diffusion models" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 514, + 230, + 526 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 514, + 230, + 526 + ], + "spans": [ + { + "bbox": [ + 133, + 514, + 230, + 526 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 538, + 482, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 538, + 482, + 647 + ], + "spans": [ + { + "bbox": [ + 130, + 538, + 482, + 647 + ], + "type": "text", + "content": "Recently the field of image generation has witnessed a significant advancement in synthesizing high-resolution images from text inputs. However, the existing studies [6,13,14,19] face difficulties in generating arbitrary-size image from text with diverse context because of the following challenges. Firstly, there is a lack of high-resolution text-image paired datasets with diverse contexts. Several high-resolution images [24] may not include rich context since most of them are online shopping product photos or individual portraits. Secondly, it is not just about repetitive expansion; it is essential to expand image depicting rich content based on given text description, while maintaining visual consistency [14]. Most prior" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 653, + 382, + 666 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 653, + 382, + 666 + ], + "spans": [ + { + "bbox": [ + 133, + 653, + 382, + 666 + ], + "type": "text", + "content": "* Equal contributions (alphabetically ordered by last name.)" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "type": "text", + "content": "research [4,13,14] has focused on datasets [4,30] within limited categories, such as natural landscapes. Nevertheless, in the real world, it is desirable to depict the detailed surroundings beyond a given image, guided by textual descriptions, while ensuring visual consistency with the overall context. Therefore, unlike prior image outpainting models [4,7,11-14,25] that focus on limited datasets or unconditional image outpainting, we address this issue in a zero-shot manner by shifting the image autoregressively based on diverse contexts utilizing Large Language Models (LLMs)." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 212, + 482, + 284 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 212, + 482, + 284 + ], + "spans": [ + { + "bbox": [ + 130, + 212, + 482, + 284 + ], + "type": "text", + "content": "Recent research [1,9,26,28] has demonstrated that LLMs can perform multimodal tasks, while understanding the visual content as text descriptions. Furthermore, as illustrated in Figure 1, we empirically find that LLMs are able to describe (and thus imagine) the scene beyond the image in text, using only the image captions. This shows that, with the LLMs, image captioning datasets can encompass diverse contexts extending beyond its resolution." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 284, + 482, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 284, + 482, + 344 + ], + "spans": [ + { + "bbox": [ + 130, + 284, + 482, + 344 + ], + "type": "text", + "content": "By leveraging the capabilities of the LLMs, we propose a novel approach that can expand an image to arbitrary size without the need for high-resolution, text-image paired datasets. Our model leverages the LLMs to incorporate global contextual information and uses a diffusion model to generate high-quality and coherent images across various contexts." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 344, + 482, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 344, + 482, + 453 + ], + "spans": [ + { + "bbox": [ + 130, + 344, + 482, + 453 + ], + "type": "text", + "content": "To address the lack of high-resolution text-image paired datasets with rich contexts, we utilize the LLMs to generate the captions that describe scenes beyond the image from the existing datasets [10, 15, 21]. We take a two-step process. As depicted in Figure 1 (a), first, we generate imaginary local captions outside of the image from the annotated caption of existing text-image paired datasets. Each of the generated captions describes details about individual unfolding scenes. Next, as shown in Figure 1 (b), we summarize the annotated caption and the generated local captions to create a global caption that describes the surroundings of the image for global and local context consistency." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 453, + 482, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 453, + 482, + 512 + ], + "spans": [ + { + "bbox": [ + 130, + 453, + 482, + 512 + ], + "type": "text", + "content": "The global image caption describes the entire image beyond the local image, while the local captions provide semantic details for filling in the local masked image. We input these captions into our proposed diffusion model [22] as a textual condition to fill in the local masked image while maintaining the global context consistency as illustrated in Figure 2." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 512, + 482, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 512, + 482, + 608 + ], + "spans": [ + { + "bbox": [ + 130, + 512, + 482, + 608 + ], + "type": "text", + "content": "In order to expand images guided by text while considering both global and local contexts, as illustrated in Figure 2, we train our model using global and local captions as textual conditions and CLIP [20] visual features as visual condition, with the local masked image serving as input. 
We make four local masked images by masking the top, bottom, left, and right sections. During inference, we expand the image gradually, by shifting patch by patch with LLM guidance. We input a generated local image into the LLM and it generates a next local caption in an autoregressive manner for expanding the image." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 609, + 482, + 644 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 609, + 482, + 644 + ], + "spans": [ + { + "bbox": [ + 130, + 609, + 482, + 644 + ], + "type": "text", + "content": "Experimental results show that our model outperforms the baselines, demonstrating the ability to arbitrarily expand images in a zero-shot manner with text and generate realistic high-resolution images with rich context." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 146, + 647, + 348, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 647, + 348, + 658 + ], + "spans": [ + { + "bbox": [ + 146, + 647, + 348, + 658 + ], + "type": "text", + "content": "In summary, our contributions are as follows:" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 137, + 116, + 481, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 116, + 481, + 164 + ], + "spans": [ + { + "bbox": [ + 137, + 116, + 481, + 164 + ], + "type": "text", + "content": "- To the best of our knowledge, we are first to propose zero-shot text-guided infinite image synthesis without training on high resolution image. We introduce a novel approach with LLM guidance for zero-shot text-guided image outpainting." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 166, + 481, + 225 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 166, + 481, + 225 + ], + "spans": [ + { + "bbox": [ + 138, + 166, + 481, + 225 + ], + "type": "text", + "content": "- We can expand images preserving visual consistency by shifting local masked images in an autoregressive manner. Additionally, we can generate arbitrary-sized images that incorporate diverse contexts with global consistency by conditioning on the global caption and the local caption generated with LLM effectively." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 227, + 481, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 227, + 481, + 264 + ], + "spans": [ + { + "bbox": [ + 138, + 227, + 481, + 264 + ], + "type": "text", + "content": "- In experimental results, our model outperforms baselines in both quantitative and qualitative evaluations. These results show the potential of our model for real-world applications." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 289, + 237, + 303 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 289, + 237, + 303 + ], + "spans": [ + { + "bbox": [ + 132, + 289, + 237, + 303 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 316, + 482, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 316, + 482, + 434 + ], + "spans": [ + { + "bbox": [ + 130, + 316, + 482, + 434 + ], + "type": "text", + "content": "Image Inpainting. Text-guided image inpainting, which involves filling in a portion of an image based on input text, is closely related to text-guided image outpainting [4]. Existing image inpainting methods [2, 5, 17, 18, 22, 29] include models based on GANs and diffusion-based methods. Recently, various works [2, 8, 18, 22] have focused on enhancing inpainting capabilities across general domains with diffusion models. Stable Diffusion Inpainting [22], Blended-Latent Diffusion [2] and PowerPaint [31] involve taking an image and a mask as input and then filling in the image based on the text. These studies effectively edit the masked portions of given images from text, understanding the content well." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 436, + 482, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 436, + 482, + 556 + ], + "spans": [ + { + "bbox": [ + 130, + 436, + 482, + 556 + ], + "type": "text", + "content": "Image Outpainting. There are various studies [4, 7, 11, 14, 25, 27] aimed at infinitely expanding images. InfinityGAN [14], a GAN-based model, proposes a method for generating arbitrarily sized images unconditionally. This approach is trained on landscape image dataset aiming to capture both local and global consistency while generate realistic arbitrarily sized images without repetitive patterns. Additionally, InOut [4], which uses GAN inversion for image outpainting, avoids the need of sequential outpainting. While previous models [4, 12-14] have attempted to address the challenging task of image outpainting, the lack of high-resolution text-image paired dataset still leads these methods to focus on limited categories, such as natural landscapes." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 558, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 482, + 665 + ], + "type": "text", + "content": "Text-guided Image Outpainting. The task of arbitrarily extending images from text is more challenging than unconditional image outpainting due to the scarcity of datasets and the difficulty of maintaining global and local consistency. Nuwa-Infinity [13] successfully performs text-guided image outpainting in an autoregressive manner. However, due to the lack of high-resolution datasets containing rich content, Nuwa-Infinity, like previous studies [4, 12, 14], performs text-guided image outpainting on limited datasets [4, 30] such as nature landscapes. To the best of our knowledge, we are the first to arbitrarily expand images from general text using LLM and diffusion model in a zero-shot manner." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 138, + 116, + 475, + 192 + ], + "blocks": [ + { + "bbox": [ + 138, + 116, + 475, + 192 + ], + "lines": [ + { + "bbox": [ + 138, + 116, + 475, + 192 + ], + "spans": [ + { + "bbox": [ + 138, + 116, + 475, + 192 + ], + "type": "image", + "image_path": "9678735c1a6c9b7a4afc25f2c1dfb9773f96a111562880d55a9e52bd58e01cc6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 198, + 482, + 232 + ], + "lines": [ + { + "bbox": [ + 130, + 198, + 482, + 232 + ], + "spans": [ + { + "bbox": [ + 130, + 198, + 482, + 232 + ], + "type": "text", + "content": "Fig. 1: Global caption generation with LLM for training. To address the lack of text-image paired datasets with high resolution images that have rich context, we generate our global caption from local image captions using the LLM." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 138, + 242, + 476, + 336 + ], + "blocks": [ + { + "bbox": [ + 138, + 242, + 476, + 336 + ], + "lines": [ + { + "bbox": [ + 138, + 242, + 476, + 336 + ], + "spans": [ + { + "bbox": [ + 138, + 242, + 476, + 336 + ], + "type": "image", + "image_path": "37b2566e8eea584991ba74b5310581449fc6ecab2caa0102683f05dac350b78f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 342, + 482, + 375 + ], + "lines": [ + { + "bbox": [ + 130, + 342, + 482, + 375 + ], + "spans": [ + { + "bbox": [ + 130, + 342, + 482, + 375 + ], + "type": "text", + "content": "Fig. 2: Model architecture. We fine-tune the diffusion model [22] using local masked image as input, conditioned on the " + }, + { + "bbox": [ + 130, + 342, + 482, + 375 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 130, + 342, + 482, + 375 + ], + "type": "text", + "content": " vector. Green boxes are trainable networks. Blue boxes are frozen networks." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 397, + 202, + 410 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 397, + 202, + 410 + ], + "spans": [ + { + "bbox": [ + 131, + 397, + 202, + 410 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 413, + 482, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 413, + 482, + 474 + ], + "spans": [ + { + "bbox": [ + 130, + 413, + 482, + 474 + ], + "type": "text", + "content": "In the training stage, we train our model conditioned on a global caption, local caption, and visual features. 
In the inference stage, we expand the given image conditioned on the global caption, generated local caption and the visual feature. Through this approach, our model is able to perform the text-guided image outpainting task without high-resolution text-image paired datasets." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 490, + 362, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 490, + 362, + 503 + ], + "spans": [ + { + "bbox": [ + 131, + 490, + 362, + 503 + ], + "type": "text", + "content": "3.1 Global Caption Generation for Training" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "content": "To train the model without a high-resolution text-image paired dataset, we generate imaginary global captions describing the expanded image based on the local captions using the LLM in training step. We consider a " + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "content": " resolution image as a local image, and an annotated caption of the image as a local caption. We generate a global caption that depicts diverse contexts from the annotated caption by leveraging the LLM. To generate a global caption, we follow two steps. Firstly, using an annotated caption as a local caption, we create imaginary local captions that describe the surroundings of the given image by using the LLM. As seen in Figure 1, in the stage (a), we input an annotated caption, \"A boy and a girl playing on the beach.\", to the LLM with the instruction, \"Imagine caption for what happen outside of these caption without sound\". Then the LLM generates several local captions following the content of the given caption, such as \"A loving couple meanders along the sandy shores of the beach, basking in the serene" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 162, + 124, + 263, + 236 + ], + "blocks": [ + { + "bbox": [ + 162, + 124, + 263, + 236 + ], + "lines": [ + { + "bbox": [ + 162, + 124, + 263, + 236 + ], + "spans": [ + { + "bbox": [ + 162, + 124, + 263, + 236 + ], + "type": "image", + "image_path": "dc7f78deedd0875f0bb0581dea4be1f8012fa2131f33861f7dce0963846b6cc5.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 161, + 237, + 267, + 262 + ], + "lines": [ + { + "bbox": [ + 161, + 237, + 267, + 262 + ], + "spans": [ + { + "bbox": [ + 161, + 237, + 267, + 262 + ], + "type": "text", + "content": "GT: Two bicycles are standing behind two people sitting on the grass near a body of water." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 323, + 114, + 476, + 245 + ], + "blocks": [ + { + "bbox": [ + 132, + 274, + 297, + 307 + ], + "lines": [ + { + "bbox": [ + 132, + 274, + 297, + 307 + ], + "spans": [ + { + "bbox": [ + 132, + 274, + 297, + 307 + ], + "type": "text", + "content": "Fig.3:Masked image generation. We mask the images in four directions: top,bottom,left,and right." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 323, + 114, + 476, + 245 + ], + "lines": [ + { + "bbox": [ + 323, + 114, + 476, + 245 + ], + "spans": [ + { + "bbox": [ + 323, + 114, + 476, + 245 + ], + "type": "image", + "image_path": "be35b199f4c2d7b55118f56a674e0a3236d5e06ff7f2783bfd01273d26ad12a0.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 315, + 258, + 480, + 301 + ], + "lines": [ + { + "bbox": [ + 315, + 258, + 480, + 301 + ], + "spans": [ + { + "bbox": [ + 315, + 258, + 480, + 301 + ], + "type": "text", + "content": "Fig. 4: Local caption generation during inference. Using the input image and the instruction, the LLM generates an imaginary local caption." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 316, + 482, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 316, + 482, + 398 + ], + "spans": [ + { + "bbox": [ + 130, + 316, + 482, + 398 + ], + "type": "text", + "content": "ambiance.\" These generated local captions depict various local contexts within the expanded image by imagining the scene outside of the given local image. Next, in the stage (b), we create a global caption by summarizing the annotated caption and the generated local captions. Using the instruction, \"Summarize the captions\", we generate a global caption, \"A beach scene with a couple strolling, playful children and a dog, people exploring shops, and two kids enjoying the sand.\"" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 399, + 482, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 399, + 482, + 460 + ], + "spans": [ + { + "bbox": [ + 130, + 399, + 482, + 460 + ], + "type": "text", + "content": "The global caption summarizes an annotated caption and a variety of imaginary local captions, thereby acquiring the global context of the image that is expanded from the local image. Also we empirically found that this two-step process can generate a global caption with more rich contents for the given local image by leveraging the LLM." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 478, + 248, + 490 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 478, + 248, + 490 + ], + "spans": [ + { + "bbox": [ + 131, + 478, + 248, + 490 + ], + "type": "text", + "content": "3.2 Training Pipeline" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 498, + 481, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 498, + 481, + 533 + ], + "spans": [ + { + "bbox": [ + 130, + 498, + 481, + 533 + ], + "type": "text", + "content": "To expand images from general text, we fine-tune a pre-trained Stable Diffusion model [22]. 
As shown in Figure 3, first, we take local masked images " + }, + { + "bbox": [ + 130, + 498, + 481, + 533 + ], + "type": "inline_equation", + "content": "M_{l}" + }, + { + "bbox": [ + 130, + 498, + 481, + 533 + ], + "type": "text", + "content": ", each masked on the top, bottom, left, and right." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": "To maintain spatial information and global visual consistency of the images generated thus far, we input a generated global image " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "G_{i}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": " to the CLIP [20] vision encoder to extract visual feature " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "E_{i}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": ". Since there is no high-resolution image available in the training step, we use an unmasked area of the local masked image " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "M_{l}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": " as the generated global image " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "G_{i}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": ". Also, as shown in Figure 2 and Equation 1, we concatenate the embeddings " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "E_{g}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": " of global caption " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "P_{g}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": " with embeddings " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "E_{l}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": " of local captions " + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "inline_equation", + "content": "P_{l}" + }, + { + "bbox": [ + 130, + 534, + 482, + 666 + ], + "type": "text", + "content": ". Then we extract the fused textual feature by compressing the concatenated vector through a Multi-Layer Perceptron (MLP) composed of two linear layers. As we fine-tune our model conditioned on the compressed textual feature, our model can reflect both global and local contexts when generating images." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 160, + 117, + 459, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 117, + 459, + 125 + ], + "spans": [ + { + "bbox": [ + 160, + 117, + 459, + 125 + ], + "type": "text", + "content": "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture." + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 141, + 125, + 475, + 377 + ], + "blocks": [ + { + "bbox": [ + 141, + 125, + 475, + 377 + ], + "lines": [ + { + "bbox": [ + 141, + 125, + 475, + 377 + ], + "spans": [ + { + "bbox": [ + 141, + 125, + 475, + 377 + ], + "type": "image", + "image_path": "023979a56c831df7177a4566ef3346f14dae50534e1120601ea10999df8d4253.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 161, + 378, + 459, + 387 + ], + "lines": [ + { + "bbox": [ + 161, + 378, + 459, + 387 + ], + "spans": [ + { + "bbox": [ + 161, + 378, + 459, + 387 + ], + "type": "text", + "content": "Global caption: A sunny street scene with cyclists, diners at cafes, and traditional European architecture." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 130, + 398, + 482, + 453 + ], + "lines": [ + { + "bbox": [ + 130, + 398, + 482, + 453 + ], + "spans": [ + { + "bbox": [ + 130, + 398, + 482, + 453 + ], + "type": "text", + "content": "Fig. 5: Inference Pipeline. We expand the local image autoregressively by conditioning on the global caption, local caption generated by the LLM and the visual feature. The figure image is generated with a 16-step process " + }, + { + "bbox": [ + 130, + 398, + 482, + 453 + ], + "type": "inline_equation", + "content": "(4608 \\times 512)" + }, + { + "bbox": [ + 130, + 398, + 482, + 453 + ], + "type": "text", + "content": ". The red box is a local masked image, and the blue box is an expanded global image that is input into the CLIP image encoder." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 210, + 472, + 481, + 486 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 210, + 472, + 481, + 486 + ], + "spans": [ + { + "bbox": [ + 210, + 472, + 481, + 486 + ], + "type": "interline_equation", + "content": "E _ {t} = M L P \\left(E _ {g}, E _ {l}\\right), \\quad W = C o n c a t \\left(E _ {i}, E _ {t}\\right) \\tag {1}", + "image_path": "57bf1d42909895722c231f180ba8660ca7e2cc1aa70d119412f3b257beb53199.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "spans": [ + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": "To consider both textual and visual information effectively, we expand the cross-attention dimension of the U-Net in the pre-trained Stable Diffusion model [2]. After matching the dimension of the visual feature " + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "E_{i}" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "77 \\times 768" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": ") with the textual feature " + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "E_{t}" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "77 \\times 768" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": "), we concatenate them to create the " + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": " vector (" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "inline_equation", + "content": "154 \\times 768" + }, + { + "bbox": [ + 130, + 488, + 482, + 572 + ], + "type": "text", + "content": "). Then we apply it as cross-attention to the U-Net. We train our model end-to-end using MSE loss, following Stable Diffusion [22]. We provide detail in the supplementary material." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 573, + 482, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 573, + 482, + 608 + ], + "spans": [ + { + "bbox": [ + 130, + 573, + 482, + 608 + ], + "type": "text", + "content": "Through this method, we train our model to expand the given local image to represent various contexts while maintaining visual consistency, by conditioning on the global caption, local caption, and visual features." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 131, + 624, + 253, + 636 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 624, + 253, + 636 + ], + "spans": [ + { + "bbox": [ + 131, + 624, + 253, + 636 + ], + "type": "text", + "content": "3.3 Inference Pipeline" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 665 + ], + "type": "text", + "content": "We perform inference as shown in Figure 5. First, a local image and a global caption are inputted. We then apply a mask to the image in the direction of the" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 200 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 200 + ], + "type": "text", + "content": "desired expansion to expand this image. And then, we generate an imaginary local caption with the LLM to fill in the local masked image. Figure 4 illustrates the process of generating an imaginary local caption. We input a local image and the instruction \"Create a short sentence outside of the given image to expand this image to the left.\" into the LLM to generate the local caption. By providing the expanding direction with the instruction, the LLM can effectively imagine the local caption which describes the scene surrounding the given local image." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "spans": [ + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": "Next, we shift the local masked image autoregressively. To expand a local image that incorporates the details of the local caption while considering the global semantic context, we use both the global and local captions as text condition. After extracting the embeddings of these captions, we concatenate the vectors. Then we input the vector into the MLP layer. By compressing the vector, we extract the textual feature from global and local captions, " + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "E_{t}" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "77 \\times 768" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": "). 
Additionally, to maintain visual consistency and understand the spatial information of the previously generated image, we use the CLIP image embedding of the generated global image as the visual feature, " + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "E_{i}" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "77 \\times 768" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": "). Then we create a conditioning vector, " + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "154 \\times 768" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": ") by concatenating both textual and visual features. Our model expands an image with each step conditioning on the vector, " + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "W" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": ", with an expanded cross-attention dimension (" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "inline_equation", + "content": "154 \\times 768" + }, + { + "bbox": [ + 130, + 200, + 483, + 380 + ], + "type": "text", + "content": "). This enables us to generate an output image by considering on the textual and visual features. Also we can arbitrarily extend the input local image in an autoregressive manner while maintaining global coherence and local consistency." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 131, + 396, + 224, + 410 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 396, + 224, + 410 + ], + "spans": [ + { + "bbox": [ + 131, + 396, + 224, + 410 + ], + "type": "text", + "content": "4 Experiment" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 420, + 262, + 432 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 420, + 262, + 432 + ], + "spans": [ + { + "bbox": [ + 131, + 420, + 262, + 432 + ], + "type": "text", + "content": "4.1 Experimental Setup" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 437, + 494, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 437, + 494, + 510 + ], + "spans": [ + { + "bbox": [ + 130, + 437, + 494, + 510 + ], + "type": "text", + "content": "Implementation detail. We use 100,000 text-image pairs from the MS-COCO [15] dataset. We construct global captions on MS-COCO [15] using GPT 3.5 [3] following the Section 3.1. We fine-tune Stable Diffusion 1.5 [22] for 25 epochs with a batch size of 20, using two NVIDIA A100 GPUs. We use LLAVA 1.6 [16] to generate the local captions during the inference. We provide the training dataset examples to the supplementary material." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 510, + 482, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 641 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 641 + ], + "type": "text", + "content": "Baselines. Since we focus on text-guided infinite image synthesis in zero-shot manner, it is challenging to select the baseline models. 
For example, previous models [4, 12-14], such as InfinityGAN [14] performs the unconditional image outpainting and NuWA-Infinity [13] is mainly focused on the limited categories such as natural landscapes. Also as NuWA-Infinity [13] require high resolution training dataset and do not provide the official code, we cannot compare with it. Therefore, we compare our model with the text-guided inpainting models such as SD Inpainting model [22], Blended Latent Diffusion [2] and PowerPaint [31] which can be applied to text-guided image outpainting, and for which pre-trained models are available. We use only global caption as the text condition for the baselines with the same masking setting as ours." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 641, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 641, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 131, + 641, + 482, + 666 + ], + "type": "text", + "content": "Evaluation Datasets. To evaluate the text-guided image outpainting performance, we utilize image captioning datasets, MS-COCO [15], Flickr 8k [10] and" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 146, + 479, + 212 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "type": "text", + "content": "Table 1: Quantitative evaluations with baselines. " + }, + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "type": "text", + "content": " corresponds to the image being expanded four times, and " + }, + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 132, + 114, + 482, + 146 + ], + "type": "text", + "content": " corresponds to the image being expanded eight times." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 146, + 479, + 212 + ], + "lines": [ + { + "bbox": [ + 133, + 146, + 479, + 212 + ], + "spans": [ + { + "bbox": [ + 133, + 146, + 479, + 212 + ], + "type": "table", + "html": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
SD Inp [22]14.3127.4111.0328.3714.5327.628.5527.416.2528.378.8827.62
BLD [2]11.8827.7310.7828.8212.7927.966.3927.736.8628.828.1127.96
PP [31]12.9127.429.7528.379.8827.637.3727.426.0128.377.1527.63
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "image_path": "73ecaf89c45671c73e8f7854cc8598f2e4ae35cc7fd52f1abcdd6788bd9c8dd2.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 217, + 481, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 217, + 481, + 289 + ], + "spans": [ + { + "bbox": [ + 130, + 217, + 481, + 289 + ], + "type": "text", + "content": "UIUC Pascal [21], which are text-image paired datasets with various context. We randomly use 1,000 text-image pair samples for our evaluation on each datasets. We divided dataset into four equal parts, each comprising " + }, + { + "bbox": [ + 130, + 217, + 481, + 289 + ], + "type": "inline_equation", + "content": "25\\%" + }, + { + "bbox": [ + 130, + 217, + 481, + 289 + ], + "type": "text", + "content": " of the data, and applied masking as shown in Figure 3: top, bottom, left, and right. To generate a global caption, we use GPT-3.5 [3] based on the annotated caption, as described in Section 3.1." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 289, + 481, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 481, + 350 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 481, + 350 + ], + "type": "text", + "content": "Evaluation Metrics. We compare our model with the baselines using CLIP-SIM [20] (average CLIP similarity between entire expanded image and global caption), and Inception score (IS) [23] as evaluation metrics. We are unable to use FID and KID evaluation metrics because we do not have the ground truth images for the extended images." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 364, + 261, + 376 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 364, + 261, + 376 + ], + "spans": [ + { + "bbox": [ + 132, + 364, + 261, + 376 + ], + "type": "text", + "content": "4.2 Quantitative Result" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 381, + 481, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 381, + 481, + 418 + ], + "spans": [ + { + "bbox": [ + 130, + 381, + 481, + 418 + ], + "type": "text", + "content": "To evaluate the performance of our model, we compare our model with SD Inpainting model (SD Inp) [22], Blended Latent Diffusion (BLD) [2] and PowerPaint (PP) [31] on three datasets [10, 15, 21]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "spans": [ + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "text", + "content": "Image Extension " + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "text", + "content": " experiment. We expand the image four times, and the resolution of the expanded image is " + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "inline_equation", + "content": "1536 \\times 512" + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "inline_equation", + "content": "512 \\times 1536" + }, + { + "bbox": [ + 130, + 418, + 481, + 514 + ], + "type": "text", + "content": ". As shown in Table 1, our model outperforms the baselines [2,22,31] in terms of IS [23] and CLIPSIM [20]. 
Since our model expands an image conditioned on a local caption generated by LLM, which represents the details within a global caption, the expanded image is faithful to the global caption while preserving its contextual coherence. However, the baseline models repetitively expand images and do not contain the rich context beyond the global caption." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "spans": [ + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "text", + "content": "Image Extension " + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "text", + "content": " experiment. We expand the image eight times, and the resolution of the expanded image is " + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "inline_equation", + "content": "2560 \\times 512" + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "inline_equation", + "content": "512 \\times 2560" + }, + { + "bbox": [ + 130, + 514, + 481, + 586 + ], + "type": "text", + "content": ". As shown in Table 1, our model shows better performance than the baseline models in IS [23] and CLIPSIM [20]. These results show that our model can maintain visual quality and global coherence while generating images with a more diverse context as it extends more images." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 600, + 263, + 613 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 600, + 263, + 613 + ], + "spans": [ + { + "bbox": [ + 132, + 600, + 263, + 613 + ], + "type": "text", + "content": "4.3 Qualitative Analysis" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 617, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 481, + 666 + ], + "type": "text", + "content": "We qualitatively analyze the generated results of our model and baselines, specifically focusing on the aspects, \"text matching\", \"image quality\", and \"global coherence\". Also we provide more generated samples with larger resolutions in the supplementary material." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 135, + 116, + 297, + 459 + ], + "blocks": [ + { + "bbox": [ + 135, + 116, + 297, + 459 + ], + "lines": [ + { + "bbox": [ + 135, + 116, + 297, + 459 + ], + "spans": [ + { + "bbox": [ + 135, + 116, + 297, + 459 + ], + "type": "image", + "image_path": "78414fa6bbf36391ba4dbfa8b074a2abf3de4f19f3c4834a0ea043de94ff5972.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "lines": [ + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "spans": [ + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "type": "text", + "content": "Fig. 6: Comparison of generated image results. We expand the image eight times. The expanded image has a resolution of " + }, + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "type": "inline_equation", + "content": "512 \\times 2560" + }, + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "type": "inline_equation", + "content": "2560 \\times 512" + }, + { + "bbox": [ + 132, + 470, + 482, + 504 + ], + "type": "text", + "content": ". The red box is the given local image. We provide more samples in the supplementary material." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 306, + 118, + 470, + 460 + ], + "blocks": [ + { + "bbox": [ + 306, + 118, + 470, + 460 + ], + "lines": [ + { + "bbox": [ + 306, + 118, + 470, + 460 + ], + "spans": [ + { + "bbox": [ + 306, + 118, + 470, + 460 + ], + "type": "image", + "image_path": "5e1ccdf156c814f162c2817fc1d442b37a9ccd00c2dd371dbf49244a89c2a82a.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 509, + 482, + 665 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 130, + 509, + 482, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 509, + 482, + 628 + ], + "spans": [ + { + "bbox": [ + 130, + 509, + 482, + 628 + ], + "type": "text", + "content": "(i) Text Matching. It is important for the expanded image to follow the context of the given global caption without repetitive patterns. According to Figure 6 (e), our model generates objects that match the content of the global caption, such as \"traffic lights\", \"wires\" and \"building\" in a harmonious manner. It extends into one consistent image that matches the global caption. However, the baselines either reflect only partial objects mentioned in the global caption or fail to match the expanded overall image with the global caption by generating repetitive images. These results show that our model can generate an expanded image maintaining global visual consistency while successfully capturing the textual context of the global caption, compared to our baselines." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 629, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 629, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 629, + 482, + 665 + ], + "type": "text", + "content": "(ii) Image Quality. As shown in Figure 6, when expanding the image, our model shows the ability to generate clear objects in the intended direction of expansion. 
In contrast, the baselines [2, 22, 31] often generate blurred or indis" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 174, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 174, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 174, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 187, + 167, + 422, + 278 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 159 + ], + "type": "text", + "content": "Table 2: Human evaluation with baselines. Each cell lists the winning percentage of our model versus baselines. TM is \"text matching\". IQ is \"image quality\". GC is \"global coherence\". We report only our winning percentages and omit LOSS and TIE due to space." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 187, + 167, + 422, + 278 + ], + "lines": [ + { + "bbox": [ + 187, + 167, + 422, + 278 + ], + "spans": [ + { + "bbox": [ + 187, + 167, + 422, + 278 + ], + "type": "table", + "html": "
MethodExpand × 4
MS-COCOFlickrPascal
TMIQGCTMIQGCTMIQGC
SD Inp [22]65.0071.2075.4063.0063.4075.2063.4062.2074.20
BLD [2]71.6073.0078.4071.4070.8077.0073.2069.8076.40
PP [31]71.2074.4075.0078.1073.9073.0073.8068.0070.20
MethodExpand × 8
MS-COCOFlickrPascal
TMIQGCTMIQGCTMIQGC
SD Inp [22]70.4075.2077.8069.2069.4078.4068.2068.8076.20
BLD [2]74.6077.0080.2076.1077.3080.9075.9073.4079.10
PP [31]76.4076.2074.0078.4075.0072.0075.8076.2075.20
", + "image_path": "18b7fd86ed104159e1b6ad503a4a84e79e76ffacbf6d59c725a92544fbc70cab.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 293, + 482, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 293, + 482, + 366 + ], + "spans": [ + { + "bbox": [ + 130, + 293, + 482, + 366 + ], + "type": "text", + "content": "tinct objects. For instance, as depicted in Figure 6 (a), the image expanded by SD Inp [22] shows variations in the human form with each expansion, and the shapes of objects are not clear. Also, in the case of BLD [2], the objects of expanded image have distinct colors, but shapes such as bicycles and human in the image remain indistinct. These results show that our model exhibits better image quality compared to existing models when expanding images." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 366, + 482, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 366, + 482, + 498 + ], + "spans": [ + { + "bbox": [ + 130, + 366, + 482, + 498 + ], + "type": "text", + "content": "(iii) Global Coherence. When expanding images, it is crucial to maintain the overall visual consistency of the entire image and avoid the repetitive patterns. According to Figure 6, our model expands the images exhibiting overall harmony while encompassing a variety of content. However, in the case of the baselines, repetitive patterns are present, and it fails to maintain the overall positioning or global consistency of the image. In the Figure 6 (d), our model maintains overall harmony and generates objects reflecting the expansion of the image. However, the baselines repetitively generate \"tennis players\" or \"audiences\" without maintaining the positioning or global consistency of the expanded image. These results demonstrate that our model better reflects global consistency and overall harmony compared to the baselines when expanding images." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 131, + 507, + 256, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 507, + 256, + 518 + ], + "spans": [ + { + "bbox": [ + 131, + 507, + 256, + 518 + ], + "type": "text", + "content": "4.4 Human Evaluation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 533, + 482, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 482, + 616 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 482, + 616 + ], + "type": "text", + "content": "Because the evaluation metrics may not perfectly measure the performance of our model, we conduct a human evaluation on Amazon Mechanical Turk (AMT). For human evaluation, we randomly sample 100 generated images from each of MS-COCO [15], Flickr 8k [10], and Pascal [21] test sets, in total 300 samples. We conduct three surveys with 5 participants to compare our model with the baselines in the aspect of the text matching (TM), image quality (IQ) and global coherence (GC)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": "Image Extension " + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": " experiment. 
Table 2 shows the results of human evaluation on image expansion " + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": ". participants significantly preferred our model in terms of text matching and image quality. From a global coherence aspect, our model outperformed the baselines by a large margin. These results demonstrate" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 150, + 157, + 459, + 230 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": "Table 3: Quantitative evaluations with ablation models. " + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": " corresponds to the image being expanded four times, and " + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": " corresponds to the image being expanded eight times." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 150, + 157, + 459, + 230 + ], + "lines": [ + { + "bbox": [ + 150, + 157, + 459, + 230 + ], + "spans": [ + { + "bbox": [ + 150, + 157, + 459, + 230 + ], + "type": "table", + "html": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
w/o All14.6727.4010.9028.3710.6627.628.3727.426.0428.377.1427.62
w/o CLIP14.2627.5310.8028.7013.5527.748.0327.537.0628.708.3727.74
w/o LLM14.8327.4310.4428.3913.8227.639.0427.436.5928.398.8427.63
w/o GC15.5227.4211.0228.3710.5127.629.4727.426.5028.377.2727.62
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "image_path": "1bd89895a68a486b2a5004c90f56a921223579661078b724a79f3508c517d5bb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 133, + 267, + 476, + 333 + ], + "blocks": [ + { + "bbox": [ + 131, + 236, + 481, + 258 + ], + "lines": [ + { + "bbox": [ + 131, + 236, + 481, + 258 + ], + "spans": [ + { + "bbox": [ + 131, + 236, + 481, + 258 + ], + "type": "text", + "content": "Table 4: Quantitative evaluations with baselines with the LLM. We compare with baselines with local captions generated by the LLM instead of global captions." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 267, + 476, + 333 + ], + "lines": [ + { + "bbox": [ + 133, + 267, + 476, + 333 + ], + "spans": [ + { + "bbox": [ + 133, + 267, + 476, + 333 + ], + "type": "table", + "html": "
MethodExpand × 4Expand × 8
MS-COCOFlickrPascalMS-COCOFlickrPascal
ISCLIPISCLIPISCLIPISCLIPISCLIPISCLIP
SDInp w/ LLM [22]13.7427.7011.0128.7713.6827.888.5927.707.1928.778.7927.88
BLD w/ LLM [2]15.7227.418.8328.6110.0627.649.4727.414.9928.616.7527.64
PP w/ LLM [31]12.6527.428.7028.378.5027.637.4727.424.9828.375.6627.63
Ours16.0527.9411.0428.8315.0728.079.9727.947.2528.839.3628.07
", + "image_path": "93d57ca9ee0b10927d5fc4b79bd0e14868dc6241781c782aaf4d1c59e34ddda3.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 350, + 480, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 350, + 480, + 373 + ], + "spans": [ + { + "bbox": [ + 130, + 350, + 480, + 373 + ], + "type": "text", + "content": "that our model reflects text alignment, image quality and visual consistency much better than the baselines." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "spans": [ + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": "Image Extension " + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": " experiment. Table 2 shows the results of human evaluation on image expansion " + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": ": similar to the human evaluation of image extension " + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": ", participants significantly preferred our model by a substantial margin. Furthermore, the number of participants who preferred our model was higher in extension " + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "inline_equation", + "content": "\\times 8" + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": " than in extension " + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "inline_equation", + "content": "\\times 4" + }, + { + "bbox": [ + 130, + 375, + 482, + 447 + ], + "type": "text", + "content": ". These results indicate that as images are expanded, our model show better performance than the baseline in all aspects." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 468, + 239, + 480 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 468, + 239, + 480 + ], + "spans": [ + { + "bbox": [ + 131, + 468, + 239, + 480 + ], + "type": "text", + "content": "4.5 Ablation Study" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 491, + 482, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 491, + 482, + 540 + ], + "spans": [ + { + "bbox": [ + 130, + 491, + 482, + 540 + ], + "type": "text", + "content": "To explore the impact of the proposed components, we conduct an ablation study with different models. Also we provide the human evaluation results in the supplementary material, which show that our model is preferred than ablated models. All experimental settings are the same as in Section 4.1 and Section 4.4." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 559, + 482, + 679 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 559, + 482, + 679 + ], + "spans": [ + { + "bbox": [ + 130, + 559, + 482, + 679 + ], + "type": "text", + "content": "Effect of the LLM guidance and CLIP visual feature. 
To see the effect of the LLM guidance and CLIP visual feature, we compare our model with the w/o all model which generates an image with only a global caption. In Figure 7, the w/o all model simply reflects the keywords of the global caption, while failing to maintain global consistency and diverse context. This indicates that the w/o all model expands an image repetitively that depicts the same content without considering the overall structure. As shown in Table 3, our model outperforms the w/o all model in both IS [23] and CLIPSIM [20]. This indicates that our model can expand image better than the w/o all model in aspect of image quality and text faithfulness." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 134, + 115, + 303, + 464 + ], + "blocks": [ + { + "bbox": [ + 134, + 115, + 303, + 464 + ], + "lines": [ + { + "bbox": [ + 134, + 115, + 303, + 464 + ], + "spans": [ + { + "bbox": [ + 134, + 115, + 303, + 464 + ], + "type": "image", + "image_path": "f177fbf09a5aecbd121fa9a122761322b5a00ded4cc682457c311f9fc66592c0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "lines": [ + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "spans": [ + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "type": "text", + "content": "Fig. 7: Comparison of generated image results between our ablation models. We expand the image eight times. The expanded image has a resolution of " + }, + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "type": "inline_equation", + "content": "512 \\times 2560" + }, + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "type": "inline_equation", + "content": "2560 \\times 512" + }, + { + "bbox": [ + 130, + 474, + 481, + 508 + ], + "type": "text", + "content": ". The red box is the given local image." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 306, + 115, + 480, + 464 + ], + "blocks": [ + { + "bbox": [ + 306, + 115, + 480, + 464 + ], + "lines": [ + { + "bbox": [ + 306, + 115, + 480, + 464 + ], + "spans": [ + { + "bbox": [ + 306, + 115, + 480, + 464 + ], + "type": "image", + "image_path": "862953f928cfd9119bacf372a474ddef5f1225f74f9d8a30fb21a17f6cca2352.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 533, + 482, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 482, + 677 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 482, + 677 + ], + "type": "text", + "content": "Effect of the local caption with LLM guidance. 
We compare our model with the w/o LLM model which generates an image with a global caption and the CLIP visual feature. In Figure 7, the w/o LLM model fails to incorporate content beyond the global caption since it is conditioned only on the global caption as a textual condition. Also, the extended image does not appear as a single image but rather as a collage of the images. For example, in Figure 7 (d), our model expands the image by imagining the full view of the \"baseball stadium with spectators\" whereas the w/o LLM model extends the image by repeating the \"baseball game\" image. In Table 3, our model outperforms the w/o LLM model in both IS [23] and CLIPSIM [20]. This shows that our model can expand image with better quality and text faithfulness comparing to the w/o LLM model." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 100 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 133, + 243, + 305, + 289 + ], + "blocks": [ + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "lines": [ + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "spans": [ + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "text", + "content": "Table 5: Quantitative evaluations with different architectures on MS-COCO dataset. The All in MLP model gets all conditions through cross-attention using a compressed vector by the MLP " + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "inline_equation", + "content": "(77\\times 768)" + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "text", + "content": ". The All in cross-attention model gets all conditions directly through cross-attention " + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "inline_equation", + "content": "(231\\times 768)" + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "text", + "content": ". Our model gets the textual condition, a vector compressed by the MLP, and the visual condition through cross-attention " + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "inline_equation", + "content": "(154\\times 768)" + }, + { + "bbox": [ + 132, + 114, + 309, + 234 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 133, + 243, + 305, + 289 + ], + "lines": [ + { + "bbox": [ + 133, + 243, + 305, + 289 + ], + "spans": [ + { + "bbox": [ + 133, + 243, + 305, + 289 + ], + "type": "table", + "html": "
Expand × 4Expand × 8
ISCLIPISCLIP
All in MLP15.5727.519.1127.51
All in cross attention15.0227.429.7527.42
Ours16.0527.949.9727.94
", + "image_path": "e809381daca1a35b3b8673d04f5259ec5ff678380877a2a438b4cc5cece62590.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 321, + 115, + 476, + 277 + ], + "blocks": [ + { + "bbox": [ + 321, + 115, + 476, + 277 + ], + "lines": [ + { + "bbox": [ + 321, + 115, + 476, + 277 + ], + "spans": [ + { + "bbox": [ + 321, + 115, + 476, + 277 + ], + "type": "image", + "image_path": "5425224a7a6ed8b72d5410c38bd1e46dee4a3dd506b9be88d1959b54e5f825b3.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 318, + 278, + 478, + 312 + ], + "lines": [ + { + "bbox": [ + 318, + 278, + 478, + 312 + ], + "spans": [ + { + "bbox": [ + 318, + 278, + 478, + 312 + ], + "type": "text", + "content": "Fig. 8: Qualitative evaluations with different architectures The red box is the given local image." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 331, + 482, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 331, + 482, + 475 + ], + "spans": [ + { + "bbox": [ + 130, + 331, + 482, + 475 + ], + "type": "text", + "content": "Effect of the CLIP visual feature. We compare our model with the w/o CLIP model which generates an image with a global caption and a local caption generated with the LLM. In Figure 7, comparing with our model, the w/o CLIP model often generates images with slightly lower image quality and global consistency, as it does not consider the visual feature of the overall expanded image. Figure 7 shows that the w/o CLIP model is unable to enhance the image while maintaining visual coherence. In Table 3, our model outperforms the w/o CLIP model in terms of the IS. This demonstrates that the CLIP visual feature helps the model to generate an image with better image quality. Also for CLIPSIM [20], even though the w/o CLIP model is conditioned on both global and local captions, our model generates an image that closely matches with the global caption." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 479, + 482, + 588 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 479, + 482, + 588 + ], + "spans": [ + { + "bbox": [ + 130, + 479, + 482, + 588 + ], + "type": "text", + "content": "Effect of the global caption. We compare our model with the w/o GC model which generates an image with a local caption generated with the LLM and CLIP visual feature. Figure 7 shows that, in comparison to our model, the w/o GC model generates images that do not maintain global consistency well. Also, since it does not consider the global context of the expanded image, the expanded images fail to maintain overall harmony. In Table 3, our model outperforms the w/o GC model in terms of IS and CLIPSIM. This demonstrates that the our model can generate images that maintain global consistency by effectively reflecting the global caption." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "content": "Effect of mask ratio. To explore various masking behaviors, we train our model on the dataset with a masking ratio of 3:1. 
As shown in Figure 8 (c), we found that although we can generate more content at once, it becomes more challenging to maintain global consistency when the provided(unmasked) input content gets smaller. This result demonstrates that our mask ratio is effective." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 189 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 189 + ], + "type": "text", + "content": "Effect of LLM guidance for baselines. Our proposed method can effectively expand an image using both the LLM and the diffusion model. To explore its effectiveness, we compare our model with the baselines using local captions generated by the LLM instead of global captions. Table 4 shows that our model outperforms the baselines with the LLM. These results demonstrate the effectiveness of our architecture for this task, enhanced by the guidance of the LLM." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 214, + 350, + 228 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 214, + 350, + 228 + ], + "spans": [ + { + "bbox": [ + 132, + 214, + 350, + 228 + ], + "type": "text", + "content": "4.6 Exploring Other Model Architectures" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "spans": [ + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "text", + "content": "We explore the effect of our model architecture by comparing with two alternative model architectures: 1) In the all-in MLP model, we compress the global caption, local caption and CLIP visual feature by the MLP layer, as a compressed vector " + }, + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "inline_equation", + "content": "(77 \\times 768)" + }, + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "text", + "content": " then the model generates an image conditioned on the vector. 2) In the all-in cross attention model, we concatenate the global caption, local caption and CLIP visual feature " + }, + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "inline_equation", + "content": "(231 \\times 768)" + }, + { + "bbox": [ + 130, + 243, + 482, + 326 + ], + "type": "text", + "content": " then the model generates an image conditioned on the concatenated vector through the expanded U-Net." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 328, + 482, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 328, + 482, + 399 + ], + "spans": [ + { + "bbox": [ + 130, + 328, + 482, + 399 + ], + "type": "text", + "content": "In Figure 8 (a), the all-in MLP model produces images with blurred edges and indistinct objects, likely due to difficulty in representing both textual and visual features. Figure 8 (b) shows the all-in cross-attention model generating repetitive \"berry\" images, possibly influenced by textual content. In Figure 8 (c), our model achieves semantic and visual consistency with both global and local captions." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 401, + 482, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 401, + 482, + 437 + ], + "spans": [ + { + "bbox": [ + 130, + 401, + 482, + 437 + ], + "type": "text", + "content": "In Table 5, our model performs better than the all-in MLP and all-in cross-attention model in both IS [23] and CLIPSIM [20]. This shows that our model architecture can reflect the content of text and visual features effectively." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 464, + 312, + 478 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 464, + 312, + 478 + ], + "spans": [ + { + "bbox": [ + 132, + 464, + 312, + 478 + ], + "type": "text", + "content": "5 Conclusion and Limitation" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 498, + 482, + 666 + ], + "type": "text", + "content": "In this work, we propose a novel zero-shot text-guided image outpainting model by addressing the two main challenges: 1) the lack of high-resolution text-image paired datasets that have rich context; 2) preserving global coherence and understanding the context. In contrast to prior research, which generates images in limited categories, we leverage the LLMs to imagine the outside scene of the given image. During inference, we utilize LLMs to generate imaginary prompts to expand images. This allows us to expand the image to arbitrary size with diverse contexts. Additionally, by conditioning on the visual context, we can maintain global consistency and spatial local context. The experimental results demonstrate that our model can extend images arbitrarily in a zero-shot manner, and it offers promising opportunities for text-guided image outpainting approaches. Our model has a limitation as it relies on a pre-trained text-to-image model, but the generated images can contain rich visual contents. For future work, we will expand to image outpainting through stories or other modalities, such as sound." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 224 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 224 + ], + "type": "text", + "content": "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00608, Artificial intelligence research about multi-modal interactions for empathetic conversations with humans & No.RS-2020-II201336, Artificial Intelligence graduate school support(UNIST)) and the National Research Foundation of Korea(NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00219959)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 243, + 197, + 255 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 243, + 197, + 255 + ], + "spans": [ + { + "bbox": [ + 133, + 243, + 197, + 255 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 134, + 269, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 138, + 269, + 481, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 269, + 481, + 312 + ], + "spans": [ + { + "bbox": [ + 138, + 269, + 481, + 312 + ], + "type": "text", + "content": "1. Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al.: Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems 35, 23716-23736 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 313, + 481, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 313, + 481, + 346 + ], + "spans": [ + { + "bbox": [ + 138, + 313, + 481, + 346 + ], + "type": "text", + "content": "2. Avrahami, O., Lischinski, D., Fried, O.: Blended diffusion for text-driven editing of natural images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18208-18218 (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 346, + 481, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 346, + 481, + 379 + ], + "spans": [ + { + "bbox": [ + 138, + 346, + 481, + 379 + ], + "type": "text", + "content": "3. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in neural information processing systems 33, 1877-1901 (2020)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 380, + 481, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 380, + 481, + 412 + ], + "spans": [ + { + "bbox": [ + 138, + 380, + 481, + 412 + ], + "type": "text", + "content": "4. Cheng, Y.C., Lin, C.H., Lee, H.Y., Ren, J., Tulyakov, S., Yang, M.H.: Inout: Diverse image outpainting via gan inversion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 
11431-11440 (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 412, + 481, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 412, + 481, + 434 + ], + "spans": [ + { + "bbox": [ + 138, + 412, + 481, + 434 + ], + "type": "text", + "content": "5. Demir, U., Unal, G.: Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422 (2018)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "spans": [ + { + "bbox": [ + 138, + 434, + 481, + 467 + ], + "type": "text", + "content": "6. Ding, Z., Zhang, M., Wu, J., Tu, Z.: Patched denoising diffusion models for high-resolution image synthesis. In: The Twelfth International Conference on Learning Representations (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "spans": [ + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "type": "text", + "content": "7. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: Proceedings of the seventh IEEE international conference on computer vision. vol. 2, pp. 1033-1038. IEEE (1999)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "spans": [ + { + "bbox": [ + 138, + 501, + 481, + 533 + ], + "type": "text", + "content": "8. Esser, P., Rombach, R., Blattmann, A., Ommer, B.: Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in neural information processing systems 34, 3518-3532 (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 534, + 481, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 534, + 481, + 577 + ], + "spans": [ + { + "bbox": [ + 138, + 534, + 481, + 577 + ], + "type": "text", + "content": "9. Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.: From images to textual prompts: Zero-shot visual question answering with frozen large language models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10867-10877 (2023)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 134, + 578, + 481, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 578, + 481, + 610 + ], + "spans": [ + { + "bbox": [ + 134, + 578, + 481, + 610 + ], + "type": "text", + "content": "10. Hodosh, M., Young, P., Hockenmaier, J.: Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research 47, 853-899 (2013)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 134, + 611, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 611, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 134, + 611, + 481, + 632 + ], + "type": "text", + "content": "11. Kopf, J., Kienzle, W., Drucker, S., Kang, S.B.: Quality prediction for image completion. 
ACM Transactions on Graphics (ToG) 31(6), 1-8 (2012)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 134, + 633, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 633, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 134, + 633, + 481, + 665 + ], + "type": "text", + "content": "12. Li, Z., Wang, Q., Snavely, N., Kanazawa, A.: Infinitenature-zero: Learning perpetual view generation of natural scenes from single images. In: European Conference on Computer Vision. pp. 515-534. Springer (2022)" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 666 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 133, + 116, + 481, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 481, + 161 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 481, + 161 + ], + "type": "text", + "content": "13. Liang, J., Wu, C., Hu, X., Gan, Z., Wang, J., Wang, L., Liu, Z., Fang, Y., Duan, N.: Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis. Advances in Neural Information Processing Systems 35, 15420-15432 (2022)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 162, + 481, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 162, + 481, + 184 + ], + "spans": [ + { + "bbox": [ + 132, + 162, + 481, + 184 + ], + "type": "text", + "content": "14. Lin, C.H., Lee, H.Y., Cheng, Y.C., Tulyakov, S., Yang, M.H.: Infinitygan: Towards infinite-pixel image synthesis. arXiv preprint arXiv:2104.03963 (2021)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 185, + 481, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 185, + 481, + 228 + ], + "spans": [ + { + "bbox": [ + 132, + 185, + 481, + 228 + ], + "type": "text", + "content": "15. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. pp. 740-755. Springer (2014)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 229, + 481, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 229, + 481, + 251 + ], + "spans": [ + { + "bbox": [ + 132, + 229, + 481, + 251 + ], + "type": "text", + "content": "16. Liu, H., Li, C., Wu, Q., Lee, Y.J.: Visual instruction tuning. 
Advances in neural information processing systems 36 (2024)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 252, + 481, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 252, + 481, + 285 + ], + "spans": [ + { + "bbox": [ + 132, + 252, + 481, + 285 + ], + "type": "text", + "content": "17. Liu, H., Wan, Z., Huang, W., Song, Y., Han, X., Liao, J.: Pd-gan: Probabilistic diverse gan for image inpainting. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 9371-9381 (2021)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 285, + 481, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 285, + 481, + 319 + ], + "spans": [ + { + "bbox": [ + 132, + 285, + 481, + 319 + ], + "type": "text", + "content": "18. Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., Chen, M.: Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 319, + 481, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 319, + 481, + 352 + ], + "spans": [ + { + "bbox": [ + 132, + 319, + 481, + 352 + ], + "type": "text", + "content": "19. Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., Rombach, R.: Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 (2023)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 353, + 481, + 397 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 353, + 481, + 397 + ], + "spans": [ + { + "bbox": [ + 132, + 353, + 481, + 397 + ], + "type": "text", + "content": "20. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748-8763. PMLR (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 398, + 481, + 442 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 398, + 481, + 442 + ], + "spans": [ + { + "bbox": [ + 132, + 398, + 481, + 442 + ], + "type": "text", + "content": "21. Rashtchian, C., Young, P., Hodosh, M., Hockenmaier, J.: Collecting image annotations using amazon's mechanical turk. In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon's Mechanical Turk. pp. 139-147 (2010)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 443, + 481, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 443, + 481, + 475 + ], + "spans": [ + { + "bbox": [ + 132, + 443, + 481, + 475 + ], + "type": "text", + "content": "22. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10684-10695 (2022)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "spans": [ + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "type": "text", + "content": "23. 
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. Advances in neural information processing systems 29 (2016)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 510, + 481, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 510, + 481, + 553 + ], + "spans": [ + { + "bbox": [ + 132, + 510, + 481, + 553 + ], + "type": "text", + "content": "24. Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al.: Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, 25278-25294 (2022)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 555, + 481, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 555, + 481, + 598 + ], + "spans": [ + { + "bbox": [ + 132, + 555, + 481, + 598 + ], + "type": "text", + "content": "25. Sivic, J., Kaneva, B., Torralba, A., Avidan, S., Freeman, W.T.: Creating and exploring a large photorealistic virtual space. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. pp. 1-8. IEEE (2008)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 599, + 481, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 599, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 132, + 599, + 481, + 666 + ], + "type": "text", + "content": "26. Tsimpoukelli, M., Menick, J.L., Cabi, S., Eslami, S.M.A., Vinyals, O., Hill, F.: Multimodal few-shot learning with frozen language models. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems. vol. 34, pp. 200-212. Curran Associates, Inc. (2021), https://proceedings.neurips.cc/paper_files/paper/2021/file/01b7575c38dac42f3cbf7d500438b875-Paper.pdf" + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 214, + 101 + ], + "type": "text", + "content": "Kwon et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 482, + 281 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "type": "text", + "content": "27. Wang, M., Lai, Y.K., Liang, Y., Martin, R.R., Hu, S.M.: Biggerpicture: data-driven image extrapolation using graph matching. ACM Transactions on Graphics 33(6) (2014)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 482, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 482, + 182 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 482, + 182 + ], + "type": "text", + "content": "28. 
Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., Wang, L.: An empirical study of gpt-3 for few-shot knowledge-based vqa. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 3081-3089 (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 182, + 482, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 182, + 482, + 215 + ], + "spans": [ + { + "bbox": [ + 132, + 182, + 482, + 215 + ], + "type": "text", + "content": "29. Yildirim, A.B., Pehlivan, H., Bilecen, B.B., Dundar, A.: Diverse inpainting and editing with gan inversion. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 23120-23130 (2023)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "spans": [ + { + "bbox": [ + 132, + 216, + 482, + 248 + ], + "type": "text", + "content": "30. Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence 40(6), 1452-1464 (2017)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 249, + 482, + 281 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 482, + 281 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 482, + 281 + ], + "type": "text", + "content": "31. Zhuang, J., Zeng, Y., Liu, W., Yuan, C., Chen, K.: A task is worth one word: Learning with task prompts for high-quality versatile image inpainting. arXiv preprint arXiv:2312.03594 (2023)" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "spans": [ + { + "bbox": [ + 173, + 91, + 448, + 102 + ], + "type": "text", + "content": "Zero-shot Text-guided Infinite Image Synthesis with LLM guidance" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_content_list.json b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..58b86f876921f90be67f1866cacf03e805048ac8 --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_content_list.json @@ -0,0 +1,1716 @@ +[ + { + "type": "text", + "text": "ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video", + "text_level": 1, + "bbox": [ + 238, + 140, + 767, + 186 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Xinhao Li $^{1,2}$ , Yuhan Zhu $^{1}$ , and Limin Wang $^{1,2*}$", + "bbox": [ + 310, + 210, + 691, + 228 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 State Key Laboratory for Novel Software Technology, Nanjing 
University", + "bbox": [ + 248, + 238, + 754, + 253 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2 Shanghai AI Laboratory", + "bbox": [ + 411, + 253, + 591, + 268 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "xinhaoli00@outlook.com zyuhan0812@gmail.com lmwang@nju.edu.cn", + "bbox": [ + 248, + 268, + 753, + 282 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://github.com/MCG-NJU/ZeroI2V", + "bbox": [ + 367, + 282, + 635, + 295 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus toward parameter-efficient image-to-video adaptation. However, these adaptation strategies inevitably introduce extra computational costs to deal with the domain gap and temporal modeling in videos. In this paper, we present a new adaptation paradigm (ZeroI2V) to transfer the image transformers to video recognition tasks (i.e., introduce zero extra cost to the original models during inference). To achieve this goal, we present two core designs. First, to capture the dynamics in videos and reduce the difficulty of image-to-video adaptation, we exploit the flexibility of self-attention and introduce spatial-temporal dual-headed attention (STDHA). This approach efficiently endows the image transformers with temporal modeling capability at zero extra parameters and computation. Second, to handle the domain gap between images and videos, we propose a linear adaption strategy that utilizes lightweight densely placed linear adapters to fully transfer the frozen image models to video recognition. Thanks to the customized linear design, all newly added adapters could be easily merged with the original modules through structural reparameterization after training, enabling zero extra cost during inference. Extensive experiments on representative fully-supervised and few-shot video recognition benchmarks showcase that ZeroI2V can match or even outperform previous state-of-the-art methods while enjoying superior parameter and inference efficiency.", + "bbox": [ + 259, + 335, + 743, + 681 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Video understanding $\\cdot$ Image-to-video adaptation $\\cdot$ PEFT", + "bbox": [ + 261, + 695, + 736, + 709 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 215, + 737, + 377, + 752 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Adapting pre-trained foundation models such as BERT [11] and GPT [5, 52, 53] through efficient strategies has yielded excellent performance on downstream tasks in natural language understanding. This new paradigm is becoming popular in", + "bbox": [ + 212, + 768, + 787, + 816 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Corresponding author.", + "bbox": [ + 217, + 824, + 385, + 840 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/9d885033d26f55992ceb6d9dd3af76d211ba17d868f74ae09e14e0fe9ce020f5.jpg", + "image_caption": [ + "Fig. 1: Left: Our proposed image-to-video transfer learning method. Right: Comparison of PETL methods on SSv2 validation set. For a more intuitive comparison, the views of the methods in the figure are all $8 \\times 3 \\times 1$ . 
Two core techniques enable us to achieve superior performance on video tasks without introducing additional computation and parameters during inference." + ], + "image_footnote": [], + "bbox": [ + 243, + 146, + 498, + 320 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/3a140e021a7c64e10bc70d0d3348dd3dab076997e9ac9bc9482cb5ef8c8708d3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 509, + 148, + 771, + 320 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "computer vision due to the available pre-trained image models such as CLIP [51] and DINO [7, 47]. These models could be easily adapted to downstream tasks through linear probes, fine-tuning, or even zero-shot recognition, exhibiting robustness and strong transfer capabilities similar to those of large-scale language models. Recently, parameter-efficient transfer learning (PETL) [9,23,38,46,48,78] is becoming an efficient paradigm to adapt these large pre-trained models due to their huge numbers of parameters and high computational cost of full fine-tuning.", + "bbox": [ + 212, + 440, + 787, + 547 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For video understanding, there exist several large pre-trained video models [56, 59] from self-supervised learning, but these models are of high computational complexity due to the joint spatiotemporal attentions. Therefore, adapting pretrained image models to the video domain through efficient strategies is still a practical solution to video recognition. In fact, the state-of-the-art video networks have long relied on the pre-trained image models by inflating the kernels [1,8,39,41] or inserting plug-and-play temporal modules [33,37,42,60,61]. However, most of these methods necessitate full fine-tuning, which involves updating all the model parameters during training on video datasets. As the scale of pre-trained models increases, full fine-tuning becomes impractical due to the high training costs and the risk of overfitting or even catastrophic forgetting when the downstream data is limited. In addition, these methods often inevitably introduce extra costs to the adapted video models due to these newly added modules.", + "bbox": [ + 212, + 549, + 787, + 746 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we aim to present a new efficient paradigm of adapting image transformers to video downstream tasks with two main objectives. First, inspired by the PETL methods in NLP [21,22,26,31] and image understanding [9,23,46], we aim to devise a parameter-efficient transfer technique from image to video, which can effectively reduce the risk of over-fitting and greatly improve the training efficiency. Second, to overcome the issue of high computation in the adapted", + "bbox": [ + 212, + 750, + 787, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "video models, we try to present a new adaptation method without introducing any extra computations to the final video models during inference. 
This zero extra inference cost adaptation would allow for more efficient deployment of transferred video models in real applications.", + "bbox": [ + 212, + 146, + 787, + 205 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To achieve the above two objectives, we propose a novel transfer learning method (as shown in Figure 1) that can utilize the off-the-shelf pre-trained image transformers to achieve excellent performance on video tasks without additional parameters and computation during inference. To be specific, for the temporal modeling required for video tasks, we transform multi-head self-attention into spatio-temporal dual-head attention (STDHA) by reassigning some heads to achieve temporal modeling at zero computation and zero parameters. For image-to-video transfer, we explore the strategy of using linear adapters to fully adapt the parameters of each part of the model and merge them with the frozen original parameters through structural reparameterization after training, thus achieving zero extra cost during inference.", + "bbox": [ + 212, + 207, + 789, + 372 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To summarize, we make the following contributions: 1) We propose a new approach for parameter-efficient image-to-video transfer learning that can achieve the efficient adaptation of transformers from image to video without introducing additional computation and parameters during inference. 2) We introduce a novel attention mechanism named Spatial-Temporal Dual-Headed Attention (STDHA), which utilizes the flexibility of self-attention to achieve temporal modeling without introducing extra computation and parameters. 3) To the best of our knowledge, we are the first to investigate the achievement of zero extra inference cost image-to-video adaptation through the utilization of a linear structure. We establish an empirical study by conducting extensive experiments with a diverse range of adaptation strategies. 4) Our method achieves comparable or even better performance than state-of-the-art methods on popular fully-supervised and few-shot video recognition benchmarks while enjoying the advantage of parameter and inference efficiency.", + "bbox": [ + 212, + 375, + 789, + 587 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related work", + "text_level": 1, + "bbox": [ + 215, + 609, + 382, + 625 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Pre-trained image transformers The powerful scalability of ViT [12] brings more possibilities to the pre-trained image model. In addition to the traditional supervised approach [12,40,73], recent works [3,7,18,19,47] utilize self-supervised learning to effectively learn representations from unlabeled data. Moreover, several works [10,27,51,57] adopt large-scale multi-modal data (e.g., text-image pairs) to learn visual representations with great transferability. Our proposed adaptation strategy can leverage these off-the-shelf pre-trained image transformers to achieve outstanding performance on video tasks.", + "bbox": [ + 212, + 642, + 787, + 762 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Video action recognition is crucial for downstream tasks [55, 79]. Traditionally, state-of-the-art methods have long relied on image models. 
Previous works for action recognition can be classified into two categories: one is to extend the image model for spatial-temporal modeling by inflating weights and structures [8, 13-15, 28, 34, 41], while the other is to directly utilize the image model as the", + "bbox": [ + 212, + 763, + 787, + 839 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "backbone and insert plug-and-play modules for temporal modeling [37, 42, 60, 61, 77]. Following the success of new training paradigms in image understanding, several works have attempted to learn transferable video representations via self-supervised learning [43, 56, 59, 63] or multi-modal video-text pre-training [29, 30, 45, 62]. However, the above methods usually require full fine-tuning of the entire model or training from scratch, resulting in high training costs and additional computational overhead. In this work, we avoid the above problems by adapting the pre-trained image transformers to video tasks in an efficient manner.", + "bbox": [ + 212, + 146, + 787, + 267 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Parameter-efficient transfer learning To address the issue of training inefficiency caused by the continuous growth of model size, Parameter-efficient transfer learning (PETL) is initially introduced in NLP [21, 22, 26, 31, 49, 50, 72] and subsequently applied to vision tasks [9, 20, 23, 36, 46, 68, 69, 78]. These techniques aim to achieve comparable or even superior performance on other tasks by fine-tuning only a small subset of trainable parameters. Most PETL methods [9, 20, 23, 36, 46, 76, 78] in vision domain are limited to transfer within the same modality (e.g., image-to-image or video-to-video). In contrast, our research focuses on image-to-video transfer learning. Despite progress made by recent studies [38, 48, 71], these methods require additional computation and parameters for temporal modeling of video tasks and image-to-video adaptation. For example, AVL [38] incorporates an additional temporal transformer decoder, while ST-Adapter [48] introduces additional adapters with depth-wise 3D convolution layers. Similarly, AIM [71] adds extra adapters and necessitates an additional time attention calculation at each block. In contrast to previous works, our proposed method eschews the introduction of additional computation or parameters during inference, yet still achieves comparable or superior performance compared to previous methods.", + "bbox": [ + 212, + 268, + 787, + 541 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Methodology", + "text_level": 1, + "bbox": [ + 215, + 565, + 380, + 583 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, we first briefly revisit the basic block of ViT (Sec. 3.1), and then discuss how to utilize the flexibility of self-attention to achieve temporal modeling without introducing additional computation and parameters (Sec. 3.2). Finally, we explain how we implement zero-cost image-to-video adaptation with a serial linear structure (Sec. 
3.3).", + "bbox": [ + 212, + 599, + 787, + 676 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Preliminary", + "text_level": 1, + "bbox": [ + 215, + 700, + 362, + 715 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The original ViT [12] block consists of two network layers: multi-head self-attention (MHSA) and multi-layer perceptron (MLP). As shown in Figure 1, a ViT block consists of MHSA and MLP connected in series in a residual structure:", + "bbox": [ + 212, + 727, + 787, + 772 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nz _ {l} = x _ {l} + \\operatorname {M H S A} (\\ln (x _ {l})), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 418, + 801, + 785, + 816 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nx _ {l + 1} = z _ {l} + \\operatorname {M L P} (\\ln (z _ {l})), \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 398, + 820, + 785, + 835 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/524b5aa9d19533adeb59ad91e6c63388c164c54a738242c8ea1e3c4964d9ebbe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 223, + 179, + 483, + 297 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/f741fddc09ed19e5387c109fd781d3b0371b69bf92493ebdc486d10a532963b3.jpg", + "image_caption": [ + "(a) Layer merging via reparameterization", + "(b) Spatial-temporal dual-headed attention", + "Fig. 2: Illustration of the proposed linear adaptation and STDHA." + ], + "image_footnote": [], + "bbox": [ + 488, + 147, + 754, + 300 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where LN denotes layer normalization [2] and $x_{l}$ represents the input to the $l$ -th ViT block. We review their specific implementation details. For the sake of simplicity, we ignore the bias and denote $X \\in \\mathbb{R}^{n \\times d}$ as input of MHSA and MLP.", + "bbox": [ + 212, + 366, + 785, + 411 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "MHSA first performs three different linear projections $W_{\\mathrm{attn}}^{Q}, W_{\\mathrm{attn}}^{K}, W_{\\mathrm{attn}}^{V} \\in \\mathbb{R}^{d \\times d}$ on the input $X$ to obtain the query $Q$ and key-value pairs $K, V$ . These are then evenly divided into $h$ heads by channel. Each head independently performs the scaled dot-product attention calculation. 
Finally, the heads are concatenated by channel and then a linear projection $W_{\\mathrm{attn}}^{O} \\in \\mathbb{R}^{d \\times d}$ is performed to obtain the final calculation result:", + "bbox": [ + 212, + 411, + 785, + 501 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nQ, K, V = X W _ {\\mathrm {a t t n}} ^ {Q}, X W _ {\\mathrm {a t t n}} ^ {K}, X W _ {\\mathrm {a t t n}} ^ {V}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 354, + 512, + 785, + 531 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i}, K _ {i}, V _ {i}\\right), \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 370, + 534, + 785, + 550 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {M H S A} (X) = \\operatorname {C o n c a t} \\left(\\operatorname {h e a d} _ {1}, \\dots , \\operatorname {h e a d} _ {h}\\right) W _ {\\mathrm {a t t n}} ^ {O}. \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 334, + 553, + 785, + 571 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "MLP involves two linear projections $W_{\\mathrm{mlp}}^{\\mathrm{up}} \\in \\mathbb{R}^{d \\times d'}$ , $W_{\\mathrm{mlp}}^{\\mathrm{down}} \\in \\mathbb{R}^{d' \\times d}$ , $d' > d$ and one non-linear activation function $\\sigma$ :", + "bbox": [ + 212, + 583, + 785, + 614 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {M L P} (X) = \\sigma \\left(X W _ {\\mathrm {m l p}} ^ {\\mathrm {u p}}\\right) W _ {\\mathrm {m l p}} ^ {\\mathrm {d o w n}}. \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 390, + 626, + 785, + 646 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Zero-Cost temporal modeling", + "text_level": 1, + "bbox": [ + 214, + 678, + 504, + 694 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Applying image models to video tasks often requires the incorporation of additional modules for temporal modeling, which not only introduces additional parameters and computation, but also results in additional training costs. In this work, we address temporal modeling from three key perspectives: (1) Capability of capturing the temporal dynamics. (2) Reducing the difficulty of image-to-video adaptation. (3) Minimizing the introduction of additional computation and parameters compared to the original model. [44] suggests that most heads are redundant given the rest of the model. Inspired by this, we attempt to reassign some heads as temporal heads in the multi-head attention to perform temporal", + "bbox": [ + 212, + 704, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "modeling tasks, while the remaining heads continue to perform spatial modeling tasks as spatial heads, thereby achieving efficient spatial-temporal modeling.", + "bbox": [ + 212, + 146, + 782, + 176 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Spatial-temporal dual-headed attention (STDHA) As shown in Figure 2b, consider an input sequence $X = \\{x_{1}, x_{2}, \\dots, x_{T}\\}$ where $x_{t} \\in \\mathbb{R}^{n \\times d}$ . Let the query and key-value pairs obtained after the linear projection of the $x_{t}$ be $Q^{t}, K^{t}, V^{t} \\in \\mathbb{R}^{n \\times d}$ . 
We divide the $h$ heads of the MHSA into two groups of size $h - k$ and $k$ . One group of heads queries the key-value pairs at the current time $t$ to perform spatial modeling, while the other group of heads queries the key-value pairs at other times $t + \\Delta t_{i}$ to perform temporal modeling. Finally, the information from the two groups of heads is aggregated by a linear projection to perform spatial-temporal modeling:", + "bbox": [ + 212, + 176, + 787, + 313 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\text {S - h e a d} _ {i} = \\text {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t}, V _ {i} ^ {t}\\right), \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 256, + 323, + 787, + 340 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\text {T - h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t + \\Delta t _ {i}}, V _ {i} ^ {t + \\Delta t _ {i}}\\right) (\\Delta t _ {i} \\neq 0), \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 254, + 343, + 785, + 362 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {S T D H A} (X) = \\operatorname {C o n c a t} (\\mathrm {T} - \\text {h e a d} _ {1}, \\dots , \\mathrm {T} - \\text {h e a d} _ {k}, \\mathrm {S} - \\text {h e a d} _ {k + 1} \\dots \\mathrm {S} - \\text {h e a d} _ {h}) W _ {\\text {a t t n}} ^ {O}, \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 223, + 364, + 785, + 383 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $\\Delta t_{i}$ represents the time offset of the key-value pair of the $i$ -th head. We did not directly use temporal attention or temporal convolution for the temporal modeling like previous works [38, 48, 71]. Instead, we design a more efficient spatiotemporal modeling operator by decoupling spatial modeling and temporal modeling to different heads:", + "bbox": [ + 212, + 393, + 782, + 468 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- For the spatial head, it still only needs to complete the spatial modeling task as the original image transformer, which reduces the difficulty of achieving image-to-video adaptation.", + "- For the temporal head, it actually implements the inter-frame attention mechanism with frames at different times. [74] have demonstrated the effectiveness of an inter-frame attention mechanism for modeling motion information, which is crucial for action recognition tasks. In addition, as shown in Table 1c, we can achieve both short-distance and long-distance modeling by controlling the $\\Delta t_{i}$ of the temporal head, which enables us to achieve enhanced temporal modeling capabilities." + ], + "bbox": [ + 225, + 479, + 784, + 631 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Comparison with other zero-cost operators There have been several previous attempts [6, 66, 75] to use image transformers to achieve efficient temporal modeling at zero parameters and zero computation. For example, [6] achieves approximations to full space-time attention by mixing tokens from adjacent frames. [75] performs temporal modeling by using channel shift on thecls tokens of different frames. [66] mixes information from adjacent frames using temporal patch shift and temporal channel shift before MHSA. However, these methods do not take advantage of the inherent characteristics of the transformer structure. 
By decoupling the learning of spatial and temporal information with head relocation, STDHA maintains the purity of key-value pair information within the same head, thereby achieving better spatial-temporal information learning than other zero-cost temporal modules. And STDHA simultaneously captures both short-range and long-range dependencies, rather than being limited to", + "bbox": [ + 212, + 643, + 787, + 840 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 127 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "adjacent frames. As shown in Table 1, these two key distinctions enable our STDHA to achieve superior spatial-temporal modeling.", + "bbox": [ + 212, + 146, + 782, + 176 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3.3 Zero Extra Inference Cost image-to-video adaptation", + "text_level": 1, + "bbox": [ + 214, + 196, + 699, + 212 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Inspired by LoRA [22], we can fine-tune the model using a linear structure and then merge it with the original model during inference. However, to deal with the domain gap between images and videos, previous works [38,48,71] often use nonlinear structures to achieve stronger transfer capabilities. Therefore, we need to further consider how to achieve effective image-to-video transfer using only a linear structure.", + "bbox": [ + 212, + 218, + 782, + 306 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Layer merging via structural reparameterization Let $W_{\\mathrm{old}}$ represent the frozen weights of the original model, and $W_{\\mathrm{new}}$ represent the new trainable weights. Reviewing the structure of LoRA, it uses a low-rank decomposition matrix $W_{\\mathrm{LoRA}}$ parallel to the original weights:", + "bbox": [ + 212, + 309, + 782, + 369 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\nW _ {\\text {n e w}} = W _ {\\text {L o R A}} + W _ {\\text {o l d}} = W _ {\\text {u p}} W _ {\\text {d o w n}} + W _ {\\text {o l d}}. \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 341, + 378, + 785, + 393 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this work, we use a serial linear structure called Linear Adapter to fine-tune the original parameters. As shown in Figure 2a, we use structural reparameterization to perform layer merging after training:", + "bbox": [ + 212, + 402, + 782, + 446 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\nW _ {\\text {n e w}} = W _ {\\text {A d a p t e r}} W _ {\\text {o l d}} = \\left(I + W _ {\\text {u p}} W _ {\\text {d o w n}}\\right) W _ {\\text {o l d}}, \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 334, + 455, + 785, + 472 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $I$ is the identity matrix, $W_{\\mathrm{up}} \\in \\mathbb{R}^{m \\times k}$ , $W_{\\mathrm{down}} \\in \\mathbb{R}^{k \\times n}$ , bottleneck width $k \\ll \\min(m, n)$ . 
As seen in Table 2, compared to parallel structures, serial structures can be more flexibly inserted into the network structure (e.g., for non-square matrices, under the same bottleneck dimension, using LoRA requires a larger number of parameters compared to Linear Adapter), which endows it with better transfer capabilities.", + "bbox": [ + 212, + 479, + 782, + 571 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Full adaptation with densely placed linear adapters By observing the structure of MHSA and MLP, we can see that all their trainable parameters concentrate on the linear projections at both ends of the structure. Therefore, fine-tuning the model essentially updates these linear projections. Previous works [48, 71] often selectively tune part of the parameters (e.g., placing only an adapter before MHSA) instead of tuning all parameters to avoid excessive additional computational and parameter costs, while we can achieve zero-cost full adaptation by tuning all parameters through wrapping MHSA and MLP with linear adapters. Table 2 shows that full adaptation enables us to achieve excellent image-to-video transfer performance with a linear structure, compensating for the performance degradation caused by the removal of nonlinearity.", + "bbox": [ + 212, + 571, + 784, + 738 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 214, + 758, + 375, + 776 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 Experiments setup", + "text_level": 1, + "bbox": [ + 214, + 787, + 416, + 803 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We evaluate our method on five widely-used video recognition benchmarks: two large-scale datasets, namely Kinetics-400 (K400) [8] and Something-Something V2", + "bbox": [ + 212, + 809, + 782, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/77e78c5e5d8e6666a631fe257a0ad7666c3e5b508a0d0a8251683faa375a6786.jpg", + "table_caption": [ + "Table 1: Ablation study on STDHA. Most of the symbols in the table have been declared in the methodology section 3. (a) $R_{c}$ denotes channel change ratio, \"Shift\" refers to temporal channel shift, while \"HR\" denotes head relocation as used by STDHA. (b) We use a multiset to represent the time offsets of different heads (e.g., \"1·2\" means that there are 2 heads with $\\Delta t = 1$ ). When $\\Delta t = 0$ , it represents a spatial head. (c) \"Temporal RF\" refers to the temporal receptive field of a single STDHA." + ], + "table_footnote": [], + "table_body": "
Rc | Method | Top-1
1/6 | [cls] token shift | 61.4
1/6 | Shift QKV | 64.5
1/6 | Shift KV | 64.6
1/6 | HR QKV | 64.8
1/6 | HR KV (STDHA) | 66.0
1/4 | Shift KV | 64.0
1/4 | HR KV (STDHA) | 65.8
", + "bbox": [ + 236, + 244, + 478, + 340 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/b1978b5d64621eb435eb7d57bd203031523c2e613fb4d5f2bfe92c08569c0a57.jpg", + "table_caption": [ + "(a) Compare temporal modeling methods" + ], + "table_footnote": [], + "table_body": "
Backbone | Δt of heads | k | Top-1
ViT-B (h=12) | {1·1/2, -1·1/2, 0·11} | 1 | 64.8
ViT-B (h=12) | {1·1, -1·1, 0·10} | 2 | 66.0
ViT-B (h=12) | {1·2, -1·2, 0·8} | 4 | 65.6
ViT-B (h=12) | {1·3, -1·3, 0·6} | 6 | 65.6
ViT-L (h=16) | {1·1, -1·1, 0·14} | 2 | 67.7
ViT-L (h=16) | {1·2, -1·2, 0·12} | 4 | 68.5
ViT-L (h=16) | {1·3, -1·3, 0·10} | 6 | 68.3
", + "bbox": [ + 496, + 244, + 782, + 342 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/3c6f52f3ba26961259d3eac1a2cbbc7bf24371f589cbe1ee3dec6d738df4d1f2.jpg", + "table_caption": [ + "(b) Effect of the temporal head number" + ], + "table_footnote": [], + "table_body": "
Frames | Δt of heads | Temporal RF | Top-1
8 | {1·1,0·11} | 2 | 64.7
8 | {1·1,-1·1,0·10} | 3 | 66.0
8 | {1·1,-1·1,2·1,0·9} | 4 | 65.5
8 | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 65.7
16 | {1·1,-1·1,0·10} | 3 | 67.2
16 | {1·1,-1·1,2·1,0·9} | 4 | 67.3
16 | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 67.8
16 | {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 67.6
16 | {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 67.3
32 | {1·1,-1·1,0·10} | 3 | 67.3
32 | {1·1,-1·1,2·1,0·9} | 4 | 67.8
32 | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 68.5
32 | {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 68.6
32 | {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 68.4
32 | {1·1,-1·1,2·1,-2·1,3·1,-3·1,4·1,0·5} | 8 | 68.2
", + "bbox": [ + 277, + 367, + 730, + 539 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "(c) Effect of the temporal receptive field at different input lengths.", + "bbox": [ + 305, + 541, + 692, + 553 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "(SSv2) [16], in addition to three smaller-scale datasets, UCF101 [54], HMDB51 [25] and Diving48 [35]. We also evaluate our method on action detection dataset AVA [17]. This diverse dataset selection allows for a comprehensive evaluation of our model across various scales and domains. The specific model configuration and training strategy can be found in the supplementary. For most main experiments, we use ViT-B and ViT-L pre-trained by CLIP [51] as our backbone models.", + "bbox": [ + 212, + 594, + 787, + 686 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2 Ablation study", + "text_level": 1, + "bbox": [ + 214, + 708, + 385, + 724 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To validate the effectiveness of our method on image-to-video transfer and temporal modeling, we first conduct ablation experiments on the SSv2 dataset. All ablation experiments were performed using ViT-B/16 with 8 input frames unless specified.", + "bbox": [ + 212, + 733, + 787, + 792 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Effectiveness of STDHA Table 1a compares STDHA with other zero-cost temporal modeling methods. The [cls] token shift is implemented according to the original paper [75], with [cls] token shift performed before MHSA and MLP.", + "bbox": [ + 212, + 794, + 787, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/ed9ffc6cd3d3537e1a88bb302a8738368f80d3b014080819bd6563bbf0d5de0d.jpg", + "table_caption": [ + "Table 2: Comparison of adaption strategies. \"Width\" refers to the bottleneck width of LoRA/Adapter. \"Tunable Params\" refers to extra trainable parameters besides the parameters of the ViT backbone and linear classifier. \" $\\checkmark$ \" and \" $\\times$ \" indicate whether the corresponding weights have undergone fine-tuning, and \" $\\checkmark$ \" indicates that $W_{\\mathrm{attn}}^{Q}$ , $W_{\\mathrm{attn}}^{K}$ and $W_{\\mathrm{attn}}^{V}$ share the same adapter. \"Latency\" refers to inference latency with 3 samples. All results are obtained using the same V100-32G with PyTorch-built mixed precision." + ], + "table_footnote": [], + "table_body": "
Method | Weights of ViT block | Tunable Params (M) | Bottleneck Width | Latency (ms) | SSv2 Top-1
$W_{\mathrm{attn}}^{Q}$ | $W_{\mathrm{attn}}^{K}$ | $W_{\mathrm{attn}}^{V}$ | $W_{\mathrm{attn}}^{O}$ | $W_{\mathrm{mlp}}^{\mathrm{up}}$ | $W_{\mathrm{mlp}}^{\mathrm{down}}$
Full Fine-tuning86-28.963.2
Linear ProbeXXXXXX0-28.920.0
Only tuning temporal headXX4.6-28.959.6
ST-Adapter [48]1419241.066.2
XX1438438.865.8
LoRA [22]XXXX719264.2
XX1419265.0
XX2519264.3
XX1712828.965.6
3219265.0
2112865.5
Adapter w/ GELU79637.365.6
XX719234.964.6
X1019236.366.1
1419238.466.1
Linear Adapter (Ours)79665.0
XX719264.4
X1019228.965.2
1419266.0
2019266.3
1412866.2
", + "bbox": [ + 220, + 243, + 782, + 522 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The temporal channel shift operation refers to TPS [66], which shifts a portion of the channels for each head. It can be seen that STDHA significantly outperforms other methods at the same channel change ratio, demonstrating the importance of preserving the purity of information within each head.", + "bbox": [ + 212, + 575, + 787, + 636 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Effect of the number of temporal heads and temporal receptive field We examined the influence of the number of temporal heads and the temporal receptive field in ViT-B and ViT-L. Our findings, detailed in Tables 1b and 1c, suggest that the optimal proportion of temporal heads in ViT lies between $1/6$ and $1/4$ . For the temporal receptive field, our results indicate that for 8-frame inputs, a field of 3 is sufficient, while for longer inputs (16/32 frames), performance improves with an increase in the field from 3, saturating at around 5 or 6. Hence, we employ different STDHA configurations based on input length.", + "bbox": [ + 212, + 647, + 787, + 768 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Comparison of adaptation strategies In Table 2, we compare the image-to-video transfer ability of our method with a diverse range of adaptation methods. For a fair comparison, we all use STDHA with the same setting to provide temporal modeling capabilities. From the results, we can observe that:", + "bbox": [ + 212, + 779, + 787, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/8be562ba9123b4ca638fee97108d89059fe13ea3143983144fd9a9d63d7dd1c7.jpg", + "table_caption": [ + "Table 3: Results on Kinetics-400 validation set. Views = #frames × #spatial crops × #temporal clips. \"GFLOPs\" means $10^{9}$ FLOPs, \"M\" means $10^{6}$ . \"Extra GLOPs\" refers to the extra computation added to the original ViT under the same number of views. \"New Params\" refers to additional parameters during inference besides the parameters of the original ViT backbone and linear classifier." + ], + "table_footnote": [], + "table_body": "
MethodsPretrainViewsGFLOPsExtra GFLOPsParam (M)New Param(M)Top-1Top-5
Methods with full fine-tuning
UniFormer-B [28]IN1K32×3×43108-50-83.095.4
TimeSformer-L [4]IN21K96×3×17140-121-80.794.7
VideoSwin-L [41]IN21K32×3×47248-197-83.195.9
MViTv2-L(↑312) [34]IN21K40×5×342420-218-86.197.0
ViViT-L/16x2 FE [1]JFT32×3×111940-311-83.594.3
MTV-L [70]JFT32×3×418050-876-84.396.3
ViT-B/16 [48]CLIP8×1×3422086081.095.5
ActionCLIP-B/16 [62]CLIP32×3×1016893131425683.897.1
X-CLIP ViT-L/14 [45]CLIP8×3×4789610742011687.197.6
Text4Vis ViT-L/14 [65]CLIP32×3×419944-3474387.197.4
Methods with PETL
VideoPrompt ViT-B/16 [24]CLIP16×5×1----76.993.5
ST-Adapter ViT-B/16 [48]IN21K8×1×34553393776.6-
ST-Adapter ViT-L/14 [48]CLIP32×1×382483221987.297.6
EVL ViT-B/16 [38]IN21K8×1×3454321152975.4-
EVL ViT-L/14 [38]CLIP8×1×32022763625886.3-
AIM ViT-B-14 [71]IN21K8×1×36242021001478.8-
AIM ViT-L/14 [71]CLIP32×1×31120834253413887.597.7
Zeroi2V ViT-B/16IN21K8×1×3422086078.6-
Zeroi2V ViT-B/16CLIP8×1×3422086083.095.8
Zeroi2V ViT-B/16CLIP16×1×3844086083.496.2
Zeroi2V ViT-B/16CLIP32×1×31688086083.796.4
Zeroi2V ViT-L/14CLIP8×1×319460304086.397.4
Zeroi2V ViT-L/14CLIP16×1×338920304086.897.6
Zeroi2V ViT-L/14CLIP32×1×377830304087.297.6
", + "bbox": [ + 246, + 229, + 754, + 503 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Even with minimal parameters being fine-tuned, our Linear Adapter significantly outperforms full fine-tuning (66.3 vs 63.2). Despite updating the fewest parameters, the linear probe performs poorly in image-to-video transfer.", + "- Tuning only the temporal head achieves about $95\\%$ of the full fine-tuning performance, suggesting that extensive fine-tuning of the spatial head may not be necessary to attain satisfactory transfer performance due to the decoupling of spatial and temporal modeling reduces the difficulty of adaptation.", + "- Our Full Adaptation strategy is not only effective for linear adapters, but also for non-linear adapters such as the ST-Adapter and GELU Adapter. It not only enhances their adaptation performance, but also eliminates the performance gap between linear and non-linear structures.", + "- Due to the inflexibility of the parallel structure, for non-square matrices like $W_{\\mathrm{mlp}}$ , LoRA requires more parameters under the same bottleneck width. It needs to decrease the bottleneck width of the low-rank matrix to align it with the number of parameters of the linear adapter. However, this reduction in bottleneck width can limit its adaptation ability, ultimately leading to results that are significantly worse than those of the Linear Adapter." + ], + "bbox": [ + 223, + 542, + 785, + 816 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/b38f6260f91c7334d45b491f32df097bef01ab6da72cfa0b0d6fd63ee3101062.jpg", + "table_caption": [ + "Table 4: Results on Something-Something v2 validation set. $\\dagger$ indicates that the model is pre-trained on both IN21K (except for Uniformer [28] which uses IN1K) and K400/K600. Other notations are the same as Table 3." + ], + "table_footnote": [], + "table_body": "
MethodsPretrainViewsGFLOPsExtra GFLOPsParam (M)New Param(M)Top-1Top-5
Methods with full fine-tuning
TimeFormer-L [4]IN21K64×3×17140-121-62.4-
ViViT-L [1]K400†16×3×411892-311-65.489.8
MTV-B(↑320) [70]K400†32×3×411160-310-68.590.4
VideoSwin-B [41]K400†32×3×1963-89-69.692.7
MViTv2-L(↑312) [34]K400†40×3×18484-213-73.394.1
UniFormer-B [28]K600†32×3×1777-50-71.292.8
ViT-L/14 [12]CLIP8×3×119460304048.777.5
ILA ViT-L/14 [58]CLIP8×3×410884310052922567.890.5
Methods with PETL
ST-Adapter ViT-B/16 [48]IN21K8×3×14553393762.8-
ST-Adapter ViT-B/16 [48]CLIP32×3×119552671001469.592.6
EVL ViT-L/14 [38]CLIP32×3×19641185847917566.7-
AIM ViT-B/16IN21K8×3×16242021001462.0-
AIM ViT-L/14 [71] | CLIP | 32×3×1 | 11508 | 3725 | 354 | 50 | 70.6 | 92.7
ZeroI2V ViT-B/16 | IN21K | 8×3×1 | 422 | 0 | 86 | 0 | 65.3 | -
ZeroI2V ViT-B/16 | CLIP | 8×3×1 | 422 | 0 | 86 | 0 | 67.7 | 90.8
ZeroI2V ViT-B/16 | CLIP | 16×3×1 | 844 | 0 | 86 | 0 | 69.4 | 91.7
ZeroI2V ViT-B/16 | CLIP | 32×3×1 | 1688 | 0 | 86 | 0 | 70.1 | 92.4
ZeroI2V ViT-L/14 | CLIP | 8×3×1 | 1946 | 0 | 304 | 0 | 70.1 | 91.8
ZeroI2V ViT-L/14 | CLIP | 16×3×1 | 3892 | 0 | 304 | 0 | 71.4 | 93.0
ZeroI2V ViT-L/14 | CLIP | 32×3×1 | 7783 | 0 | 304 | 0 | 72.2 | 93.0
", + "bbox": [ + 245, + 203, + 754, + 467 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.3 Fully-supervised Experiments", + "text_level": 1, + "bbox": [ + 215, + 496, + 504, + 510 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Results on K400 As shown in Table 3, our method has significant advantages over traditional full fine-tuning methods, achieving better performance with much lower computational cost. For example, our ZeroI2V ViT-L/14 with an input of 8 frames outperforms MViTv2 [34] (86.3 vs 86.1), while requiring more than 20 times fewer GFLOPs (1946 vs 42420). Compared to multi-modal methods such as ActionCLIP [62] and X-CLIP [45], which require an additional text branch and fine-tune the entire model end-to-end, our ZeroI2V can achieve comparable performance using only the visual encoder. Moreover, although our proposed ZeroI2V doesn't increase computational or parameter costs during inference compared with the previous PETL method, it can still achieve similar or even better performance. For example, on ViT-B/16, ZeroI2V with an input of 8 frames can surpass ST-Adapter [48] with an input of 32 frames (83.0 vs 82.7) with much lower GFLOPs (422 vs 1821). On ViT-L/14, ZeroI2V achieves the same performance as EVL [38], which requires an additional 58M parameters. And ZeroI2V achieves comparable performance to AIM [71] (87.2 vs 87.5) with a nearly $30\\%$ reduction in GFLOPs (7783 vs 11208).", + "bbox": [ + 212, + 522, + 787, + 763 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Results on SSv2 As shown in Table 4, thanks to the effectiveness of STDHA in temporal modeling, our method outperforms most full fine-tuning methods, even though many of them have been pre-trained on the Kinetics dataset. Our ZeroI2V has a significant improvement compared to directly full fine-tuning ViT-L/16 pre-trained with CLIP (70.1 vs 48.7) with the same number of parameters", + "bbox": [ + 212, + 763, + 787, + 839 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/c0115593e5e7cd839f39d1bfe4e5150a96be4f94cf6eb2925dfc657b4b6cbdbb.jpg", + "table_caption": [ + "Table 5: Comparing the state-of-the-art video recognition methods on UCF101, HMDB51 and Diving48. For UCF101 and HMDB51, we test our method and report the 3-split mean Top-1 accuracy for both datasets following ST-Adapter [48]. And for Diving48, we test our method with 1 temporal clip following AIM [71]." + ], + "table_footnote": [], + "table_body": "
Method | Pretrain | UCF101 | HMDB51 | Diving48
Methods with full fine-tuning
I3D [8] | ImageNet+K400 | 95.6 | 74.8 | -
S3D [67] | ImageNet+K400 | 96.8 | 75.9 | -
SlowOnly-8x8-R101 [15] | Kinetics+OmniSource | 97.3 | 79.0 | -
TimeSformer-L [4] | IN21K | - | - | 81.0
VideoSwin-B [41] | IN21K | - | - | 81.9
Methods with PETL
VideoPrompt [24] | CLIP | 93.6 | 66.4 | -
AIM ViT-B/16 [71] | CLIP | - | - | 88.9
AIM ViT-L/14 [71] | CLIP | - | - | 90.6
ST-Adapter ViT-B/16 [48] | CLIP+K400 | 96.4 | 77.7 | -
ST-Adapter ViT-L/14 [48] | CLIP+K400 | 98.1 | 81.7 | -
ZeroI2V ViT-B/16 | CLIP | 95.6 | 73.7 | 89.7
ZeroI2V ViT-B/16 | CLIP+K400 | 97.7 | 78.5 | -
ZeroI2V ViT-L/14 | CLIP | 97.8 | 79.9 | 91.4
ZeroI2V ViT-L/14 | CLIP+K400 | 98.6 | 83.4 | -
", + "bbox": [ + 259, + 215, + 738, + 417 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/4f1624dc2080bbb2e5a7e54a7ae853fb940326ecffe9dc182889ae29ef409158.jpg", + "table_caption": [ + "Table 6: Comparing the SoTA action detection methods on AVA 2.2." + ], + "table_footnote": [], + "table_body": "
Method | Pretrain | Frozen Backbone | Frames | mAP
SlowFast-R101 [15] | K400 | | 8 | 23.8
MViTv2-B [34] | K400 | | 32 | 28.1
VideoMAE-B [56] | K400 | | 16 | 31.8
VideoMAE-B [56] | K400 wo/ labels | | 16 | 26.7
CLIP ViT-B/16 | CLIP | | 8 | 18.3
ZeroI2V ViT-B/16 | CLIP | | 8 | 26.4
", + "bbox": [ + 259, + 460, + 738, + 551 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "and computation. Compared to other PETL methods, ZeroI2V outperforms ST-Adapter [48] on ViT-B/16 (70.1 vs 69.5) with lower GFLOPs (1688 vs 1955). Additionally, ZeroI2V significantly surpasses both AVL [38] and AIM [71] (71.4 vs 66.7, 70.6) on ViT-L/14 with much lower GFLOPs (3892 vs 9641, 11508) and new parameters (0M vs 175M, 50M).", + "bbox": [ + 212, + 582, + 787, + 657 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Results on smaller datasets As shown in Table 5, on three relatively small datasets, our method achieves state-of-the-art performance on UCF101, HMDB51, and Diving48. This demonstrates a clear performance advantage over both full-finetuning methods and PETL methods previously.", + "bbox": [ + 212, + 657, + 787, + 718 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Results on action detection In addition to the task of action recognition, to understand the capability of our method in fine-grained spatial understanding, we also evaluate our method on action detection dataset AVA [17]. Following the setting of VideoMAE [56], we evaluate the top 60 common classes using the mean Average Precision (mAP) as the metric under an IoU threshold of 0.5. As shown in Table 6, compared to using the original image CLIP features, our ZeroI2V achieved a significant performance improvement (26.4 vs 18.3) with the same number of parameters and computation. It's noteworthy that our method was not", + "bbox": [ + 212, + 719, + 787, + 839 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/cb6b9c7fdfa9f43bbfef00e6968ea20beaeba50e93c0cfcb4e644d923731133e.jpg", + "table_caption": [ + "Table 7: Comparing the SoTA video recognition methods on the VidTAB [32]." + ], + "table_footnote": [], + "table_body": "
# Pretrain Data | Avg | Action | Science | Safety | Quality | Emotion
DS | LV | MS | AB | HC | FF | QA | EA
CLIP ViT-L/14 [51] | CLIP | 42.8 | 31.2 | 38.0 | 32.3 | 36.3 | 50.3 | 58.5 | 67.7 | 28.1
ViCLIP ViT-L/14 [64] | CLIP+InternVid200M | 42.7 | 36.7 | 43.9 | 30.2 | 36.8 | 46.9 | 54.8 | 65.4 | 27.2
ST-Adapter ViT-L/14 [48] | CLIP | 46.9 | 43.0 | 45.0 | 31.2 | 39.4 | 49.4 | 64.9 | 72.3 | 29.9
ZeroI2V ViT-L/14 | CLIP | 46.5 | 41.3 | 46.8 | 31.2 | 39.3 | 47.2 | 64.6 | 70.6 | 30.6
", + "bbox": [ + 246, + 186, + 754, + 257 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/258b6f3c75f632e0c55344aef2e8ccf78c606de2e9424c97386e8a808bb2aec6.jpg", + "table_caption": [ + "Table 8: Inference latency and throughput. All results are obtained using the same V100-32G with PyTorch-built mixed precision, using a batch size of 1 to measure latency and the optimal possible batch size to measure throughput before out of memory." + ], + "table_footnote": [], + "table_body": "
Model | Views | GFLOPs | Latency (ms) | Throughput (V/s) | K400 (Top-1) | SSv2 (Top-1)
UniFormer-B [28] | 32×4 | 1036 | 245.38 | 4.24 | 82.9 | -
EVL ViT-B/16 [38] | 8×3 | 454 | 53.87 | 24.04 | 82.9 | 61.0
ViT-B/16 [12] | 8×3 | 422 | 28.72 | 40.08 | 81.0 | 44.0
ZeroI2V ViT-B/16 | 8×3 | 422 | 28.89 | 40.08 | 83.0 | 67.7
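For context on how numbers like the latency and throughput above are typically produced (the Table 8 caption states: a V100-32G, PyTorch mixed precision, batch size 1 for latency and the largest batch that still fits for throughput), here is a minimal measurement sketch. It is our own illustration, not the authors' benchmarking script; the model call signature and the (batch, channels, frames, height, width) input layout are assumptions.

```python
# Hypothetical latency/throughput measurement sketch with mixed precision and CUDA sync.
import time
import torch

@torch.no_grad()
def measure(model, views=3, batch_size=1, frames=8, size=224, warmup=10, iters=50):
    device = torch.device("cuda")
    model = model.eval().to(device)
    # One "view" is assumed to be a clip of `frames` RGB frames; Table 8 uses 8x3 views.
    x = torch.randn(batch_size * views, 3, frames, size, size, device=device)
    with torch.autocast("cuda", dtype=torch.float16):
        for _ in range(warmup):
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    elapsed = (time.time() - start) / iters
    latency_ms = elapsed * 1000.0          # per forward pass (batch_size = 1 for latency)
    throughput = batch_size / elapsed      # videos per second (largest fitting batch)
    return latency_ms, throughput
```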
", + "bbox": [ + 230, + 340, + 767, + 407 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "pre-trained on action recognition datasets such as Kinetics. Instead, we directly applied image-to-video transfer on the AVA dataset. Remarkably, our method still managed to achieve performance on par with full-finetuning methods and self-supervised methods that underwent pre-training using the Kinetics dataset, even when using only 8 frames as input. In summary, our ZeroI2V demonstrates outstanding potential in video tasks beyond recognition.", + "bbox": [ + 212, + 435, + 787, + 526 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4.4 Few-shot Experiments", + "text_level": 1, + "bbox": [ + 215, + 550, + 444, + 566 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "To demonstrate the adaptation capability of our method in few-shot scenarios, we conduct experiments on the Video Task Adaptation Benchmark (VidTAB). As show in Table 7 The results show that our method can effectively enhance the adaptation of the image model to video tasks using only a few samples. Compared to ST-Adapter [48], our approach achieves comparable results while enjoying the advantage of parameter and inference efficiency.", + "bbox": [ + 212, + 577, + 787, + 667 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4.5 Efficiency analysis", + "text_level": 1, + "bbox": [ + 215, + 691, + 413, + 708 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Comparison of inference efficiency We compared the inference efficiency of our method with other methods on the same hardware device. As shown in Table 8, under comparable accuracy, the throughput of our method is 10 times that of Uniformer [28], Compared to the original ViT-B, our method introduces negligible additional latency during inference while achieving superior performance. In comparison with AVL [38], it can also be seen that the impact of the additional computational module on the actual runtime latency (28.89 ms vs 53.87 ms) is greater than that reflected by GFLOPs (422 vs 454).", + "bbox": [ + 212, + 719, + 787, + 840 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/91509a209cb94e71abfa21d903f3999ff64988b1abcee7a3525056e5d6ef9794.jpg", + "table_caption": [ + "Table 9: Comparison of training cost. Our results are obtained using the same V100-32G with PyTorch-built mixed precision, following AVL [38]. \"†\" indicates that the epoch is estimated based on the batch size and training steps of the original paper. \"Memory\" refers to the GPU memory usage when the batch size is 8." + ], + "table_footnote": [], + "table_body": "
Model (Frames) | Dataset | Training Epochs | Training GPU Hours | Tunable Param (M) | Memory (G) | Top-1
UniFormer-B [28] (32) | K400 | 110 | 5000 × V100 | 50 | - | 82.9
ActionCLIP ViT-B/16 [62] (16) | K400 | 50 | 480 × RTX3090 | 142 | - | 82.6
EVL ViT-B/16 [38] (8) | K400 | 53† | 60 × V100 | 29 | 2.2 | 82.9
SSv2 | 46† | 75 × V100 | 98 | 5.6 | 61.0
ST-Adapter ViT-B/16 [48] (8) | K400 | 11† | 23 × V100 | 7 | 6.9 | 82.0
SSv2 | 38† | 60 × V100 | 14 | 7.6 | 67.1
AIM ViT-B/16 [71] (8) | K400 | 30 | 120 × V100 | 11 | 8.7 | 83.9
SSv2 | 50 | 150 × V100 | 14 | 9.0 | 66.4
ZeroI2V ViT-B/16 (8) | K400 | 40 | 100 × V100 | 14 | 7.6 | 83.0
SSv2 | 50 | 90 × V100 | 14 | 7.6 | 67.3
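The training-versus-inference trade-off discussed around this table rests on the fact that the densely placed linear adapters can be folded into the frozen weights after training (Eq. (11): W_new = (I + W_up W_down) W_old), so the deployed model is a plain ViT with zero extra parameters or FLOPs. Below is a minimal sketch of that merge for a single linear layer; the class and function names are ours, not the released code, and placing the adapter before the frozen projection is purely illustrative.

```python
# Minimal sketch (not the released ZeroI2V code) of merging a serial linear adapter into a
# frozen linear layer via structural reparameterization, in the spirit of Eq. (11):
#   frozen(adapter(x)) = x (I + W_up W_down) W_old  ->  a single nn.Linear at inference.
import torch
import torch.nn as nn

class LinearAdapter(nn.Module):
    """Serial linear adapter with bottleneck width k << d; initialized as the identity."""
    def __init__(self, d, k):
        super().__init__()
        self.down = nn.Linear(d, k, bias=False)
        self.up = nn.Linear(k, d, bias=False)
        nn.init.zeros_(self.up.weight)  # residual branch starts at zero -> identity mapping

    def forward(self, x):
        return x + self.up(self.down(x))

@torch.no_grad()
def merge_adapter(adapter: LinearAdapter, frozen: nn.Linear) -> nn.Linear:
    d = frozen.in_features
    eye = torch.eye(d, dtype=frozen.weight.dtype, device=frozen.weight.device)
    # nn.Linear computes y = x @ W.T, so the adapter acts as x @ (I + down.T @ up.T);
    # composing with the frozen layer gives merged.weight = W_old @ (I + up @ down).
    merged = nn.Linear(d, frozen.out_features, bias=frozen.bias is not None)
    merged.weight.copy_(frozen.weight @ (eye + adapter.up.weight @ adapter.down.weight))
    if frozen.bias is not None:
        merged.bias.copy_(frozen.bias)
    return merged

# Sanity check: the merged layer reproduces frozen(adapter(x)) exactly.
frozen, adapter = nn.Linear(768, 768), LinearAdapter(768, 64)
nn.init.normal_(adapter.up.weight, std=0.02)  # pretend the adapter was trained
x = torch.randn(4, 768)
assert torch.allclose(frozen(adapter(x)), merge_adapter(adapter, frozen)(x), atol=1e-5)
```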
", + "bbox": [ + 230, + 215, + 767, + 356 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Comparison of training cost We compared the training cost of our method with previous methods in Table 9. It can be seen that compared to previous full fine-tuning methods such as Uniformer [28] and ActionCLIP [62], our method significantly reduces training cost. Compared to the previous PETL method, our method does not have a significant advantage in training efficiency due to the use of dense adapters. AVL [38], which does not need to insert adapters into the frozen backbone, avoids some of the cost of backpropagation and therefore has lower memory usage. ST-Adapter [48], due to its fewer trainable parameters, has a faster convergence speed, but its memory usage is close to our method. Nonetheless, in contrast to AIM [71] that imposes an additional computational burden for temporal modeling, our STDHA method, which does not introduce extra learnable parameters, ensures that ZeroI2V maintains superior training efficiency. We believe that it is worthwhile and acceptable to exchange some training costs for a reduction in inference costs.", + "bbox": [ + 212, + 381, + 787, + 593 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusions", + "text_level": 1, + "bbox": [ + 215, + 614, + 370, + 630 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In this work, we present a new approach for parameter-efficient image-to-video transfer learning, called ZeroI2V. By fully leveraging the powerful representational capabilities of pre-trained image models, our approach enables image transformers to perform video tasks without introducing extra costs during inferences. Our proposed STDHA achieves efficient spatial-temporal modeling at zero extra computation and parameters. In addition, through structural reparameterization and full adaptation strategies, we successfully use a linear structure to achieve zero extra inference cost image-to-video adaptation for the first time. ZeroI2V shows strong performance compared to previous full fine-tuning and PETL methods on widely used video understanding benchmarks while maintaining parameter and inference efficiency. Due to the simplicity and versatility of our method, we believe it can be easily extended to other video tasks and even multi-modal understanding tasks. We will further investigate this direction in future work.", + "bbox": [ + 212, + 643, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 127 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgements. This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the National Natural Science Foundation of China (No. 62076119, No. 61921006), the Fundamental Research Funds for the Central Universities (No. 020214380119), and the Collaborative Innovation Center of Novel Software Technology and Industrialization.", + "bbox": [ + 212, + 146, + 787, + 220 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 243, + 321, + 258 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lucic, M., Schmid, C.: Vivit: A video vision transformer. In: Int. Conf. Comput. Vis. pp. 6816-6826 (2021)", + "2. 
Ba, L.J., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)", + "3. Bao, H., Dong, L., Piao, S., Wei, F.: Beit: BERT pre-training of image transformers. In: Int. Conf. Learn. Represent. (2022)", + "4. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? In: Int. Conf. Mach. Learn. vol. 139, pp. 813-824 (2021)", + "5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Adv. Neural Inform. Process. Syst. vol. 33, pp. 1877-1901 (2020)", + "6. Bulat, A., Pérez-Rúa, J., Sudhakaran, S., Martínez, B., Tzimiropoulos, G.: Spacetime mixing attention for video transformer. In: Adv. Neural Inform. Process. Syst. pp. 19594-19607 (2021)", + "7. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: Int. Conf. Comput. Vis. pp. 9630-9640 (2021)", + "8. Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the kinetics dataset. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4724-4733 (2017)", + "9. Chen, S., Ge, C., Tong, Z., Wang, J., Song, Y., Wang, J., Luo, P.: Adaptformer: Adapting vision transformers for scalable visual recognition. In: Adv. Neural Inform. Process. Syst. (2022)", + "0. Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., Jitsev, J.: Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143 (2022)", + "1. Devlin, J., Chang, M., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT. pp. 4171-4186 (2019)", + "2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Int. Conf. Learn. Represent. (2021)", + "3. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. In: Int. Conf. Comput. Vis. pp. 6804-6815 (2021)", + "4. Feichtenhofer, C.: X3D: expanding architectures for efficient video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 200-210 (2020)", + "5. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Int. Conf. Comput. Vis. pp. 6201-6210 (2019)", + "6. Goyal, R., Kahou, S.E., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fründ, I., Yianilos, P., Mueller-Freitag, M., Hoppe, F., Thurau, C., Bax, I., Memisevic, R.: The \"something something\" video database for learning" + ], + "bbox": [ + 225, + 273, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and evaluating visual common sense. In: Int. Conf. Comput. Vis. pp. 5843-5851. IEEE Computer Society (2017)", + "17. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: IEEE Conf. Comput. 
Vis. Pattern Recog. pp. 6047-6056 (2018)", + "18. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.B.: Masked autoencoders are scalable vision learners. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 15979-15988 (2022)", + "19. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 9726-9735 (2020)", + "20. He, X., Li, C., Zhang, P., Yang, J., Wang, X.E.: Parameter-efficient model adaptation for vision transformers. arXiv preprint arXiv:2203.16329 (2022)", + "21. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., de Laroussilhe, Q., Gesmundo, A., Attariyan, M., Gelly, S.: Parameter-efficient transfer learning for NLP. In: Int. Conf. Mach. Learn. vol. 97, pp. 2790-2799 (2019)", + "22. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: Lora: Low-rank adaptation of large language models. In: Int. Conf. Learn. Represent. (2022)", + "23. Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: Eur. Conf. Comput. Vis. pp. 709-727 (2022)", + "24. Ju, C., Han, T., Zheng, K., Zhang, Y., Xie, W.: Prompting visual-language models for efficient video understanding. In: Eur. Conf. Comput. Vis. pp. 105-124. Springer (2022)", + "25. Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: a large video database for human motion recognition. In: Int. Conf. Comput. Vis. pp. 2556-2563. IEEE (2011)", + "26. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045-3059 (2021)", + "27. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Int. Conf. Mach. Learn. vol. 162, pp. 12888-12900 (2022)", + "28. Li, K., Wang, Y., Gao, P., Song, G., Liu, Y., Li, H., Qiao, Y.: Uniformer: Unified transformer for efficient spatial-temporal representation learning. In: Int. Conf. Learn. Represent. (2022)", + "29. Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Wang, L., Qiao, Y.: Uniformerv2: Unlocking the potential of image vits for video understanding. In: Int. Conf. Comput. Vis. pp. 1632-1643 (2023)", + "30. Li, T., Wang, L.: Learning spatiotemporal features via video and text pair discrimination. arXiv preprint arXiv:2001.05691 (2020)", + "31. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582-4597 (2021)", + "32. Li, X., Huang, Z., Wang, J., Li, K., Wang, L.: Videoeval: Comprehensive benchmark suite for low-cost evaluation of video foundation model. arXiv preprint arXiv:2407.06491 (2024)" + ], + "bbox": [ + 215, + 147, + 787, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "33. Li, Y., Ji, B., Shi, X., Zhang, J., Kang, B., Wang, L.: TEA: temporal excitation and aggregation for action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 906-915 (2020)", + "34. 
Li, Y., Wu, C., Fan, H., Mangalam, K., Xiong, B., Malik, J., Feichtenhofer, C.: Mvitv2: Improved multiscale vision transformers for classification and detection. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4794-4804 (2022)", + "35. Li, Y., Li, Y., Vasconcelos, N.: Resound: Towards action recognition without representation bias. In: Eur. Conf. Comput. Vis. pp. 513-528 (2018)", + "36. Lian, D., Zhou, D., Feng, J., Wang, X.: Scaling & shifting your features: A new baseline for efficient model tuning. In: Adv. Neural Inform. Process. Syst. (2022)", + "37. Lin, J., Gan, C., Wang, K., Han, S.: TSM: temporal shift module for efficient and scalable video understanding on edge devices. IEEE Trans. Pattern Anal. Mach. Intell. 44(5), 2760-2774 (2022)", + "38. Lin, Z., Geng, S., Zhang, R., Gao, P., de Melo, G., Wang, X., Dai, J., Qiao, Y., Li, H.: Frozen CLIP models are efficient video learners. In: Eur. Conf. Comput. Vis. vol. 13695, pp. 388-404 (2022)", + "39. Liu, M., Wang, Z., Ji, S.: Non-local graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(12), 10270-10276 (2022)", + "40. Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin transformer V2: scaling up capacity and resolution. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 11999-12009 (2022)", + "41. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video swim transformer. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3192-3201 (2022)", + "42. Liu, Z., Wang, L., Wu, W., Qian, C., Lu, T.: TAM: temporal adaptive module for video recognition. In: Int. Conf. Comput. Vis. pp. 13688-13698 (2021)", + "43. Lu, C., Jin, X., Huang, Z., Hou, Q., Cheng, M., Feng, J.: CMAE-V: contrastive masked autoencoders for video action recognition. arXiv preprint arXiv:2301.06018 (2023)", + "44. Michel, P., Levy, O., Neubig, G.: Are sixteen heads really better than one? In: Adv. Neural Inform. Process. Syst. pp. 14014-14024 (2019)", + "45. Ni, B., Peng, H., Chen, M., Zhang, S., Meng, G., Fu, J., Xiang, S., Ling, H.: Expanding language-image pretrained models for general video recognition. In: Eur. Conf. Comput. Vis. vol. 13664, pp. 1-18 (2022)", + "46. Nie, X., Ni, B., Chang, J., Meng, G., Huo, C., Zhang, Z., Xiang, S., Tian, Q., Pan, C.: Pro-tuning: Unified prompt tuning for vision tasks. arXiv preprint arXiv:2207.14381 (2022)", + "47. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., Assran, M., Ballas, N., Galuba, W., Howes, R., Huang, P., Li, S., Misra, I., Rabbat, M.G., Sharma, V., Synnaeve, G., Xu, H., Jégou, H., Mairal, J., Labatut, P., Joulin, A., Bojanowski, P.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023)", + "48. Pan, J., Lin, Z., Zhu, X., Shao, J., Li, H.: St-adapter: Parameter-efficient image-to-video transfer learning. In: Adv. Neural Inform. Process. Syst. (2022)", + "49. Pfeiffer, J., Kamath, A., Rückle, A., Cho, K., Gurevych, I.: Adapterfusion: Nondestructive task composition for transfer learning. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 487-503 (2021)", + "50. Pfeiffer, J., Rückle, A., Poth, C., Kamath, A., Vulic, I., Ruder, S., Cho, K., Gurevych, I.: Adapterhub: A framework for adapting transformers. 
In: Proceedings of the" + ], + "bbox": [ + 212, + 146, + 787, + 839 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 46-54 (2020)", + "51. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Int. Conf. Mach. Learn. vol. 139, pp. 8748-8763 (2021)", + "52. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI blog (2018)", + "53. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)", + "54. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)", + "55. Tan, J., Zhao, X., Shi, X., Kang, B., Wang, L.: Pointtad: Multi-label temporal action detection with learnable query points. NIPS 35, 15268-15280 (2022)", + "56. Tong, Z., Song, Y., Wang, J., Wang, L.: Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In: Adv. Neural Inform. Process. Syst. (2022)", + "57. Tschannen, M., Mustafa, B., Houlsby, N.: Clippo: Image-and-language understanding from pixels only. arXiv preprint arXiv:2212.08045 (2022)", + "58. Tu, S., Dai, Q., Wu, Z., Cheng, Z., Hu, H., Jiang, Y.: Implicit temporal modeling with learnable alignment for video recognition. In: Int. Conf. Comput. Vis. (2023)", + "59. Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., Qiao, Y.: Videomae V2: scaling video masked autoencoders with dual masking. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)", + "60. Wang, L., Tong, Z., Ji, B., Wu, G.: TDN: temporal difference networks for efficient action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1895-1904 (2021)", + "61. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Gool, L.V.: Temporal segment networks: Towards good practices for deep action recognition. In: Eur. Conf. Comput. Vis. vol. 9912, pp. 20-36 (2016)", + "62. Wang, M., Xing, J., Liu, Y.: Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472 (2021)", + "63. Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y., Zhou, L., Yuan, L.: BEVT: BERT pretraining of video transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 14713-14723 (2022)", + "64. Wang, Y., He, Y., Li, Y., Li, K., Yu, J., Ma, X., Li, X., Chen, G., Chen, X., Wang, Y., et al.: Intervid: A large-scale video-text dataset for multimodal understanding and generation. In: ICLR (2024)", + "65. Wu, W., Sun, Z., Ouyang, W.: Revisiting classifier: Transferring vision-language models for video recognition. In: AAAI Conf. Artif. Intell. pp. 2847-2855 (2023)", + "66. Xiang, W., Li, C., Wang, B., Wei, X., Hua, X., Zhang, L.: Spatiotemporal self-attention modeling with temporal patch shift for action recognition. In: Eur. Conf. Comput. Vis. vol. 13663, pp. 627-644 (2022)", + "67. 
Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Eur. Conf. Comput. Vis. pp. 305–321 (2018)", + "68. Xu, C., Zhu, Y., Shen, H., Chen, B., Liao, Y., Chen, X., Wang, L.: Progressive visual prompt learning with contrastive feature re-formation. arXiv preprint arXiv:2304.08386 (2023)" + ], + "bbox": [ + 215, + 146, + 785, + 839 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "X. Li et al.", + "bbox": [ + 271, + 114, + 346, + 126 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "69. Xu, C., Zhu, Y., Zhang, G., Shen, H., Liao, Y., Chen, X., Wu, G., Wang, L.: Dpl: Decoupled prompt learning for vision-language models. arXiv preprint arXiv:2308.10061 (2023)", + "70. Yan, S., Xiong, X., Arnab, A., Lu, Z., Zhang, M., Sun, C., Schmid, C.: Multiview transformers for video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3323-3333 (2022)", + "71. Yang, T., Zhu, Y., Xie, Y., Zhang, A., Chen, C., Li, M.: Aim: Adapting image models for efficient video action recognition. In: Int. Conf. Learn. Represent. (2023)", + "72. Zaken, E.B., Goldberg, Y., Ravfogel, S.: Bitfit: Simple parameter-efficient fin-tuning for transformer-based masked language-models. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 1-9 (2022)", + "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1204-1213 (2022)", + "74. Zhang, G., Zhu, Y., Wang, H., Chen, Y., Wu, G., Wang, L.: Extracting motion and appearance via inter-frame attention for efficient video frame interpolation. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)", + "75. Zhang, H., Hao, Y., Ngo, C.: Token shift transformer for video classification. In: ACM Int. Conf. Multimedia. pp. 917-925 (2021)", + "76. Zhang, Y., Zhou, K., Liu, Z.: Neural prompt search. arXiv preprint arXiv:2206.04673 (2022)", + "77. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Eur. Conf. Comput. Vis. vol. 11205, pp. 831-846 (2018)", + "78. Zhu, Y., Ji, Y., Zhao, Z., Wu, G., Wang, L.: Awt: Transferring vision-language models via augmentation, weighting, and transportation. arXiv preprint arXiv:2407.04603 (2024)", + "79. Zhu, Y., Zhang, G., Tan, J., Wu, G., Wang, L.: Dual detrs for multi-label temporal action detection. In: CVPR. pp. 
18559-18569 (2024)" + ], + "bbox": [ + 212, + 146, + 787, + 535 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "ZeroI2V", + "bbox": [ + 674, + 114, + 730, + 126 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 18 + } +] \ No newline at end of file diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_model.json b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e82c272f263770e23bf20a4b40329f524ebaebe4 --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_model.json @@ -0,0 +1,2680 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.239, + 0.141, + 0.768, + 0.187 + ], + "angle": 0, + "content": "ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.212, + 0.692, + 0.229 + ], + "angle": 0, + "content": "Xinhao Li\\(^{1,2}\\), Yuhan Zhu\\(^{1}\\), and Limin Wang\\(^{1,2*}\\)" + }, + { + "type": "text", + "bbox": [ + 0.249, + 0.239, + 0.756, + 0.254 + ], + "angle": 0, + "content": "1 State Key Laboratory for Novel Software Technology, Nanjing University" + }, + { + "type": "text", + "bbox": [ + 0.412, + 0.254, + 0.593, + 0.269 + ], + "angle": 0, + "content": "2 Shanghai AI Laboratory" + }, + { + "type": "text", + "bbox": [ + 0.249, + 0.27, + 0.754, + 0.283 + ], + "angle": 0, + "content": "xinhaoli00@outlook.com zyuhan0812@gmail.com lmwang@nju.edu.cn" + }, + { + "type": "text", + "bbox": [ + 0.368, + 0.283, + 0.636, + 0.296 + ], + "angle": 0, + "content": "https://github.com/MCG-NJU/ZeroI2V" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.336, + 0.744, + 0.683 + ], + "angle": 0, + "content": "Abstract. Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus toward parameter-efficient image-to-video adaptation. However, these adaptation strategies inevitably introduce extra computational costs to deal with the domain gap and temporal modeling in videos. In this paper, we present a new adaptation paradigm (ZeroI2V) to transfer the image transformers to video recognition tasks (i.e., introduce zero extra cost to the original models during inference). To achieve this goal, we present two core designs. First, to capture the dynamics in videos and reduce the difficulty of image-to-video adaptation, we exploit the flexibility of self-attention and introduce spatial-temporal dual-headed attention (STDHA). This approach efficiently endows the image transformers with temporal modeling capability at zero extra parameters and computation. Second, to handle the domain gap between images and videos, we propose a linear adaption strategy that utilizes lightweight densely placed linear adapters to fully transfer the frozen image models to video recognition. Thanks to the customized linear design, all newly added adapters could be easily merged with the original modules through structural reparameterization after training, enabling zero extra cost during inference. 
Extensive experiments on representative fully-supervised and few-shot video recognition benchmarks showcase that ZeroI2V can match or even outperform previous state-of-the-art methods while enjoying superior parameter and inference efficiency." + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.696, + 0.738, + 0.71 + ], + "angle": 0, + "content": "Keywords: Video understanding \\(\\cdot\\) Image-to-video adaptation \\(\\cdot\\) PEFT" + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.738, + 0.378, + 0.753 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.77, + 0.789, + 0.817 + ], + "angle": 0, + "content": "Adapting pre-trained foundation models such as BERT [11] and GPT [5, 52, 53] through efficient strategies has yielded excellent performance on downstream tasks in natural language understanding. This new paradigm is becoming popular in" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.825, + 0.387, + 0.841 + ], + "angle": 0, + "content": "* Corresponding author." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "image", + "bbox": [ + 0.245, + 0.147, + 0.499, + 0.321 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.149, + 0.772, + 0.321 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.335, + 0.788, + 0.405 + ], + "angle": 0, + "content": "Fig. 1: Left: Our proposed image-to-video transfer learning method. Right: Comparison of PETL methods on SSv2 validation set. For a more intuitive comparison, the views of the methods in the figure are all \\(8 \\times 3 \\times 1\\). Two core techniques enable us to achieve superior performance on video tasks without introducing additional computation and parameters during inference." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.441, + 0.788, + 0.548 + ], + "angle": 0, + "content": "computer vision due to the available pre-trained image models such as CLIP [51] and DINO [7, 47]. These models could be easily adapted to downstream tasks through linear probes, fine-tuning, or even zero-shot recognition, exhibiting robustness and strong transfer capabilities similar to those of large-scale language models. Recently, parameter-efficient transfer learning (PETL) [9,23,38,46,48,78] is becoming an efficient paradigm to adapt these large pre-trained models due to their huge numbers of parameters and high computational cost of full fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.55, + 0.789, + 0.747 + ], + "angle": 0, + "content": "For video understanding, there exist several large pre-trained video models [56, 59] from self-supervised learning, but these models are of high computational complexity due to the joint spatiotemporal attentions. Therefore, adapting pretrained image models to the video domain through efficient strategies is still a practical solution to video recognition. In fact, the state-of-the-art video networks have long relied on the pre-trained image models by inflating the kernels [1,8,39,41] or inserting plug-and-play temporal modules [33,37,42,60,61]. However, most of these methods necessitate full fine-tuning, which involves updating all the model parameters during training on video datasets. 
As the scale of pre-trained models increases, full fine-tuning becomes impractical due to the high training costs and the risk of overfitting or even catastrophic forgetting when the downstream data is limited. In addition, these methods often inevitably introduce extra costs to the adapted video models due to these newly added modules." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.789, + 0.841 + ], + "angle": 0, + "content": "In this paper, we aim to present a new efficient paradigm of adapting image transformers to video downstream tasks with two main objectives. First, inspired by the PETL methods in NLP [21,22,26,31] and image understanding [9,23,46], we aim to devise a parameter-efficient transfer technique from image to video, which can effectively reduce the risk of over-fitting and greatly improve the training efficiency. Second, to overcome the issue of high computation in the adapted" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.207 + ], + "angle": 0, + "content": "video models, we try to present a new adaptation method without introducing any extra computations to the final video models during inference. This zero extra inference cost adaptation would allow for more efficient deployment of transferred video models in real applications." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.208, + 0.79, + 0.373 + ], + "angle": 0, + "content": "To achieve the above two objectives, we propose a novel transfer learning method (as shown in Figure 1) that can utilize the off-the-shelf pre-trained image transformers to achieve excellent performance on video tasks without additional parameters and computation during inference. To be specific, for the temporal modeling required for video tasks, we transform multi-head self-attention into spatio-temporal dual-head attention (STDHA) by reassigning some heads to achieve temporal modeling at zero computation and zero parameters. For image-to-video transfer, we explore the strategy of using linear adapters to fully adapt the parameters of each part of the model and merge them with the frozen original parameters through structural reparameterization after training, thus achieving zero extra cost during inference." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.375, + 0.79, + 0.588 + ], + "angle": 0, + "content": "To summarize, we make the following contributions: 1) We propose a new approach for parameter-efficient image-to-video transfer learning that can achieve the efficient adaptation of transformers from image to video without introducing additional computation and parameters during inference. 2) We introduce a novel attention mechanism named Spatial-Temporal Dual-Headed Attention (STDHA), which utilizes the flexibility of self-attention to achieve temporal modeling without introducing extra computation and parameters. 3) To the best of our knowledge, we are the first to investigate the achievement of zero extra inference cost image-to-video adaptation through the utilization of a linear structure. We establish an empirical study by conducting extensive experiments with a diverse range of adaptation strategies. 
4) Our method achieves comparable or even better performance than state-of-the-art methods on popular fully-supervised and few-shot video recognition benchmarks while enjoying the advantage of parameter and inference efficiency." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.61, + 0.383, + 0.626 + ], + "angle": 0, + "content": "2 Related work" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.643, + 0.788, + 0.763 + ], + "angle": 0, + "content": "Pre-trained image transformers The powerful scalability of ViT [12] brings more possibilities to the pre-trained image model. In addition to the traditional supervised approach [12,40,73], recent works [3,7,18,19,47] utilize self-supervised learning to effectively learn representations from unlabeled data. Moreover, several works [10,27,51,57] adopt large-scale multi-modal data (e.g., text-image pairs) to learn visual representations with great transferability. Our proposed adaptation strategy can leverage these off-the-shelf pre-trained image transformers to achieve outstanding performance on video tasks." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Video action recognition is crucial for downstream tasks [55, 79]. Traditionally, state-of-the-art methods have long relied on image models. Previous works for action recognition can be classified into two categories: one is to extend the image model for spatial-temporal modeling by inflating weights and structures [8, 13-15, 28, 34, 41], while the other is to directly utilize the image model as the" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.268 + ], + "angle": 0, + "content": "backbone and insert plug-and-play modules for temporal modeling [37, 42, 60, 61, 77]. Following the success of new training paradigms in image understanding, several works have attempted to learn transferable video representations via self-supervised learning [43, 56, 59, 63] or multi-modal video-text pre-training [29, 30, 45, 62]. However, the above methods usually require full fine-tuning of the entire model or training from scratch, resulting in high training costs and additional computational overhead. In this work, we avoid the above problems by adapting the pre-trained image transformers to video tasks in an efficient manner." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.269, + 0.789, + 0.542 + ], + "angle": 0, + "content": "Parameter-efficient transfer learning To address the issue of training inefficiency caused by the continuous growth of model size, Parameter-efficient transfer learning (PETL) is initially introduced in NLP [21, 22, 26, 31, 49, 50, 72] and subsequently applied to vision tasks [9, 20, 23, 36, 46, 68, 69, 78]. These techniques aim to achieve comparable or even superior performance on other tasks by fine-tuning only a small subset of trainable parameters. Most PETL methods [9, 20, 23, 36, 46, 76, 78] in vision domain are limited to transfer within the same modality (e.g., image-to-image or video-to-video). In contrast, our research focuses on image-to-video transfer learning. Despite progress made by recent studies [38, 48, 71], these methods require additional computation and parameters for temporal modeling of video tasks and image-to-video adaptation. 
For example, AVL [38] incorporates an additional temporal transformer decoder, while ST-Adapter [48] introduces additional adapters with depth-wise 3D convolution layers. Similarly, AIM [71] adds extra adapters and necessitates an additional time attention calculation at each block. In contrast to previous works, our proposed method eschews the introduction of additional computation or parameters during inference, yet still achieves comparable or superior performance compared to previous methods." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.566, + 0.381, + 0.584 + ], + "angle": 0, + "content": "3 Methodology" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.6, + 0.788, + 0.677 + ], + "angle": 0, + "content": "In this section, we first briefly revisit the basic block of ViT (Sec. 3.1), and then discuss how to utilize the flexibility of self-attention to achieve temporal modeling without introducing additional computation and parameters (Sec. 3.2). Finally, we explain how we implement zero-cost image-to-video adaptation with a serial linear structure (Sec. 3.3)." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.701, + 0.363, + 0.716 + ], + "angle": 0, + "content": "3.1 Preliminary" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.728, + 0.789, + 0.773 + ], + "angle": 0, + "content": "The original ViT [12] block consists of two network layers: multi-head self-attention (MHSA) and multi-layer perceptron (MLP). As shown in Figure 1, a ViT block consists of MHSA and MLP connected in series in a residual structure:" + }, + { + "type": "equation", + "bbox": [ + 0.419, + 0.803, + 0.786, + 0.818 + ], + "angle": 0, + "content": "\\[\nz _ {l} = x _ {l} + \\operatorname {M H S A} (\\ln (x _ {l})), \\tag {1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.4, + 0.821, + 0.787, + 0.837 + ], + "angle": 0, + "content": "\\[\nx _ {l + 1} = z _ {l} + \\operatorname {M L P} (\\ln (z _ {l})), \\tag {2}\n\\]" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.18, + 0.484, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.226, + 0.302, + 0.501, + 0.317 + ], + "angle": 0, + "content": "(a) Layer merging via reparameterization" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.148, + 0.756, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.302, + 0.803, + 0.316 + ], + "angle": 0, + "content": "(b) Spatial-temporal dual-headed attention" + }, + { + "type": "image_caption", + "bbox": [ + 0.247, + 0.323, + 0.753, + 0.337 + ], + "angle": 0, + "content": "Fig. 2: Illustration of the proposed linear adaptation and STDHA." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.367, + 0.787, + 0.412 + ], + "angle": 0, + "content": "where LN denotes layer normalization [2] and \\( x_{l} \\) represents the input to the \\( l \\)-th ViT block. We review their specific implementation details. For the sake of simplicity, we ignore the bias and denote \\( X \\in \\mathbb{R}^{n \\times d} \\) as input of MHSA and MLP." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.412, + 0.787, + 0.502 + ], + "angle": 0, + "content": "MHSA first performs three different linear projections \\( W_{\\mathrm{attn}}^{Q}, W_{\\mathrm{attn}}^{K}, W_{\\mathrm{attn}}^{V} \\in \\mathbb{R}^{d \\times d} \\) on the input \\( X \\) to obtain the query \\( Q \\) and key-value pairs \\( K, V \\). These are then evenly divided into \\( h \\) heads by channel. Each head independently performs the scaled dot-product attention calculation. Finally, the heads are concatenated by channel and then a linear projection \\( W_{\\mathrm{attn}}^{O} \\in \\mathbb{R}^{d \\times d} \\) is performed to obtain the final calculation result:" + }, + { + "type": "equation", + "bbox": [ + 0.355, + 0.513, + 0.787, + 0.532 + ], + "angle": 0, + "content": "\\[\nQ, K, V = X W _ {\\mathrm {a t t n}} ^ {Q}, X W _ {\\mathrm {a t t n}} ^ {K}, X W _ {\\mathrm {a t t n}} ^ {V}, \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.372, + 0.535, + 0.787, + 0.551 + ], + "angle": 0, + "content": "\\[\n\\operatorname {h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i}, K _ {i}, V _ {i}\\right), \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.335, + 0.554, + 0.787, + 0.573 + ], + "angle": 0, + "content": "\\[\n\\operatorname {M H S A} (X) = \\operatorname {C o n c a t} \\left(\\operatorname {h e a d} _ {1}, \\dots , \\operatorname {h e a d} _ {h}\\right) W _ {\\mathrm {a t t n}} ^ {O}. \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.584, + 0.787, + 0.615 + ], + "angle": 0, + "content": "MLP involves two linear projections \\( W_{\\mathrm{mlp}}^{\\mathrm{up}} \\in \\mathbb{R}^{d \\times d'} \\), \\( W_{\\mathrm{mlp}}^{\\mathrm{down}} \\in \\mathbb{R}^{d' \\times d} \\), \\( d' > d \\) and one non-linear activation function \\( \\sigma \\):" + }, + { + "type": "equation", + "bbox": [ + 0.391, + 0.627, + 0.787, + 0.647 + ], + "angle": 0, + "content": "\\[\n\\operatorname {M L P} (X) = \\sigma \\left(X W _ {\\mathrm {m l p}} ^ {\\mathrm {u p}}\\right) W _ {\\mathrm {m l p}} ^ {\\mathrm {d o w n}}. \\tag {6}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.679, + 0.506, + 0.695 + ], + "angle": 0, + "content": "3.2 Zero-Cost temporal modeling" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Applying image models to video tasks often requires the incorporation of additional modules for temporal modeling, which not only introduces additional parameters and computation, but also results in additional training costs. In this work, we address temporal modeling from three key perspectives: (1) Capability of capturing the temporal dynamics. (2) Reducing the difficulty of image-to-video adaptation. (3) Minimizing the introduction of additional computation and parameters compared to the original model. [44] suggests that most heads are redundant given the rest of the model. Inspired by this, we attempt to reassign some heads as temporal heads in the multi-head attention to perform temporal" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.128 + ], + "angle": 0, + "content": "X. Li et al." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.177 + ], + "angle": 0, + "content": "modeling tasks, while the remaining heads continue to perform spatial modeling tasks as spatial heads, thereby achieving efficient spatial-temporal modeling." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.314 + ], + "angle": 0, + "content": "Spatial-temporal dual-headed attention (STDHA) As shown in Figure 2b, consider an input sequence \\( X = \\{x_{1}, x_{2}, \\dots, x_{T}\\} \\) where \\( x_{t} \\in \\mathbb{R}^{n \\times d} \\). Let the query and key-value pairs obtained after the linear projection of the \\( x_{t} \\) be \\( Q^{t}, K^{t}, V^{t} \\in \\mathbb{R}^{n \\times d} \\). We divide the \\( h \\) heads of the MHSA into two groups of size \\( h - k \\) and \\( k \\). One group of heads queries the key-value pairs at the current time \\( t \\) to perform spatial modeling, while the other group of heads queries the key-value pairs at other times \\( t + \\Delta t_{i} \\) to perform temporal modeling. Finally, the information from the two groups of heads is aggregated by a linear projection to perform spatial-temporal modeling:" + }, + { + "type": "equation", + "bbox": [ + 0.258, + 0.324, + 0.788, + 0.342 + ], + "angle": 0, + "content": "\\[\n\\text {S - h e a d} _ {i} = \\text {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t}, V _ {i} ^ {t}\\right), \\tag {7}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.255, + 0.344, + 0.787, + 0.363 + ], + "angle": 0, + "content": "\\[\n\\text {T - h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t + \\Delta t _ {i}}, V _ {i} ^ {t + \\Delta t _ {i}}\\right) (\\Delta t _ {i} \\neq 0), \\tag {8}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.225, + 0.365, + 0.787, + 0.384 + ], + "angle": 0, + "content": "\\[\n\\operatorname {S T D H A} (X) = \\operatorname {C o n c a t} (\\mathrm {T} - \\text {h e a d} _ {1}, \\dots , \\mathrm {T} - \\text {h e a d} _ {k}, \\mathrm {S} - \\text {h e a d} _ {k + 1} \\dots \\mathrm {S} - \\text {h e a d} _ {h}) W _ {\\text {a t t n}} ^ {O}, \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.394, + 0.784, + 0.469 + ], + "angle": 0, + "content": "where \\(\\Delta t_{i}\\) represents the time offset of the key-value pair of the \\(i\\)-th head. We did not directly use temporal attention or temporal convolution for the temporal modeling like previous works [38, 48, 71]. Instead, we design a more efficient spatiotemporal modeling operator by decoupling spatial modeling and temporal modeling to different heads:" + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.481, + 0.785, + 0.526 + ], + "angle": 0, + "content": "- For the spatial head, it still only needs to complete the spatial modeling task as the original image transformer, which reduces the difficulty of achieving image-to-video adaptation." + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.527, + 0.785, + 0.632 + ], + "angle": 0, + "content": "- For the temporal head, it actually implements the inter-frame attention mechanism with frames at different times. [74] have demonstrated the effectiveness of an inter-frame attention mechanism for modeling motion information, which is crucial for action recognition tasks. In addition, as shown in Table 1c, we can achieve both short-distance and long-distance modeling by controlling the \\(\\Delta t_{i}\\) of the temporal head, which enables us to achieve enhanced temporal modeling capabilities." 
+ }, + { + "type": "list", + "bbox": [ + 0.226, + 0.481, + 0.785, + 0.632 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Comparison with other zero-cost operators There have been several previous attempts [6, 66, 75] to use image transformers to achieve efficient temporal modeling at zero parameters and zero computation. For example, [6] achieves approximations to full space-time attention by mixing tokens from adjacent frames. [75] performs temporal modeling by using channel shift on thecls tokens of different frames. [66] mixes information from adjacent frames using temporal patch shift and temporal channel shift before MHSA. However, these methods do not take advantage of the inherent characteristics of the transformer structure. By decoupling the learning of spatial and temporal information with head relocation, STDHA maintains the purity of key-value pair information within the same head, thereby achieving better spatial-temporal information learning than other zero-cost temporal modules. And STDHA simultaneously captures both short-range and long-range dependencies, rather than being limited to" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "7" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.178 + ], + "angle": 0, + "content": "adjacent frames. As shown in Table 1, these two key distinctions enable our STDHA to achieve superior spatial-temporal modeling." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.197, + 0.7, + 0.213 + ], + "angle": 0, + "content": "3.3 Zero Extra Inference Cost image-to-video adaptation" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.219, + 0.784, + 0.308 + ], + "angle": 0, + "content": "Inspired by LoRA [22], we can fine-tune the model using a linear structure and then merge it with the original model during inference. However, to deal with the domain gap between images and videos, previous works [38,48,71] often use nonlinear structures to achieve stronger transfer capabilities. Therefore, we need to further consider how to achieve effective image-to-video transfer using only a linear structure." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.31, + 0.784, + 0.37 + ], + "angle": 0, + "content": "Layer merging via structural reparameterization Let \\( W_{\\mathrm{old}} \\) represent the frozen weights of the original model, and \\( W_{\\mathrm{new}} \\) represent the new trainable weights. Reviewing the structure of LoRA, it uses a low-rank decomposition matrix \\( W_{\\mathrm{LoRA}} \\) parallel to the original weights:" + }, + { + "type": "equation", + "bbox": [ + 0.342, + 0.379, + 0.786, + 0.395 + ], + "angle": 0, + "content": "\\[\nW _ {\\text {n e w}} = W _ {\\text {L o R A}} + W _ {\\text {o l d}} = W _ {\\text {u p}} W _ {\\text {d o w n}} + W _ {\\text {o l d}}. \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.403, + 0.784, + 0.448 + ], + "angle": 0, + "content": "In this work, we use a serial linear structure called Linear Adapter to fine-tune the original parameters. 
As shown in Figure 2a, we use structural reparameterization to perform layer merging after training:" + }, + { + "type": "equation", + "bbox": [ + 0.335, + 0.457, + 0.786, + 0.473 + ], + "angle": 0, + "content": "\\[\nW _ {\\text {n e w}} = W _ {\\text {A d a p t e r}} W _ {\\text {o l d}} = \\left(I + W _ {\\text {u p}} W _ {\\text {d o w n}}\\right) W _ {\\text {o l d}}, \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.481, + 0.784, + 0.572 + ], + "angle": 0, + "content": "where \\(I\\) is the identity matrix, \\(W_{\\mathrm{up}} \\in \\mathbb{R}^{m \\times k}\\), \\(W_{\\mathrm{down}} \\in \\mathbb{R}^{k \\times n}\\), bottleneck width \\(k \\ll \\min(m, n)\\). As seen in Table 2, compared to parallel structures, serial structures can be more flexibly inserted into the network structure (e.g., for non-square matrices, under the same bottleneck dimension, using LoRA requires a larger number of parameters compared to Linear Adapter), which endows it with better transfer capabilities." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.573, + 0.785, + 0.739 + ], + "angle": 0, + "content": "Full adaptation with densely placed linear adapters By observing the structure of MHSA and MLP, we can see that all their trainable parameters concentrate on the linear projections at both ends of the structure. Therefore, fine-tuning the model essentially updates these linear projections. Previous works [48, 71] often selectively tune part of the parameters (e.g., placing only an adapter before MHSA) instead of tuning all parameters to avoid excessive additional computational and parameter costs, while we can achieve zero-cost full adaptation by tuning all parameters through wrapping MHSA and MLP with linear adapters. Table 2 shows that full adaptation enables us to achieve excellent image-to-video transfer performance with a linear structure, compensating for the performance degradation caused by the removal of nonlinearity." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.76, + 0.377, + 0.777 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.789, + 0.418, + 0.804 + ], + "angle": 0, + "content": "4.1 Experiments setup" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.784, + 0.841 + ], + "angle": 0, + "content": "We evaluate our method on five widely-used video recognition benchmarks: two large-scale datasets, namely Kinetics-400 (K400) [8] and Something-Something V2" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.149, + 0.788, + 0.233 + ], + "angle": 0, + "content": "Table 1: Ablation study on STDHA. Most of the symbols in the table have been declared in the methodology section 3. (a) \\( R_{c} \\) denotes channel change ratio, \"Shift\" refers to temporal channel shift, while \"HR\" denotes head relocation as used by STDHA. (b) We use a multiset to represent the time offsets of different heads (e.g., \"1·2\" means that there are 2 heads with \\( \\Delta t = 1 \\)). When \\( \\Delta t = 0 \\), it represents a spatial head. (c) \"Temporal RF\" refers to the temporal receptive field of a single STDHA." + }, + { + "type": "table", + "bbox": [ + 0.237, + 0.246, + 0.479, + 0.341 + ], + "angle": 0, + "content": "
Rc | Method | Top-1
1/6 | [cls] token shift | 61.4
| Shift QKV | 64.5
| Shift KV | 64.6
| HR QKV | 64.8
| HR KV (STDHA) | 66.0
1/4 | Shift KV | 64.0
| HR KV (STDHA) | 65.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.235, + 0.343, + 0.476, + 0.355 + ], + "angle": 0, + "content": "(a) Compare temporal modeling methods" + }, + { + "type": "table", + "bbox": [ + 0.498, + 0.245, + 0.783, + 0.343 + ], + "angle": 0, + "content": "
Backbone | Δt of heads | k | Top-1
ViT-B (h=12) | {1·1/2, -1·1/2, 0·11} | 1 | 64.8
| {1·1, -1·1, 0·10} | 2 | 66.0
| {1·2, -1·2, 0·8} | 4 | 65.6
| {1·3, -1·3, 0·6} | 6 | 65.6
ViT-L (h=16) | {1·1, -1·1, 0·14} | 2 | 67.7
| {1·2, -1·2, 0·12} | 4 | 68.5
| {1·3, -1·3, 0·10} | 6 | 68.3
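For reference, the multiset notation used here and in Table 1c ("1·2" meaning two heads with Δt = 1) maps directly onto the per-head offset list that a head-relocation implementation such as the sketch above would consume; a tiny helper, assuming a hypothetical {offset: count} dictionary spelling of the multiset:

```python
def expand_offsets(multiset: dict[int, int]) -> list[int]:
    """Turn {1: 1, -1: 1, 0: 10} (written "{1·1, -1·1, 0·10}" in Table 1b) into
    one time offset per attention head; offset 0 means a spatial head."""
    return [dt for dt, count in multiset.items() for _ in range(count)]

offsets = expand_offsets({1: 1, -1: 1, 0: 10})   # ViT-B, h = 12, k = 2 temporal heads
assert len(offsets) == 12 and sum(dt != 0 for dt in offsets) == 2
```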
" + }, + { + "type": "table_caption", + "bbox": [ + 0.521, + 0.345, + 0.753, + 0.357 + ], + "angle": 0, + "content": "(b) Effect of the temporal head number" + }, + { + "type": "table", + "bbox": [ + 0.278, + 0.368, + 0.731, + 0.54 + ], + "angle": 0, + "content": "
Frames | Δt of heads | Temporal RF | Top-1
8 | {1·1,0·11} | 2 | 64.7
| {1·1,-1·1,0·10} | 3 | 66.0
| {1·1,-1·1,2·1,0·9} | 4 | 65.5
| {1·1,-1·1,2·1,-2·1,0·8} | 5 | 65.7
16 | {1·1,-1·1,0·10} | 3 | 67.2
| {1·1,-1·1,2·1,0·9} | 4 | 67.3
| {1·1,-1·1,2·1,-2·1,0·8} | 5 | 67.8
| {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 67.6
| {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 67.3
32 | {1·1,-1·1,0·10} | 3 | 67.3
| {1·1,-1·1,2·1,0·9} | 4 | 67.8
| {1·1,-1·1,2·1,-2·1,0·8} | 5 | 68.5
| {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 68.6
| {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 68.4
| {1·1,-1·1,2·1,-2·1,3·1,-3·1,4·1,0·5} | 8 | 68.2
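The "Temporal RF" column is simple bookkeeping over the chosen offsets: a single STDHA layer lets a token attend to every frame from t + min(Δt) to t + max(Δt), with 0 counting the current frame. A quick sanity check against the rows above (my own helper, not from the paper):

```python
def temporal_receptive_field(offsets: list[int]) -> int:
    """Number of distinct frames one STDHA layer can see, current frame included."""
    return max(offsets) - min(offsets) + 1

assert temporal_receptive_field([1, 0]) == 2                  # {1·1, 0·11}
assert temporal_receptive_field([1, -1, 0]) == 3              # {1·1, -1·1, 0·10}
assert temporal_receptive_field([1, -1, 2, -2, 3, 0]) == 6    # {1·1, -1·1, 2·1, -2·1, 3·1, 0·7}
```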
" + }, + { + "type": "table_caption", + "bbox": [ + 0.307, + 0.542, + 0.694, + 0.554 + ], + "angle": 0, + "content": "(c) Effect of the temporal receptive field at different input lengths." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.595, + 0.788, + 0.687 + ], + "angle": 0, + "content": "(SSv2) [16], in addition to three smaller-scale datasets, UCF101 [54], HMDB51 [25] and Diving48 [35]. We also evaluate our method on action detection dataset AVA [17]. This diverse dataset selection allows for a comprehensive evaluation of our model across various scales and domains. The specific model configuration and training strategy can be found in the supplementary. For most main experiments, we use ViT-B and ViT-L pre-trained by CLIP [51] as our backbone models." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.709, + 0.387, + 0.725 + ], + "angle": 0, + "content": "4.2 Ablation study" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.734, + 0.788, + 0.794 + ], + "angle": 0, + "content": "To validate the effectiveness of our method on image-to-video transfer and temporal modeling, we first conduct ablation experiments on the SSv2 dataset. All ablation experiments were performed using ViT-B/16 with 8 input frames unless specified." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.795, + 0.788, + 0.842 + ], + "angle": 0, + "content": "Effectiveness of STDHA Table 1a compares STDHA with other zero-cost temporal modeling methods. The [cls] token shift is implemented according to the original paper [75], with [cls] token shift performed before MHSA and MLP." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.149, + 0.788, + 0.233 + ], + "angle": 0, + "content": "Table 2: Comparison of adaption strategies. \"Width\" refers to the bottleneck width of LoRA/Adapter. \"Tunable Params\" refers to extra trainable parameters besides the parameters of the ViT backbone and linear classifier. \"\\(\\checkmark\\)\" and \"\\(\\times\\)\" indicate whether the corresponding weights have undergone fine-tuning, and \"\\(\\checkmark\\)\" indicates that \\(W_{\\mathrm{attn}}^{Q}\\), \\(W_{\\mathrm{attn}}^{K}\\) and \\(W_{\\mathrm{attn}}^{V}\\) share the same adapter. \"Latency\" refers to inference latency with 3 samples. All results are obtained using the same V100-32G with PyTorch-built mixed precision." + }, + { + "type": "table", + "bbox": [ + 0.221, + 0.244, + 0.784, + 0.523 + ], + "angle": 0, + "content": "
Method | Weights of ViT block | Tunable Params (M) | Bottleneck Width | Latency (ms) | SSv2 Top-1
| \(W_{\mathrm{attn}}^{Q}\) | \(W_{\mathrm{attn}}^{K}\) | \(W_{\mathrm{attn}}^{V}\) | \(W_{\mathrm{attn}}^{O}\) | \(W_{\mathrm{mlp}}^{\mathrm{up}}\) | \(W_{\mathrm{mlp}}^{\mathrm{down}}\)
Full Fine-tuning | | 86 | - | 28.9 | 63.2
Linear Probe | XXXXXX | 0 | - | 28.9 | 20.0
Only tuning temporal head | XX | 4.6 | - | 28.9 | 59.6
ST-Adapter [48] | | 14 | 192 | 41.0 | 66.2
| XX | 14 | 384 | 38.8 | 65.8
LoRA [22] | XXXX | 7 | 192 | | 64.2
| XX | 14 | 192 | | 65.0
| XX | 25 | 192 | | 64.3
| XX | 17 | 128 | 28.9 | 65.6
| | 32 | 192 | | 65.0
| | 21 | 128 | | 65.5
Adapter w/ GELU | | 7 | 96 | 37.3 | 65.6
| XX | 7 | 192 | 34.9 | 64.6
| X | 10 | 192 | 36.3 | 66.1
| | 14 | 192 | 38.4 | 66.1
Linear Adapter (Ours) | | 7 | 96 | | 65.0
| XX | 7 | 192 | | 64.4
| X | 10 | 192 | 28.9 | 65.2
| | 14 | 192 | | 66.0
| | 20 | 192 | | 66.3
| | 14 | 128 | | 66.2
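The zero-inference-cost behaviour behind the Linear Adapter rows above comes from folding the trained adapter into the frozen projection after training, following Eqs. (10) and (11). Below is a minimal sketch of the two merge rules under my own variable names; biases and the surrounding block structure are omitted, and this is not the authors' code.

```python
import torch

def merge_lora(w_old: torch.Tensor, w_up: torch.Tensor, w_down: torch.Tensor) -> torch.Tensor:
    """Parallel low-rank branch, Eq. (10): W_new = W_up @ W_down + W_old."""
    return w_up @ w_down + w_old

def merge_linear_adapter(w_old: torch.Tensor, w_up: torch.Tensor, w_down: torch.Tensor) -> torch.Tensor:
    """Serial linear adapter, Eq. (11): W_new = (I + W_up @ W_down) @ W_old."""
    eye = torch.eye(w_old.shape[0], dtype=w_old.dtype)
    return (eye + w_up @ w_down) @ w_old

# one m x n projection with bottleneck width k << min(m, n)
m, n, k = 3072, 768, 64                                # e.g. the MLP up-projection of ViT-B
w_old = torch.randn(m, n)                              # frozen pre-trained weight
w_up, w_down = torch.zeros(m, k), torch.randn(k, m)    # zero-init W_up: training starts at identity
merged = merge_linear_adapter(w_old, w_up, w_down)
assert merged.shape == w_old.shape and torch.allclose(merged, w_old)
```

After merging, the deployed projection keeps exactly its original shape, which is why the ZeroI2V rows in Tables 3 and 4 report zero extra GFLOPs and zero new parameters.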
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.576, + 0.788, + 0.637 + ], + "angle": 0, + "content": "The temporal channel shift operation refers to TPS [66], which shifts a portion of the channels for each head. It can be seen that STDHA significantly outperforms other methods at the same channel change ratio, demonstrating the importance of preserving the purity of information within each head." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.648, + 0.788, + 0.77 + ], + "angle": 0, + "content": "Effect of the number of temporal heads and temporal receptive field We examined the influence of the number of temporal heads and the temporal receptive field in ViT-B and ViT-L. Our findings, detailed in Tables 1b and 1c, suggest that the optimal proportion of temporal heads in ViT lies between \\(1/6\\) and \\(1/4\\). For the temporal receptive field, our results indicate that for 8-frame inputs, a field of 3 is sufficient, while for longer inputs (16/32 frames), performance improves with an increase in the field from 3, saturating at around 5 or 6. Hence, we employ different STDHA configurations based on input length." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Comparison of adaptation strategies In Table 2, we compare the image-to-video transfer ability of our method with a diverse range of adaptation methods. For a fair comparison, we all use STDHA with the same setting to provide temporal modeling capabilities. From the results, we can observe that:" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.149, + 0.788, + 0.219 + ], + "angle": 0, + "content": "Table 3: Results on Kinetics-400 validation set. Views = #frames × #spatial crops × #temporal clips. \"GFLOPs\" means \\(10^{9}\\) FLOPs, \"M\" means \\(10^{6}\\). \"Extra GLOPs\" refers to the extra computation added to the original ViT under the same number of views. \"New Params\" refers to additional parameters during inference besides the parameters of the original ViT backbone and linear classifier." + }, + { + "type": "table", + "bbox": [ + 0.248, + 0.231, + 0.756, + 0.505 + ], + "angle": 0, + "content": "
Methods | Pretrain | Views | GFLOPs | Extra GFLOPs | Param (M) | New Param (M) | Top-1 | Top-5
Methods with full fine-tuning
UniFormer-B [28] | IN1K | 32×3×4 | 3108 | - | 50 | - | 83.0 | 95.4
TimeSformer-L [4] | IN21K | 96×3×1 | 7140 | - | 121 | - | 80.7 | 94.7
VideoSwin-L [41] | IN21K | 32×3×4 | 7248 | - | 197 | - | 83.1 | 95.9
MViTv2-L(↑312) [34] | IN21K | 40×5×3 | 42420 | - | 218 | - | 86.1 | 97.0
ViViT-L/16x2 FE [1] | JFT | 32×3×1 | 11940 | - | 311 | - | 83.5 | 94.3
MTV-L [70] | JFT | 32×3×4 | 18050 | - | 876 | - | 84.3 | 96.3
ViT-B/16 [48] | CLIP | 8×1×3 | 422 | 0 | 86 | 0 | 81.0 | 95.5
ActionCLIP-B/16 [62] | CLIP | 32×3×10 | 16893 | 13 | 142 | 56 | 83.8 | 97.1
X-CLIP ViT-L/14 [45] | CLIP | 8×3×4 | 7896 | 107 | 420 | 116 | 87.1 | 97.6
Text4Vis ViT-L/14 [65] | CLIP | 32×3×4 | 19944 | - | 347 | 43 | 87.1 | 97.4
Methods with PETL
VideoPrompt ViT-B/16 [24] | CLIP | 16×5×1 | - | - | - | - | 76.9 | 93.5
ST-Adapter ViT-B/16 [48] | IN21K | 8×1×3 | 455 | 33 | 93 | 7 | 76.6 | -
ST-Adapter ViT-L/14 [48]CLIP32×1×382483221987.297.6
EVL ViT-B/16 [38] | IN21K | 8×1×3 | 454 | 32 | 115 | 29 | 75.4 | -
EVL ViT-L/14 [38] | CLIP | 8×1×3 | 2022 | 76 | 362 | 58 | 86.3 | -
AIM ViT-B/16 [71] | IN21K | 8×1×3 | 624 | 202 | 100 | 14 | 78.8 | -
AIM ViT-L/14 [71] | CLIP | 32×1×3 | 11208 | 3425 | 341 | 38 | 87.5 | 97.7
ZeroI2V ViT-B/16 | IN21K | 8×1×3 | 422 | 0 | 86 | 0 | 78.6 | -
ZeroI2V ViT-B/16 | CLIP | 8×1×3 | 422 | 0 | 86 | 0 | 83.0 | 95.8
ZeroI2V ViT-B/16 | CLIP | 16×1×3 | 844 | 0 | 86 | 0 | 83.4 | 96.2
ZeroI2V ViT-B/16 | CLIP | 32×1×3 | 1688 | 0 | 86 | 0 | 83.7 | 96.4
ZeroI2V ViT-L/14 | CLIP | 8×1×3 | 1946 | 0 | 304 | 0 | 86.3 | 97.4
ZeroI2V ViT-L/14 | CLIP | 16×1×3 | 3892 | 0 | 304 | 0 | 86.8 | 97.6
ZeroI2V ViT-L/14 | CLIP | 32×1×3 | 7783 | 0 | 304 | 0 | 87.2 | 97.6
" + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.544, + 0.787, + 0.59 + ], + "angle": 0, + "content": "- Even with minimal parameters being fine-tuned, our Linear Adapter significantly outperforms full fine-tuning (66.3 vs 63.2). Despite updating the fewest parameters, the linear probe performs poorly in image-to-video transfer." + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.595, + 0.787, + 0.656 + ], + "angle": 0, + "content": "- Tuning only the temporal head achieves about \\(95\\%\\) of the full fine-tuning performance, suggesting that extensive fine-tuning of the spatial head may not be necessary to attain satisfactory transfer performance due to the decoupling of spatial and temporal modeling reduces the difficulty of adaptation." + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.66, + 0.787, + 0.721 + ], + "angle": 0, + "content": "- Our Full Adaptation strategy is not only effective for linear adapters, but also for non-linear adapters such as the ST-Adapter and GELU Adapter. It not only enhances their adaptation performance, but also eliminates the performance gap between linear and non-linear structures." + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.727, + 0.787, + 0.817 + ], + "angle": 0, + "content": "- Due to the inflexibility of the parallel structure, for non-square matrices like \\( W_{\\mathrm{mlp}} \\), LoRA requires more parameters under the same bottleneck width. It needs to decrease the bottleneck width of the low-rank matrix to align it with the number of parameters of the linear adapter. However, this reduction in bottleneck width can limit its adaptation ability, ultimately leading to results that are significantly worse than those of the Linear Adapter." + }, + { + "type": "list", + "bbox": [ + 0.225, + 0.544, + 0.787, + 0.817 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.149, + 0.788, + 0.19 + ], + "angle": 0, + "content": "Table 4: Results on Something-Something v2 validation set. \\(\\dagger\\) indicates that the model is pre-trained on both IN21K (except for Uniformer [28] which uses IN1K) and K400/K600. Other notations are the same as Table 3." + }, + { + "type": "table", + "bbox": [ + 0.246, + 0.204, + 0.756, + 0.468 + ], + "angle": 0, + "content": "
Methods | Pretrain | Views | GFLOPs | Extra GFLOPs | Param (M) | New Param (M) | Top-1 | Top-5
Methods with full fine-tuning
TimeSformer-L [4] | IN21K | 64×3×1 | 7140 | - | 121 | - | 62.4 | -
ViViT-L [1] | K400† | 16×3×4 | 11892 | - | 311 | - | 65.4 | 89.8
MTV-B(↑320) [70] | K400† | 32×3×4 | 11160 | - | 310 | - | 68.5 | 90.4
VideoSwin-B [41] | K400† | 32×3×1 | 963 | - | 89 | - | 69.6 | 92.7
MViTv2-L(↑312) [34] | K400† | 40×3×1 | 8484 | - | 213 | - | 73.3 | 94.1
UniFormer-B [28] | K600† | 32×3×1 | 777 | - | 50 | - | 71.2 | 92.8
ViT-L/14 [12] | CLIP | 8×3×1 | 1946 | 0 | 304 | 0 | 48.7 | 77.5
ILA ViT-L/14 [58] | CLIP | 8×3×4 | 10884 | 3100 | 529 | 225 | 67.8 | 90.5
Methods with PETL
ST-Adapter ViT-B/16 [48] | IN21K | 8×3×1 | 455 | 33 | 93 | 7 | 62.8 | -
ST-Adapter ViT-B/16 [48] | CLIP | 32×3×1 | 1955 | 267 | 100 | 14 | 69.5 | 92.6
EVL ViT-L/14 [38] | CLIP | 32×3×1 | 9641 | 1858 | 479 | 175 | 66.7 | -
AIM ViT-B/16 | IN21K | 8×3×1 | 624 | 202 | 100 | 14 | 62.0 | -
AIM ViT-L/14 [71] | CLIP | 32×3×1 | 11508 | 3725 | 354 | 50 | 70.6 | 92.7
ZeroI2V ViT-B/16 | IN21K | 8×3×1 | 422 | 0 | 86 | 0 | 65.3 | -
ZeroI2V ViT-B/16 | CLIP | 8×3×1 | 422 | 0 | 86 | 0 | 67.7 | 90.8
ZeroI2V ViT-B/16 | CLIP | 16×3×1 | 844 | 0 | 86 | 0 | 69.4 | 91.7
ZeroI2V ViT-B/16 | CLIP | 32×3×1 | 1688 | 0 | 86 | 0 | 70.1 | 92.4
ZeroI2V ViT-L/14 | CLIP | 8×3×1 | 1946 | 0 | 304 | 0 | 70.1 | 91.8
ZeroI2V ViT-L/14 | CLIP | 16×3×1 | 3892 | 0 | 304 | 0 | 71.4 | 93.0
ZeroI2V ViT-L/14 | CLIP | 32×3×1 | 7783 | 0 | 304 | 0 | 72.2 | 93.0
" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.497, + 0.506, + 0.511 + ], + "angle": 0, + "content": "4.3 Fully-supervised Experiments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.523, + 0.788, + 0.765 + ], + "angle": 0, + "content": "Results on K400 As shown in Table 3, our method has significant advantages over traditional full fine-tuning methods, achieving better performance with much lower computational cost. For example, our ZeroI2V ViT-L/14 with an input of 8 frames outperforms MViTv2 [34] (86.3 vs 86.1), while requiring more than 20 times fewer GFLOPs (1946 vs 42420). Compared to multi-modal methods such as ActionCLIP [62] and X-CLIP [45], which require an additional text branch and fine-tune the entire model end-to-end, our ZeroI2V can achieve comparable performance using only the visual encoder. Moreover, although our proposed ZeroI2V doesn't increase computational or parameter costs during inference compared with the previous PETL method, it can still achieve similar or even better performance. For example, on ViT-B/16, ZeroI2V with an input of 8 frames can surpass ST-Adapter [48] with an input of 32 frames (83.0 vs 82.7) with much lower GFLOPs (422 vs 1821). On ViT-L/14, ZeroI2V achieves the same performance as EVL [38], which requires an additional 58M parameters. And ZeroI2V achieves comparable performance to AIM [71] (87.2 vs 87.5) with a nearly \\(30\\%\\) reduction in GFLOPs (7783 vs 11208)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.765, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Results on SSv2 As shown in Table 4, thanks to the effectiveness of STDHA in temporal modeling, our method outperforms most full fine-tuning methods, even though many of them have been pre-trained on the Kinetics dataset. Our ZeroI2V has a significant improvement compared to directly full fine-tuning ViT-L/16 pre-trained with CLIP (70.1 vs 48.7) with the same number of parameters" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.348, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.149, + 0.788, + 0.205 + ], + "angle": 0, + "content": "Table 5: Comparing the state-of-the-art video recognition methods on UCF101, HMDB51 and Diving48. For UCF101 and HMDB51, we test our method and report the 3-split mean Top-1 accuracy for both datasets following ST-Adapter [48]. And for Diving48, we test our method with 1 temporal clip following AIM [71]." + }, + { + "type": "table", + "bbox": [ + 0.261, + 0.217, + 0.739, + 0.418 + ], + "angle": 0, + "content": "
Method | Pretrain | UCF101 | HMDB51 | Diving48
Methods with full fine-tuning
I3D [8] | ImageNet+K400 | 95.6 | 74.8 | -
S3D [67] | ImageNet+K400 | 96.8 | 75.9 | -
SlowOnly-8x8-R101 [15] | Kinetics+OmniSource | 97.3 | 79.0 | -
TimeSformer-L [4] | IN21K | - | - | 81.0
VideoSwin-B [41] | IN21K | - | - | 81.9
Methods with PETL
VideoPrompt [24] | CLIP | 93.6 | 66.4 | -
AIM ViT-B/16 [71] | CLIP | - | - | 88.9
AIM ViT-L/14 [71] | CLIP | - | - | 90.6
ST-Adapter ViT-B/16 [48] | CLIP+K400 | 96.4 | 77.7 | -
ST-Adapter ViT-L/14 [48] | CLIP+K400 | 98.1 | 81.7 | -
ZeroI2V ViT-B/16 | CLIP | 95.6 | 73.7 | 89.7
ZeroI2V ViT-B/16 | CLIP+K400 | 97.7 | 78.5 | -
ZeroI2V ViT-L/14 | CLIP | 97.8 | 79.9 | 91.4
ZeroI2V ViT-L/14 | CLIP+K400 | 98.6 | 83.4 | -
" + }, + { + "type": "table_caption", + "bbox": [ + 0.237, + 0.435, + 0.764, + 0.45 + ], + "angle": 0, + "content": "Table 6: Comparing the SoTA action detection methods on AVA 2.2." + }, + { + "type": "table", + "bbox": [ + 0.261, + 0.462, + 0.739, + 0.553 + ], + "angle": 0, + "content": "
Method | Pretrain | Frozen Backbone | Frames | mAP
SlowFast-R101 [15] | K400 | | 8 | 23.8
MViTv2-B [34] | K400 | | 32 | 28.1
VideoMAE-B [56] | K400 | | 16 | 31.8
VideoMAE-B [56] | K400 wo/ labels | | 16 | 26.7
CLIP ViT-B/16 | CLIP | | 8 | 18.3
ZeroI2V ViT-B/16 | CLIP | | 8 | 26.4
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.583, + 0.788, + 0.658 + ], + "angle": 0, + "content": "and computation. Compared to other PETL methods, ZeroI2V outperforms ST-Adapter [48] on ViT-B/16 (70.1 vs 69.5) with lower GFLOPs (1688 vs 1955). Additionally, ZeroI2V significantly surpasses both AVL [38] and AIM [71] (71.4 vs 66.7, 70.6) on ViT-L/14 with much lower GFLOPs (3892 vs 9641, 11508) and new parameters (0M vs 175M, 50M)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.659, + 0.788, + 0.719 + ], + "angle": 0, + "content": "Results on smaller datasets As shown in Table 5, on three relatively small datasets, our method achieves state-of-the-art performance on UCF101, HMDB51, and Diving48. This demonstrates a clear performance advantage over both full-finetuning methods and PETL methods previously." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Results on action detection In addition to the task of action recognition, to understand the capability of our method in fine-grained spatial understanding, we also evaluate our method on action detection dataset AVA [17]. Following the setting of VideoMAE [56], we evaluate the top 60 common classes using the mean Average Precision (mAP) as the metric under an IoU threshold of 0.5. As shown in Table 6, compared to using the original image CLIP features, our ZeroI2V achieved a significant performance improvement (26.4 vs 18.3) with the same number of parameters and computation. It's noteworthy that our method was not" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.148, + 0.784, + 0.164 + ], + "angle": 0, + "content": "Table 7: Comparing the SoTA video recognition methods on the VidTAB [32]." + }, + { + "type": "table", + "bbox": [ + 0.248, + 0.188, + 0.756, + 0.258 + ], + "angle": 0, + "content": "
# | Pretrain Data | Avg | Action | Science | Safety | Quality | Emotion
| | | DS | LV | MS | AB | HC | FF | QA | EA
CLIP ViT-L/14 [51] | CLIP | 42.8 | 31.2 | 38.0 | 32.3 | 36.3 | 50.3 | 58.5 | 67.7 | 28.1
ViCLIP ViT-L/14 [64] | CLIP+InternVid200M | 42.7 | 36.7 | 43.9 | 30.2 | 36.8 | 46.9 | 54.8 | 65.4 | 27.2
ST-Adapter ViT-L/14 [48] | CLIP | 46.9 | 43.0 | 45.0 | 31.2 | 39.4 | 49.4 | 64.9 | 72.3 | 29.9
ZeroI2V ViT-L/14 | CLIP | 46.5 | 41.3 | 46.8 | 31.2 | 39.3 | 47.2 | 64.6 | 70.6 | 30.6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.287, + 0.788, + 0.331 + ], + "angle": 0, + "content": "Table 8: Inference latency and throughput. All results are obtained using the same V100-32G with PyTorch-built mixed precision, using a batch size of 1 to measure latency and the optimal possible batch size to measure throughput before out of memory." + }, + { + "type": "table", + "bbox": [ + 0.232, + 0.342, + 0.768, + 0.408 + ], + "angle": 0, + "content": "
Model | Views | GFLOPs | Latency (ms) | Throughput (V/s) | K400 (Top-1) | SSv2 (Top-1)
Uniformer-B [28] | 32×4 | 1036 | 245.38 | 4.24 | 82.9 | -
EVL ViT-B/16 [38] | 8×3 | 454 | 53.87 | 24.04 | 82.9 | 61.0
ViT-B/16 [12] | 8×3 | 422 | 28.72 | 40.08 | 81.0 | 44.0
ZeroI2V ViT-B/16 | 8×3 | 422 | 28.89 | 40.08 | 83.0 | 67.7
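The caption of Table 8 fixes the measurement protocol: batch size 1 for latency, the largest batch that still fits for throughput, PyTorch mixed precision on a V100-32G. An illustrative timing harness under those assumptions is sketched below; `video_model` and the chosen batch sizes are placeholders, and this is not the authors' benchmarking script.

```python
import time
import torch

@torch.no_grad()
def benchmark(model: torch.nn.Module, input_shape, batch_size: int,
              warmup: int = 10, iters: int = 30):
    """Return (latency in ms per forward pass, throughput in videos per second)."""
    model.eval().cuda()
    x = torch.randn(batch_size, *input_shape, device="cuda")
    with torch.autocast("cuda", dtype=torch.float16):    # mixed-precision inference
        for _ in range(warmup):
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1e3, batch_size * iters / elapsed

# latency, _ = benchmark(video_model, (3, 8, 224, 224), batch_size=1)
# _, throughput = benchmark(video_model, (3, 8, 224, 224), batch_size=largest_fitting_batch)
```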
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.436, + 0.788, + 0.527 + ], + "angle": 0, + "content": "pre-trained on action recognition datasets such as Kinetics. Instead, we directly applied image-to-video transfer on the AVA dataset. Remarkably, our method still managed to achieve performance on par with full-finetuning methods and self-supervised methods that underwent pre-training using the Kinetics dataset, even when using only 8 frames as input. In summary, our ZeroI2V demonstrates outstanding potential in video tasks beyond recognition." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.551, + 0.446, + 0.567 + ], + "angle": 0, + "content": "4.4 Few-shot Experiments" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.578, + 0.788, + 0.669 + ], + "angle": 0, + "content": "To demonstrate the adaptation capability of our method in few-shot scenarios, we conduct experiments on the Video Task Adaptation Benchmark (VidTAB). As show in Table 7 The results show that our method can effectively enhance the adaptation of the image model to video tasks using only a few samples. Compared to ST-Adapter [48], our approach achieves comparable results while enjoying the advantage of parameter and inference efficiency." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.692, + 0.414, + 0.709 + ], + "angle": 0, + "content": "4.5 Efficiency analysis" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.72, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Comparison of inference efficiency We compared the inference efficiency of our method with other methods on the same hardware device. As shown in Table 8, under comparable accuracy, the throughput of our method is 10 times that of Uniformer [28], Compared to the original ViT-B, our method introduces negligible additional latency during inference while achieving superior performance. In comparison with AVL [38], it can also be seen that the impact of the additional computational module on the actual runtime latency (28.89 ms vs 53.87 ms) is greater than that reflected by GFLOPs (422 vs 454)." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.348, + 0.128 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.149, + 0.788, + 0.205 + ], + "angle": 0, + "content": "Table 9: Comparison of training cost. Our results are obtained using the same V100-32G with PyTorch-built mixed precision, following AVL [38]. \"†\" indicates that the epoch is estimated based on the batch size and training steps of the original paper. \"Memory\" refers to the GPU memory usage when the batch size is 8." + }, + { + "type": "table", + "bbox": [ + 0.232, + 0.217, + 0.769, + 0.357 + ], + "angle": 0, + "content": "
Model (Frames) | Dataset | Training Epochs | Training GPU Hours | Tunable Param (M) | Memory (G) | Top-1
Uniformer-B [28] (32) | K400 | 110 | 5000 × V100 | 50 | - | 82.9
ActionCLIP ViT-B/16 [62] (16) | K400 | 50 | 480 × RTX3090 | 142 | - | 82.6
EVL ViT-B/16 [38] (8) | K400 | 53† | 60 × V100 | 29 | 2.2 | 82.9
| SSv2 | 46† | 75 × V100 | 98 | 5.6 | 61.0
ST-Adapter ViT-B/16 [48] (8) | K400 | 11† | 23 × V100 | 7 | 6.9 | 82.0
| SSv2 | 38† | 60 × V100 | 14 | 7.6 | 67.1
AIM ViT-B/16 [71] (8) | K400 | 30 | 120 × V100 | 11 | 8.7 | 83.9
| SSv2 | 50 | 150 × V100 | 14 | 9.0 | 66.4
ZeroI2V ViT-B/16 (8) | K400 | 40 | 100 × V100 | 14 | 7.6 | 83.0
| SSv2 | 50 | 90 × V100 | 14 | 7.6 | 67.3
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.382, + 0.789, + 0.594 + ], + "angle": 0, + "content": "Comparison of training cost We compared the training cost of our method with previous methods in Table 9. It can be seen that compared to previous full fine-tuning methods such as Uniformer [28] and ActionCLIP [62], our method significantly reduces training cost. Compared to the previous PETL method, our method does not have a significant advantage in training efficiency due to the use of dense adapters. AVL [38], which does not need to insert adapters into the frozen backbone, avoids some of the cost of backpropagation and therefore has lower memory usage. ST-Adapter [48], due to its fewer trainable parameters, has a faster convergence speed, but its memory usage is close to our method. Nonetheless, in contrast to AIM [71] that imposes an additional computational burden for temporal modeling, our STDHA method, which does not introduce extra learnable parameters, ensures that ZeroI2V maintains superior training efficiency. We believe that it is worthwhile and acceptable to exchange some training costs for a reduction in inference costs." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.615, + 0.371, + 0.631 + ], + "angle": 0, + "content": "5 Conclusions" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.644, + 0.789, + 0.841 + ], + "angle": 0, + "content": "In this work, we present a new approach for parameter-efficient image-to-video transfer learning, called ZeroI2V. By fully leveraging the powerful representational capabilities of pre-trained image models, our approach enables image transformers to perform video tasks without introducing extra costs during inferences. Our proposed STDHA achieves efficient spatial-temporal modeling at zero extra computation and parameters. In addition, through structural reparameterization and full adaptation strategies, we successfully use a linear structure to achieve zero extra inference cost image-to-video adaptation for the first time. ZeroI2V shows strong performance compared to previous full fine-tuning and PETL methods on widely used video understanding benchmarks while maintaining parameter and inference efficiency. Due to the simplicity and versatility of our method, we believe it can be easily extended to other video tasks and even multi-modal understanding tasks. We will further investigate this direction in future work." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.222 + ], + "angle": 0, + "content": "Acknowledgements. This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the National Natural Science Foundation of China (No. 62076119, No. 61921006), the Fundamental Research Funds for the Central Universities (No. 020214380119), and the Collaborative Innovation Center of Novel Software Technology and Industrialization." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.244, + 0.323, + 0.26 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.274, + 0.787, + 0.304 + ], + "angle": 0, + "content": "1. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lucic, M., Schmid, C.: Vivit: A video vision transformer. In: Int. Conf. Comput. Vis. pp. 
6816-6826 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.304, + 0.787, + 0.331 + ], + "angle": 0, + "content": "2. Ba, L.J., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.331, + 0.787, + 0.358 + ], + "angle": 0, + "content": "3. Bao, H., Dong, L., Piao, S., Wei, F.: Beit: BERT pre-training of image transformers. In: Int. Conf. Learn. Represent. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.359, + 0.787, + 0.385 + ], + "angle": 0, + "content": "4. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? In: Int. Conf. Mach. Learn. vol. 139, pp. 813-824 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.386, + 0.787, + 0.428 + ], + "angle": 0, + "content": "5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Adv. Neural Inform. Process. Syst. vol. 33, pp. 1877-1901 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.429, + 0.787, + 0.469 + ], + "angle": 0, + "content": "6. Bulat, A., Pérez-Rúa, J., Sudhakaran, S., Martínez, B., Tzimiropoulos, G.: Spacetime mixing attention for video transformer. In: Adv. Neural Inform. Process. Syst. pp. 19594-19607 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.469, + 0.787, + 0.51 + ], + "angle": 0, + "content": "7. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: Int. Conf. Comput. Vis. pp. 9630-9640 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.51, + 0.787, + 0.537 + ], + "angle": 0, + "content": "8. Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the kinetics dataset. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4724-4733 (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.537, + 0.787, + 0.578 + ], + "angle": 0, + "content": "9. Chen, S., Ge, C., Tong, Z., Wang, J., Song, Y., Wang, J., Luo, P.: Adaptformer: Adapting vision transformers for scalable visual recognition. In: Adv. Neural Inform. Process. Syst. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.579, + 0.787, + 0.62 + ], + "angle": 0, + "content": "0. Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., Jitsev, J.: Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.62, + 0.787, + 0.662 + ], + "angle": 0, + "content": "1. Devlin, J., Chang, M., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT. pp. 4171-4186 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.662, + 0.787, + 0.717 + ], + "angle": 0, + "content": "2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Int. Conf. Learn. Represent. (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.717, + 0.787, + 0.744 + ], + "angle": 0, + "content": "3. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. 
In: Int. Conf. Comput. Vis. pp. 6804-6815 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.744, + 0.787, + 0.772 + ], + "angle": 0, + "content": "4. Feichtenhofer, C.: X3D: expanding architectures for efficient video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 200-210 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.772, + 0.787, + 0.799 + ], + "angle": 0, + "content": "5. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Int. Conf. Comput. Vis. pp. 6201-6210 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.799, + 0.787, + 0.84 + ], + "angle": 0, + "content": "6. Goyal, R., Kahou, S.E., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fründ, I., Yianilos, P., Mueller-Freitag, M., Hoppe, F., Thurau, C., Bax, I., Memisevic, R.: The \"something something\" video database for learning" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.274, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.245, + 0.148, + 0.787, + 0.175 + ], + "angle": 0, + "content": "and evaluating visual common sense. In: Int. Conf. Comput. Vis. pp. 5843-5851. IEEE Computer Society (2017)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.177, + 0.788, + 0.232 + ], + "angle": 0, + "content": "17. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 6047-6056 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.233, + 0.788, + 0.274 + ], + "angle": 0, + "content": "18. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.B.: Masked autoencoders are scalable vision learners. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 15979-15988 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.276, + 0.788, + 0.317 + ], + "angle": 0, + "content": "19. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 9726-9735 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.318, + 0.787, + 0.346 + ], + "angle": 0, + "content": "20. He, X., Li, C., Zhang, P., Yang, J., Wang, X.E.: Parameter-efficient model adaptation for vision transformers. arXiv preprint arXiv:2203.16329 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.347, + 0.787, + 0.388 + ], + "angle": 0, + "content": "21. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., de Laroussilhe, Q., Gesmundo, A., Attariyan, M., Gelly, S.: Parameter-efficient transfer learning for NLP. In: Int. Conf. Mach. Learn. vol. 97, pp. 2790-2799 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.389, + 0.787, + 0.43 + ], + "angle": 0, + "content": "22. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: Lora: Low-rank adaptation of large language models. In: Int. Conf. Learn. Represent. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.431, + 0.787, + 0.459 + ], + "angle": 0, + "content": "23. 
Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: Eur. Conf. Comput. Vis. pp. 709-727 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.46, + 0.787, + 0.5 + ], + "angle": 0, + "content": "24. Ju, C., Han, T., Zheng, K., Zhang, Y., Xie, W.: Prompting visual-language models for efficient video understanding. In: Eur. Conf. Comput. Vis. pp. 105-124. Springer (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.502, + 0.787, + 0.543 + ], + "angle": 0, + "content": "25. Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: a large video database for human motion recognition. In: Int. Conf. Comput. Vis. pp. 2556-2563. IEEE (2011)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.545, + 0.787, + 0.586 + ], + "angle": 0, + "content": "26. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045-3059 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.587, + 0.787, + 0.628 + ], + "angle": 0, + "content": "27. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Int. Conf. Mach. Learn. vol. 162, pp. 12888-12900 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.63, + 0.787, + 0.67 + ], + "angle": 0, + "content": "28. Li, K., Wang, Y., Gao, P., Song, G., Liu, Y., Li, H., Qiao, Y.: Uniformer: Unified transformer for efficient spatial-temporal representation learning. In: Int. Conf. Learn. Represent. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.672, + 0.787, + 0.713 + ], + "angle": 0, + "content": "29. Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Wang, L., Qiao, Y.: Uniformerv2: Unlocking the potential of image vits for video understanding. In: Int. Conf. Comput. Vis. pp. 1632-1643 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.715, + 0.787, + 0.742 + ], + "angle": 0, + "content": "30. Li, T., Wang, L.: Learning spatiotemporal features via video and text pair discrimination. arXiv preprint arXiv:2001.05691 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.743, + 0.787, + 0.798 + ], + "angle": 0, + "content": "31. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582-4597 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.799, + 0.787, + 0.84 + ], + "angle": 0, + "content": "32. Li, X., Huang, Z., Wang, J., Li, K., Wang, L.: Videoeval: Comprehensive benchmark suite for low-cost evaluation of video foundation model. arXiv preprint arXiv:2407.06491 (2024)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.148, + 0.788, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "33. Li, Y., Ji, B., Shi, X., Zhang, J., Kang, B., Wang, L.: TEA: temporal excitation and aggregation for action recognition. In: IEEE Conf. Comput. Vis. 
Pattern Recog. pp. 906-915 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.19, + 0.788, + 0.231 + ], + "angle": 0, + "content": "34. Li, Y., Wu, C., Fan, H., Mangalam, K., Xiong, B., Malik, J., Feichtenhofer, C.: Mvitv2: Improved multiscale vision transformers for classification and detection. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4794-4804 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.216, + 0.232, + 0.786, + 0.259 + ], + "angle": 0, + "content": "35. Li, Y., Li, Y., Vasconcelos, N.: Resound: Towards action recognition without representation bias. In: Eur. Conf. Comput. Vis. pp. 513-528 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.26, + 0.787, + 0.286 + ], + "angle": 0, + "content": "36. Lian, D., Zhou, D., Feng, J., Wang, X.: Scaling & shifting your features: A new baseline for efficient model tuning. In: Adv. Neural Inform. Process. Syst. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.287, + 0.787, + 0.327 + ], + "angle": 0, + "content": "37. Lin, J., Gan, C., Wang, K., Han, S.: TSM: temporal shift module for efficient and scalable video understanding on edge devices. IEEE Trans. Pattern Anal. Mach. Intell. 44(5), 2760-2774 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.328, + 0.787, + 0.37 + ], + "angle": 0, + "content": "38. Lin, Z., Geng, S., Zhang, R., Gao, P., de Melo, G., Wang, X., Dai, J., Qiao, Y., Li, H.: Frozen CLIP models are efficient video learners. In: Eur. Conf. Comput. Vis. vol. 13695, pp. 388-404 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.371, + 0.786, + 0.397 + ], + "angle": 0, + "content": "39. Liu, M., Wang, Z., Ji, S.: Non-local graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(12), 10270-10276 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.398, + 0.787, + 0.438 + ], + "angle": 0, + "content": "40. Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin transformer V2: scaling up capacity and resolution. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 11999-12009 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.439, + 0.787, + 0.466 + ], + "angle": 0, + "content": "41. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video swim transformer. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3192-3201 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.467, + 0.787, + 0.494 + ], + "angle": 0, + "content": "42. Liu, Z., Wang, L., Wu, W., Qian, C., Lu, T.: TAM: temporal adaptive module for video recognition. In: Int. Conf. Comput. Vis. pp. 13688-13698 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.495, + 0.787, + 0.535 + ], + "angle": 0, + "content": "43. Lu, C., Jin, X., Huang, Z., Hou, Q., Cheng, M., Feng, J.: CMAE-V: contrastive masked autoencoders for video action recognition. arXiv preprint arXiv:2301.06018 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.536, + 0.787, + 0.563 + ], + "angle": 0, + "content": "44. Michel, P., Levy, O., Neubig, G.: Are sixteen heads really better than one? In: Adv. Neural Inform. Process. Syst. pp. 14014-14024 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.565, + 0.787, + 0.605 + ], + "angle": 0, + "content": "45. Ni, B., Peng, H., Chen, M., Zhang, S., Meng, G., Fu, J., Xiang, S., Ling, H.: Expanding language-image pretrained models for general video recognition. In: Eur. Conf. Comput. Vis. vol. 13664, pp. 
1-18 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.606, + 0.787, + 0.646 + ], + "angle": 0, + "content": "46. Nie, X., Ni, B., Chang, J., Meng, G., Huo, C., Zhang, Z., Xiang, S., Tian, Q., Pan, C.: Pro-tuning: Unified prompt tuning for vision tasks. arXiv preprint arXiv:2207.14381 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.647, + 0.787, + 0.729 + ], + "angle": 0, + "content": "47. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., Assran, M., Ballas, N., Galuba, W., Howes, R., Huang, P., Li, S., Misra, I., Rabbat, M.G., Sharma, V., Synnaeve, G., Xu, H., Jégou, H., Mairal, J., Labatut, P., Joulin, A., Bojanowski, P.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.73, + 0.787, + 0.757 + ], + "angle": 0, + "content": "48. Pan, J., Lin, Z., Zhu, X., Shao, J., Li, H.: St-adapter: Parameter-efficient image-to-video transfer learning. In: Adv. Neural Inform. Process. Syst. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.758, + 0.787, + 0.813 + ], + "angle": 0, + "content": "49. Pfeiffer, J., Kamath, A., Rückle, A., Cho, K., Gurevych, I.: Adapterfusion: Nondestructive task composition for transfer learning. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 487-503 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.814, + 0.787, + 0.84 + ], + "angle": 0, + "content": "50. Pfeiffer, J., Rückle, A., Poth, C., Kamath, A., Vulic, I., Ruder, S., Cho, K., Gurevych, I.: Adapterhub: A framework for adapting transformers. In: Proceedings of the" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "18" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.347, + 0.127 + ], + "angle": 0, + "content": "X. Li et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.241, + 0.147, + 0.786, + 0.176 + ], + "angle": 0, + "content": "2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 46-54 (2020)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.177, + 0.787, + 0.232 + ], + "angle": 0, + "content": "51. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Int. Conf. Mach. Learn. vol. 139, pp. 8748-8763 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.233, + 0.786, + 0.261 + ], + "angle": 0, + "content": "52. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI blog (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.262, + 0.786, + 0.29 + ], + "angle": 0, + "content": "53. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.291, + 0.786, + 0.318 + ], + "angle": 0, + "content": "54. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. 
arXiv preprint arXiv:1212.0402 (2012)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.319, + 0.786, + 0.347 + ], + "angle": 0, + "content": "55. Tan, J., Zhao, X., Shi, X., Kang, B., Wang, L.: Pointtad: Multi-label temporal action detection with learnable query points. NIPS 35, 15268-15280 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.348, + 0.787, + 0.388 + ], + "angle": 0, + "content": "56. Tong, Z., Song, Y., Wang, J., Wang, L.: Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In: Adv. Neural Inform. Process. Syst. (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.39, + 0.787, + 0.417 + ], + "angle": 0, + "content": "57. Tschannen, M., Mustafa, B., Houlsby, N.: Clippo: Image-and-language understanding from pixels only. arXiv preprint arXiv:2212.08045 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.418, + 0.787, + 0.446 + ], + "angle": 0, + "content": "58. Tu, S., Dai, Q., Wu, Z., Cheng, Z., Hu, H., Jiang, Y.: Implicit temporal modeling with learnable alignment for video recognition. In: Int. Conf. Comput. Vis. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.447, + 0.787, + 0.488 + ], + "angle": 0, + "content": "59. Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., Qiao, Y.: Videomae V2: scaling video masked autoencoders with dual masking. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.489, + 0.787, + 0.53 + ], + "angle": 0, + "content": "60. Wang, L., Tong, Z., Ji, B., Wu, G.: TDN: temporal difference networks for efficient action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1895-1904 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.531, + 0.787, + 0.573 + ], + "angle": 0, + "content": "61. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Gool, L.V.: Temporal segment networks: Towards good practices for deep action recognition. In: Eur. Conf. Comput. Vis. vol. 9912, pp. 20-36 (2016)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.574, + 0.787, + 0.601 + ], + "angle": 0, + "content": "62. Wang, M., Xing, J., Liu, Y.: Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.602, + 0.787, + 0.643 + ], + "angle": 0, + "content": "63. Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y., Zhou, L., Yuan, L.: BEVT: BERT pretraining of video transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 14713-14723 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.645, + 0.787, + 0.685 + ], + "angle": 0, + "content": "64. Wang, Y., He, Y., Li, Y., Li, K., Yu, J., Ma, X., Li, X., Chen, G., Chen, X., Wang, Y., et al.: Intervid: A large-scale video-text dataset for multimodal understanding and generation. In: ICLR (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.687, + 0.787, + 0.714 + ], + "angle": 0, + "content": "65. Wu, W., Sun, Z., Ouyang, W.: Revisiting classifier: Transferring vision-language models for video recognition. In: AAAI Conf. Artif. Intell. pp. 2847-2855 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.715, + 0.787, + 0.756 + ], + "angle": 0, + "content": "66. Xiang, W., Li, C., Wang, B., Wei, X., Hua, X., Zhang, L.: Spatiotemporal self-attention modeling with temporal patch shift for action recognition. In: Eur. Conf. Comput. Vis. vol. 13663, pp. 
627-644 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.757, + 0.787, + 0.798 + ], + "angle": 0, + "content": "67. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Eur. Conf. Comput. Vis. pp. 305–321 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.8, + 0.787, + 0.84 + ], + "angle": 0, + "content": "68. Xu, C., Zhu, Y., Shen, H., Chen, B., Liao, Y., Chen, X., Wang, L.: Progressive visual prompt learning with contrastive feature re-formation. arXiv preprint arXiv:2304.08386 (2023)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.675, + 0.115, + 0.732, + 0.127 + ], + "angle": 0, + "content": "ZeroI2V" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "19" + }, + { + "type": "ref_text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "69. Xu, C., Zhu, Y., Zhang, G., Shen, H., Liao, Y., Chen, X., Wu, G., Wang, L.: Dpl: Decoupled prompt learning for vision-language models. arXiv preprint arXiv:2308.10061 (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.788, + 0.232 + ], + "angle": 0, + "content": "70. Yan, S., Xiong, X., Arnab, A., Lu, Z., Zhang, M., Sun, C., Schmid, C.: Multiview transformers for video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3323-3333 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.232, + 0.787, + 0.259 + ], + "angle": 0, + "content": "71. Yang, T., Zhu, Y., Xie, Y., Zhang, A., Chen, C., Li, M.: Aim: Adapting image models for efficient video action recognition. In: Int. Conf. Learn. Represent. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.259, + 0.787, + 0.313 + ], + "angle": 0, + "content": "72. Zaken, E.B., Goldberg, Y., Ravfogel, S.: Bitfit: Simple parameter-efficient fin-tuning for transformer-based masked language-models. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 1-9 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.315, + 0.787, + 0.342 + ], + "angle": 0, + "content": "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1204-1213 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.343, + 0.787, + 0.383 + ], + "angle": 0, + "content": "74. Zhang, G., Zhu, Y., Wang, H., Chen, Y., Wu, G., Wang, L.: Extracting motion and appearance via inter-frame attention for efficient video frame interpolation. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.384, + 0.787, + 0.411 + ], + "angle": 0, + "content": "75. Zhang, H., Hao, Y., Ngo, C.: Token shift transformer for video classification. In: ACM Int. Conf. Multimedia. pp. 917-925 (2021)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.411, + 0.787, + 0.439 + ], + "angle": 0, + "content": "76. Zhang, Y., Zhou, K., Liu, Z.: Neural prompt search. arXiv preprint arXiv:2206.04673 (2022)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.44, + 0.787, + 0.466 + ], + "angle": 0, + "content": "77. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Eur. Conf. Comput. Vis. vol. 11205, pp. 
831-846 (2018)" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.468, + 0.787, + 0.508 + ], + "angle": 0, + "content": "78. Zhu, Y., Ji, Y., Zhao, Z., Wu, G., Wang, L.: Awt: Transferring vision-language models via augmentation, weighting, and transportation. arXiv preprint arXiv:2407.04603 (2024)" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.508, + 0.787, + 0.536 + ], + "angle": 0, + "content": "79. Zhu, Y., Zhang, G., Tan, J., Wu, G., Wang, L.: Dual detrs for multi-label temporal action detection. In: CVPR. pp. 18559-18569 (2024)" + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.536 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_origin.pdf b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c6459df25d94b92b38317b1479245e71e9a52ac9 --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/e56ddbcb-b08e-40b1-be59-3e4021eb99b9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58a3684759f74cbb8c7f8b034eca1a685e3cb6c6924e345e8842f64d47baf27a +size 845806 diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/full.md b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fb597a8f8ebd563eb1c3dfe3fd8dfeb2d82f494d --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/full.md @@ -0,0 +1,332 @@ +# ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video + +Xinhao Li $^{1,2}$ , Yuhan Zhu $^{1}$ , and Limin Wang $^{1,2*}$ + +1 State Key Laboratory for Novel Software Technology, Nanjing University + +2 Shanghai AI Laboratory + +xinhaoli00@outlook.com zyuhan0812@gmail.com lmwang@nju.edu.cn + +https://github.com/MCG-NJU/ZeroI2V + +Abstract. Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus toward parameter-efficient image-to-video adaptation. However, these adaptation strategies inevitably introduce extra computational costs to deal with the domain gap and temporal modeling in videos. In this paper, we present a new adaptation paradigm (ZeroI2V) to transfer the image transformers to video recognition tasks (i.e., introduce zero extra cost to the original models during inference). To achieve this goal, we present two core designs. First, to capture the dynamics in videos and reduce the difficulty of image-to-video adaptation, we exploit the flexibility of self-attention and introduce spatial-temporal dual-headed attention (STDHA). This approach efficiently endows the image transformers with temporal modeling capability at zero extra parameters and computation. Second, to handle the domain gap between images and videos, we propose a linear adaption strategy that utilizes lightweight densely placed linear adapters to fully transfer the frozen image models to video recognition. 
Thanks to the customized linear design, all newly added adapters could be easily merged with the original modules through structural reparameterization after training, enabling zero extra cost during inference. Extensive experiments on representative fully-supervised and few-shot video recognition benchmarks showcase that ZeroI2V can match or even outperform previous state-of-the-art methods while enjoying superior parameter and inference efficiency. + +Keywords: Video understanding $\cdot$ Image-to-video adaptation $\cdot$ PEFT + +# 1 Introduction + +Adapting pre-trained foundation models such as BERT [11] and GPT [5, 52, 53] through efficient strategies has yielded excellent performance on downstream tasks in natural language understanding. This new paradigm is becoming popular in + +![](images/9d885033d26f55992ceb6d9dd3af76d211ba17d868f74ae09e14e0fe9ce020f5.jpg) +Fig. 1: Left: Our proposed image-to-video transfer learning method. Right: Comparison of PETL methods on SSv2 validation set. For a more intuitive comparison, the views of the methods in the figure are all $8 \times 3 \times 1$ . Two core techniques enable us to achieve superior performance on video tasks without introducing additional computation and parameters during inference. + +![](images/3a140e021a7c64e10bc70d0d3348dd3dab076997e9ac9bc9482cb5ef8c8708d3.jpg) + +computer vision due to the available pre-trained image models such as CLIP [51] and DINO [7, 47]. These models could be easily adapted to downstream tasks through linear probes, fine-tuning, or even zero-shot recognition, exhibiting robustness and strong transfer capabilities similar to those of large-scale language models. Recently, parameter-efficient transfer learning (PETL) [9,23,38,46,48,78] is becoming an efficient paradigm to adapt these large pre-trained models due to their huge numbers of parameters and high computational cost of full fine-tuning. + +For video understanding, there exist several large pre-trained video models [56, 59] from self-supervised learning, but these models are of high computational complexity due to the joint spatiotemporal attentions. Therefore, adapting pretrained image models to the video domain through efficient strategies is still a practical solution to video recognition. In fact, the state-of-the-art video networks have long relied on the pre-trained image models by inflating the kernels [1,8,39,41] or inserting plug-and-play temporal modules [33,37,42,60,61]. However, most of these methods necessitate full fine-tuning, which involves updating all the model parameters during training on video datasets. As the scale of pre-trained models increases, full fine-tuning becomes impractical due to the high training costs and the risk of overfitting or even catastrophic forgetting when the downstream data is limited. In addition, these methods often inevitably introduce extra costs to the adapted video models due to these newly added modules. + +In this paper, we aim to present a new efficient paradigm of adapting image transformers to video downstream tasks with two main objectives. First, inspired by the PETL methods in NLP [21,22,26,31] and image understanding [9,23,46], we aim to devise a parameter-efficient transfer technique from image to video, which can effectively reduce the risk of over-fitting and greatly improve the training efficiency. 
Second, to overcome the issue of high computation in the adapted + +video models, we try to present a new adaptation method without introducing any extra computations to the final video models during inference. This zero extra inference cost adaptation would allow for more efficient deployment of transferred video models in real applications. + +To achieve the above two objectives, we propose a novel transfer learning method (as shown in Figure 1) that can utilize the off-the-shelf pre-trained image transformers to achieve excellent performance on video tasks without additional parameters and computation during inference. To be specific, for the temporal modeling required for video tasks, we transform multi-head self-attention into spatio-temporal dual-head attention (STDHA) by reassigning some heads to achieve temporal modeling at zero computation and zero parameters. For image-to-video transfer, we explore the strategy of using linear adapters to fully adapt the parameters of each part of the model and merge them with the frozen original parameters through structural reparameterization after training, thus achieving zero extra cost during inference. + +To summarize, we make the following contributions: 1) We propose a new approach for parameter-efficient image-to-video transfer learning that can achieve the efficient adaptation of transformers from image to video without introducing additional computation and parameters during inference. 2) We introduce a novel attention mechanism named Spatial-Temporal Dual-Headed Attention (STDHA), which utilizes the flexibility of self-attention to achieve temporal modeling without introducing extra computation and parameters. 3) To the best of our knowledge, we are the first to investigate the achievement of zero extra inference cost image-to-video adaptation through the utilization of a linear structure. We establish an empirical study by conducting extensive experiments with a diverse range of adaptation strategies. 4) Our method achieves comparable or even better performance than state-of-the-art methods on popular fully-supervised and few-shot video recognition benchmarks while enjoying the advantage of parameter and inference efficiency. + +# 2 Related work + +Pre-trained image transformers The powerful scalability of ViT [12] brings more possibilities to the pre-trained image model. In addition to the traditional supervised approach [12,40,73], recent works [3,7,18,19,47] utilize self-supervised learning to effectively learn representations from unlabeled data. Moreover, several works [10,27,51,57] adopt large-scale multi-modal data (e.g., text-image pairs) to learn visual representations with great transferability. Our proposed adaptation strategy can leverage these off-the-shelf pre-trained image transformers to achieve outstanding performance on video tasks. + +Video action recognition is crucial for downstream tasks [55, 79]. Traditionally, state-of-the-art methods have long relied on image models. Previous works for action recognition can be classified into two categories: one is to extend the image model for spatial-temporal modeling by inflating weights and structures [8, 13-15, 28, 34, 41], while the other is to directly utilize the image model as the + +backbone and insert plug-and-play modules for temporal modeling [37, 42, 60, 61, 77]. 
Following the success of new training paradigms in image understanding, several works have attempted to learn transferable video representations via self-supervised learning [43, 56, 59, 63] or multi-modal video-text pre-training [29, 30, 45, 62]. However, the above methods usually require full fine-tuning of the entire model or training from scratch, resulting in high training costs and additional computational overhead. In this work, we avoid the above problems by adapting the pre-trained image transformers to video tasks in an efficient manner. + +Parameter-efficient transfer learning To address the issue of training inefficiency caused by the continuous growth of model size, Parameter-efficient transfer learning (PETL) is initially introduced in NLP [21, 22, 26, 31, 49, 50, 72] and subsequently applied to vision tasks [9, 20, 23, 36, 46, 68, 69, 78]. These techniques aim to achieve comparable or even superior performance on other tasks by fine-tuning only a small subset of trainable parameters. Most PETL methods [9, 20, 23, 36, 46, 76, 78] in vision domain are limited to transfer within the same modality (e.g., image-to-image or video-to-video). In contrast, our research focuses on image-to-video transfer learning. Despite progress made by recent studies [38, 48, 71], these methods require additional computation and parameters for temporal modeling of video tasks and image-to-video adaptation. For example, AVL [38] incorporates an additional temporal transformer decoder, while ST-Adapter [48] introduces additional adapters with depth-wise 3D convolution layers. Similarly, AIM [71] adds extra adapters and necessitates an additional time attention calculation at each block. In contrast to previous works, our proposed method eschews the introduction of additional computation or parameters during inference, yet still achieves comparable or superior performance compared to previous methods. + +# 3 Methodology + +In this section, we first briefly revisit the basic block of ViT (Sec. 3.1), and then discuss how to utilize the flexibility of self-attention to achieve temporal modeling without introducing additional computation and parameters (Sec. 3.2). Finally, we explain how we implement zero-cost image-to-video adaptation with a serial linear structure (Sec. 3.3). + +# 3.1 Preliminary + +The original ViT [12] block consists of two network layers: multi-head self-attention (MHSA) and multi-layer perceptron (MLP). As shown in Figure 1, a ViT block consists of MHSA and MLP connected in series in a residual structure: + +$$ +z _ {l} = x _ {l} + \operatorname {M H S A} (\ln (x _ {l})), \tag {1} +$$ + +$$ +x _ {l + 1} = z _ {l} + \operatorname {M L P} (\ln (z _ {l})), \tag {2} +$$ + +![](images/524b5aa9d19533adeb59ad91e6c63388c164c54a738242c8ea1e3c4964d9ebbe.jpg) + +![](images/f741fddc09ed19e5387c109fd781d3b0371b69bf92493ebdc486d10a532963b3.jpg) +(a) Layer merging via reparameterization +(b) Spatial-temporal dual-headed attention +Fig. 2: Illustration of the proposed linear adaptation and STDHA. + +where LN denotes layer normalization [2] and $x_{l}$ represents the input to the $l$ -th ViT block. We review their specific implementation details. For the sake of simplicity, we ignore the bias and denote $X \in \mathbb{R}^{n \times d}$ as input of MHSA and MLP. + +MHSA first performs three different linear projections $W_{\mathrm{attn}}^{Q}, W_{\mathrm{attn}}^{K}, W_{\mathrm{attn}}^{V} \in \mathbb{R}^{d \times d}$ on the input $X$ to obtain the query $Q$ and key-value pairs $K, V$ . 
These are then evenly divided into $h$ heads by channel. Each head independently performs the scaled dot-product attention calculation. Finally, the heads are concatenated by channel and then a linear projection $W_{\mathrm{attn}}^{O} \in \mathbb{R}^{d \times d}$ is performed to obtain the final calculation result: + +$$ +Q, K, V = X W _ {\mathrm {a t t n}} ^ {Q}, X W _ {\mathrm {a t t n}} ^ {K}, X W _ {\mathrm {a t t n}} ^ {V}, \tag {3} +$$ + +$$ +\operatorname {h e a d} _ {i} = \operatorname {A t t e n t i o n} \left(Q _ {i}, K _ {i}, V _ {i}\right), \tag {4} +$$ + +$$ +\operatorname {M H S A} (X) = \operatorname {C o n c a t} \left(\operatorname {h e a d} _ {1}, \dots , \operatorname {h e a d} _ {h}\right) W _ {\mathrm {a t t n}} ^ {O}. \tag {5} +$$ + +MLP involves two linear projections $W_{\mathrm{mlp}}^{\mathrm{up}} \in \mathbb{R}^{d \times d'}$ , $W_{\mathrm{mlp}}^{\mathrm{down}} \in \mathbb{R}^{d' \times d}$ , $d' > d$ and one non-linear activation function $\sigma$ : + +$$ +\operatorname {M L P} (X) = \sigma \left(X W _ {\mathrm {m l p}} ^ {\mathrm {u p}}\right) W _ {\mathrm {m l p}} ^ {\mathrm {d o w n}}. \tag {6} +$$ + +# 3.2 Zero-Cost temporal modeling + +Applying image models to video tasks often requires the incorporation of additional modules for temporal modeling, which not only introduces additional parameters and computation, but also results in additional training costs. In this work, we address temporal modeling from three key perspectives: (1) Capability of capturing the temporal dynamics. (2) Reducing the difficulty of image-to-video adaptation. (3) Minimizing the introduction of additional computation and parameters compared to the original model. [44] suggests that most heads are redundant given the rest of the model. Inspired by this, we attempt to reassign some heads as temporal heads in the multi-head attention to perform temporal + +modeling tasks, while the remaining heads continue to perform spatial modeling tasks as spatial heads, thereby achieving efficient spatial-temporal modeling. + +Spatial-temporal dual-headed attention (STDHA) As shown in Figure 2b, consider an input sequence $X = \{x_{1}, x_{2}, \dots, x_{T}\}$ where $x_{t} \in \mathbb{R}^{n \times d}$ . Let the query and key-value pairs obtained after the linear projection of the $x_{t}$ be $Q^{t}, K^{t}, V^{t} \in \mathbb{R}^{n \times d}$ . We divide the $h$ heads of the MHSA into two groups of size $h - k$ and $k$ . One group of heads queries the key-value pairs at the current time $t$ to perform spatial modeling, while the other group of heads queries the key-value pairs at other times $t + \Delta t_{i}$ to perform temporal modeling. Finally, the information from the two groups of heads is aggregated by a linear projection to perform spatial-temporal modeling: + +$$ +\text {S - h e a d} _ {i} = \text {A t t e n t i o n} \left(Q _ {i} ^ {t}, K _ {i} ^ {t}, V _ {i} ^ {t}\right), \tag {7} +$$ + +$$ +\text {T - h e a d} _ {i} = \operatorname {A t t e n t i o n} \left(Q _ {i} ^ {t}, K _ {i} ^ {t + \Delta t _ {i}}, V _ {i} ^ {t + \Delta t _ {i}}\right) (\Delta t _ {i} \neq 0), \tag {8} +$$ + +$$ +\operatorname {S T D H A} (X) = \operatorname {C o n c a t} (\mathrm {T} - \text {h e a d} _ {1}, \dots , \mathrm {T} - \text {h e a d} _ {k}, \mathrm {S} - \text {h e a d} _ {k + 1} \dots \mathrm {S} - \text {h e a d} _ {h}) W _ {\text {a t t n}} ^ {O}, \tag {9} +$$ + +where $\Delta t_{i}$ represents the time offset of the key-value pair of the $i$ -th head. 
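To make the head-relocation recipe above concrete, the sketch below is our own illustration (not the authors' released code). It assumes per-frame queries, keys and values laid out as (B, T, heads, tokens, dim), shifts the keys and values of the designated temporal heads along time, and then runs ordinary per-frame attention; the wrap-around behaviour of `torch.roll` at clip boundaries is an assumption, and the actual implementation may handle borders differently.

```python
import torch

def relocate_kv(k, v, head_offsets):
    """Shift K/V of selected heads along time so that a query at frame t
    attends to keys/values from frame t + dt for every temporal head (dt != 0).
    Illustrative sketch only; k, v have shape (B, T, heads, tokens, dim)."""
    k, v = k.clone(), v.clone()
    for h, dt in enumerate(head_offsets):
        if dt != 0:
            # new_k[:, t, h] = old_k[:, (t + dt) % T, h]  (wrap-around assumed)
            k[:, :, h] = torch.roll(k[:, :, h], shifts=-dt, dims=1)
            v[:, :, h] = torch.roll(v[:, :, h], shifts=-dt, dims=1)
    return k, v

# Example: ViT-B with 12 heads; one head looks one frame ahead, one looks one
# frame back, the remaining 10 stay spatial (the {1·1, -1·1, 0·10} setting).
B, T, H, N, D = 2, 8, 12, 197, 64
q = torch.randn(B, T, H, N, D)
k = torch.randn(B, T, H, N, D)
v = torch.randn(B, T, H, N, D)
k, v = relocate_kv(k, v, head_offsets=[1, -1] + [0] * 10)
attn = (q @ k.transpose(-2, -1)) * D ** -0.5   # (B, T, H, N, N), frame-wise
out = attn.softmax(dim=-1) @ v                 # (B, T, H, N, D)
```

Because the relocation only re-indexes keys and values that the attention already computes, it adds no parameters and no FLOPs on top of the original multi-head attention.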
We did not directly use temporal attention or temporal convolution for the temporal modeling like previous works [38, 48, 71]. Instead, we design a more efficient spatiotemporal modeling operator by decoupling spatial modeling and temporal modeling to different heads: + +- For the spatial head, it still only needs to complete the spatial modeling task as the original image transformer, which reduces the difficulty of achieving image-to-video adaptation. +- For the temporal head, it actually implements the inter-frame attention mechanism with frames at different times. [74] have demonstrated the effectiveness of an inter-frame attention mechanism for modeling motion information, which is crucial for action recognition tasks. In addition, as shown in Table 1c, we can achieve both short-distance and long-distance modeling by controlling the $\Delta t_{i}$ of the temporal head, which enables us to achieve enhanced temporal modeling capabilities. + +Comparison with other zero-cost operators There have been several previous attempts [6, 66, 75] to use image transformers to achieve efficient temporal modeling at zero parameters and zero computation. For example, [6] achieves approximations to full space-time attention by mixing tokens from adjacent frames. [75] performs temporal modeling by using channel shift on thecls tokens of different frames. [66] mixes information from adjacent frames using temporal patch shift and temporal channel shift before MHSA. However, these methods do not take advantage of the inherent characteristics of the transformer structure. By decoupling the learning of spatial and temporal information with head relocation, STDHA maintains the purity of key-value pair information within the same head, thereby achieving better spatial-temporal information learning than other zero-cost temporal modules. And STDHA simultaneously captures both short-range and long-range dependencies, rather than being limited to + +adjacent frames. As shown in Table 1, these two key distinctions enable our STDHA to achieve superior spatial-temporal modeling. + +# 3.3 Zero Extra Inference Cost image-to-video adaptation + +Inspired by LoRA [22], we can fine-tune the model using a linear structure and then merge it with the original model during inference. However, to deal with the domain gap between images and videos, previous works [38,48,71] often use nonlinear structures to achieve stronger transfer capabilities. Therefore, we need to further consider how to achieve effective image-to-video transfer using only a linear structure. + +Layer merging via structural reparameterization Let $W_{\mathrm{old}}$ represent the frozen weights of the original model, and $W_{\mathrm{new}}$ represent the new trainable weights. Reviewing the structure of LoRA, it uses a low-rank decomposition matrix $W_{\mathrm{LoRA}}$ parallel to the original weights: + +$$ +W _ {\text {n e w}} = W _ {\text {L o R A}} + W _ {\text {o l d}} = W _ {\text {u p}} W _ {\text {d o w n}} + W _ {\text {o l d}}. \tag {10} +$$ + +In this work, we use a serial linear structure called Linear Adapter to fine-tune the original parameters. 
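Before the formal merging rule below, here is a minimal sketch of how such a serial linear adapter can be wrapped around a frozen projection and folded back into it after training. This is our illustration under stated assumptions (plain `nn.Linear` layers, a bias-free adapter, hypothetical class and function names), not the paper's released code.

```python
import torch
import torch.nn as nn

class LinearAdapter(nn.Module):
    """Serial low-rank adapter: x -> x + up(down(x)). Purely linear, so it can
    be merged into the frozen projection that follows it."""
    def __init__(self, dim, bottleneck):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck, bias=False)
        self.up = nn.Linear(bottleneck, dim, bias=False)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping

    def forward(self, x):
        return x + self.up(self.down(x))

@torch.no_grad()
def merge_into(frozen: nn.Linear, adapter: LinearAdapter) -> nn.Linear:
    """Fold frozen(adapter(x)) into a single nn.Linear. In row-vector notation
    this is W_new = (I + W_up W_down) W_old; with nn.Linear's (out, in) weight
    layout it becomes frozen.weight @ (I + up.weight @ down.weight)."""
    d = frozen.in_features
    mix = torch.eye(d, device=frozen.weight.device) \
          + adapter.up.weight @ adapter.down.weight
    merged = nn.Linear(d, frozen.out_features, bias=frozen.bias is not None)
    merged.weight.copy_(frozen.weight @ mix)
    if frozen.bias is not None:
        merged.bias.copy_(frozen.bias)
    return merged

# Quick check that the merged layer reproduces adapter + frozen exactly.
frozen = nn.Linear(768, 768)
adapter = LinearAdapter(768, bottleneck=64)
nn.init.normal_(adapter.up.weight, std=0.02)   # pretend the adapter was trained
x = torch.randn(4, 197, 768)
assert torch.allclose(frozen(adapter(x)), merge_into(frozen, adapter)(x), atol=1e-4)
```

Because every piece is linear, the fold-back is exact, which is what allows the adapted model to run with zero extra parameters and computation at inference.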
As shown in Figure 2a, we use structural reparameterization to perform layer merging after training: + +$$ +W _ {\text {n e w}} = W _ {\text {A d a p t e r}} W _ {\text {o l d}} = \left(I + W _ {\text {u p}} W _ {\text {d o w n}}\right) W _ {\text {o l d}}, \tag {11} +$$ + +where $I$ is the identity matrix, $W_{\mathrm{up}} \in \mathbb{R}^{m \times k}$ , $W_{\mathrm{down}} \in \mathbb{R}^{k \times n}$ , bottleneck width $k \ll \min(m, n)$ . As seen in Table 2, compared to parallel structures, serial structures can be more flexibly inserted into the network structure (e.g., for non-square matrices, under the same bottleneck dimension, using LoRA requires a larger number of parameters compared to Linear Adapter), which endows it with better transfer capabilities. + +Full adaptation with densely placed linear adapters By observing the structure of MHSA and MLP, we can see that all their trainable parameters concentrate on the linear projections at both ends of the structure. Therefore, fine-tuning the model essentially updates these linear projections. Previous works [48, 71] often selectively tune part of the parameters (e.g., placing only an adapter before MHSA) instead of tuning all parameters to avoid excessive additional computational and parameter costs, while we can achieve zero-cost full adaptation by tuning all parameters through wrapping MHSA and MLP with linear adapters. Table 2 shows that full adaptation enables us to achieve excellent image-to-video transfer performance with a linear structure, compensating for the performance degradation caused by the removal of nonlinearity. + +# 4 Experiments + +# 4.1 Experiments setup + +We evaluate our method on five widely-used video recognition benchmarks: two large-scale datasets, namely Kinetics-400 (K400) [8] and Something-Something V2 + +Table 1: Ablation study on STDHA. Most of the symbols in the table have been declared in the methodology section 3. (a) $R_{c}$ denotes channel change ratio, "Shift" refers to temporal channel shift, while "HR" denotes head relocation as used by STDHA. (b) We use a multiset to represent the time offsets of different heads (e.g., "1·2" means that there are 2 heads with $\Delta t = 1$ ). When $\Delta t = 0$ , it represents a spatial head. (c) "Temporal RF" refers to the temporal receptive field of a single STDHA. + +
| $R_c$ | Method | Top-1 |
| --- | --- | --- |
| 1/6 | [cls] token shift | 61.4 |
| | Shift QKV | 64.5 |
| | Shift KV | 64.6 |
| | HR QKV | 64.8 |
| | HR KV (STDHA) | 66.0 |
| 1/4 | Shift KV | 64.0 |
| | HR KV (STDHA) | 65.8 |

(a) Compare temporal modeling methods
| Backbone | Δt of heads | k | Top-1 |
| --- | --- | --- | --- |
| ViT-B (h=12) | {1·1/2, -1·1/2, 0·11} | 1 | 64.8 |
| | {1·1, -1·1, 0·10} | 2 | 66.0 |
| | {1·2, -1·2, 0·8} | 4 | 65.6 |
| | {1·3, -1·3, 0·6} | 6 | 65.6 |
| ViT-L (h=16) | {1·1, -1·1, 0·14} | 2 | 67.7 |
| | {1·2, -1·2, 0·12} | 4 | 68.5 |
| | {1·3, -1·3, 0·10} | 6 | 68.3 |

(b) Effect of the temporal head number
| Frames | Δt of heads | Temporal RF | Top-1 |
| --- | --- | --- | --- |
| 8 | {1·1,0·11} | 2 | 64.7 |
| | {1·1,-1·1,0·10} | 3 | 66.0 |
| | {1·1,-1·1,2·1,0·9} | 4 | 65.5 |
| | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 65.7 |
| 16 | {1·1,-1·1,0·10} | 3 | 67.2 |
| | {1·1,-1·1,2·1,0·9} | 4 | 67.3 |
| | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 67.8 |
| | {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 67.6 |
| | {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 67.3 |
| 32 | {1·1,-1·1,0·10} | 3 | 67.3 |
| | {1·1,-1·1,2·1,0·9} | 4 | 67.8 |
| | {1·1,-1·1,2·1,-2·1,0·8} | 5 | 68.5 |
| | {1·1,-1·1,2·1,-2·1,3·1,0·7} | 6 | 68.6 |
| | {1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6} | 7 | 68.4 |
| | {1·1,-1·1,2·1,-2·1,3·1,-3·1,4·1,0·5} | 8 | 68.2 |
+ +(c) Effect of the temporal receptive field at different input lengths. + +(SSv2) [16], in addition to three smaller-scale datasets, UCF101 [54], HMDB51 [25] and Diving48 [35]. We also evaluate our method on action detection dataset AVA [17]. This diverse dataset selection allows for a comprehensive evaluation of our model across various scales and domains. The specific model configuration and training strategy can be found in the supplementary. For most main experiments, we use ViT-B and ViT-L pre-trained by CLIP [51] as our backbone models. + +# 4.2 Ablation study + +To validate the effectiveness of our method on image-to-video transfer and temporal modeling, we first conduct ablation experiments on the SSv2 dataset. All ablation experiments were performed using ViT-B/16 with 8 input frames unless specified. + +Effectiveness of STDHA Table 1a compares STDHA with other zero-cost temporal modeling methods. The [cls] token shift is implemented according to the original paper [75], with [cls] token shift performed before MHSA and MLP. + +Table 2: Comparison of adaption strategies. "Width" refers to the bottleneck width of LoRA/Adapter. "Tunable Params" refers to extra trainable parameters besides the parameters of the ViT backbone and linear classifier. " $\checkmark$ " and " $\times$ " indicate whether the corresponding weights have undergone fine-tuning, and " $\checkmark$ " indicates that $W_{\mathrm{attn}}^{Q}$ , $W_{\mathrm{attn}}^{K}$ and $W_{\mathrm{attn}}^{V}$ share the same adapter. "Latency" refers to inference latency with 3 samples. All results are obtained using the same V100-32G with PyTorch-built mixed precision. + +
MethodWeights of ViT blockTunable +Params(M)Bottleneck +WidthLatencySSv2 +(ms)Top-1
WQattnWKattnWVattnWOattnWupmlpWdownmlp
Full Fine-tuning86-28.963.2
Linear ProbeXXXXXX0-28.920.0
Only tuning temporal headXX4.6-28.959.6
ST-Adapter [48]1419241.066.2
XX1438438.865.8
LoRA [22]XXXX719264.2
XX1419265.0
XX2519264.3
XX1712828.965.6
3219265.0
2112865.5
Adapter w/ GELU79637.365.6
XX719234.964.6
X1019236.366.1
1419238.466.1
Linear Adapter (Ours)79665.0
XX719264.4
X1019228.965.2
1419266.0
2019266.3
1412866.2
+ +The temporal channel shift operation refers to TPS [66], which shifts a portion of the channels for each head. It can be seen that STDHA significantly outperforms other methods at the same channel change ratio, demonstrating the importance of preserving the purity of information within each head. + +Effect of the number of temporal heads and temporal receptive field We examined the influence of the number of temporal heads and the temporal receptive field in ViT-B and ViT-L. Our findings, detailed in Tables 1b and 1c, suggest that the optimal proportion of temporal heads in ViT lies between $1/6$ and $1/4$ . For the temporal receptive field, our results indicate that for 8-frame inputs, a field of 3 is sufficient, while for longer inputs (16/32 frames), performance improves with an increase in the field from 3, saturating at around 5 or 6. Hence, we employ different STDHA configurations based on input length. + +Comparison of adaptation strategies In Table 2, we compare the image-to-video transfer ability of our method with a diverse range of adaptation methods. For a fair comparison, we all use STDHA with the same setting to provide temporal modeling capabilities. From the results, we can observe that: + +Table 3: Results on Kinetics-400 validation set. Views = #frames × #spatial crops × #temporal clips. "GFLOPs" means $10^{9}$ FLOPs, "M" means $10^{6}$ . "Extra GLOPs" refers to the extra computation added to the original ViT under the same number of views. "New Params" refers to additional parameters during inference besides the parameters of the original ViT backbone and linear classifier. + +
| Methods | Pretrain | Views | GFLOPs | Extra GFLOPs | Param (M) | New Param (M) | Top-1 | Top-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Methods with full fine-tuning | | | | | | | | |
| UniFormer-B [28] | IN1K | 32×3×4 | 3108 | - | 50 | - | 83.0 | 95.4 |
| TimeSformer-L [4] | IN21K | 96×3×1 | 7140 | - | 121 | - | 80.7 | 94.7 |
| VideoSwin-L [41] | IN21K | 32×3×4 | 7248 | - | 197 | - | 83.1 | 95.9 |
| MViTv2-L(↑312) [34] | IN21K | 40×5×3 | 42420 | - | 218 | - | 86.1 | 97.0 |
| ViViT-L/16x2 FE [1] | JFT | 32×3×1 | 11940 | - | 311 | - | 83.5 | 94.3 |
| MTV-L [70] | JFT | 32×3×4 | 18050 | - | 876 | - | 84.3 | 96.3 |
| ViT-B/16 [48] | CLIP | 8×1×3 | 422 | 0 | 86 | 0 | 81.0 | 95.5 |
| ActionCLIP-B/16 [62] | CLIP | 32×3×10 | 16893 | 13 | 142 | 56 | 83.8 | 97.1 |
| X-CLIP ViT-L/14 [45] | CLIP | 8×3×4 | 7896 | 107 | 420 | 116 | 87.1 | 97.6 |
| Text4Vis ViT-L/14 [65] | CLIP | 32×3×4 | 19944 | - | 347 | 43 | 87.1 | 97.4 |
| Methods with PETL | | | | | | | | |
| VideoPrompt ViT-B/16 [24] | CLIP | 16×5×1 | - | - | - | - | 76.9 | 93.5 |
| ST-Adapter ViT-B/16 [48] | IN21K | 8×1×3 | 455 | 33 | 93 | 7 | 76.6 | - |
| ST-Adapter ViT-L/14 [48] | CLIP | 32×1×3 | 8248 | | 322 | 19 | 87.2 | 97.6 |
| EVL ViT-B/16 [38] | IN21K | 8×1×3 | 454 | 32 | 115 | 29 | 75.4 | - |
| EVL ViT-L/14 [38] | CLIP | 8×1×3 | 2022 | 76 | 362 | 58 | 86.3 | - |
| AIM ViT-B/16 [71] | IN21K | 8×1×3 | 624 | 202 | 100 | 14 | 78.8 | - |
| AIM ViT-L/14 [71] | CLIP | 32×1×3 | 11208 | 3425 | 341 | 38 | 87.5 | 97.7 |
| ZeroI2V ViT-B/16 | IN21K | 8×1×3 | 422 | 0 | 86 | 0 | 78.6 | - |
| ZeroI2V ViT-B/16 | CLIP | 8×1×3 | 422 | 0 | 86 | 0 | 83.0 | 95.8 |
| ZeroI2V ViT-B/16 | CLIP | 16×1×3 | 844 | 0 | 86 | 0 | 83.4 | 96.2 |
| ZeroI2V ViT-B/16 | CLIP | 32×1×3 | 1688 | 0 | 86 | 0 | 83.7 | 96.4 |
| ZeroI2V ViT-L/14 | CLIP | 8×1×3 | 1946 | 0 | 304 | 0 | 86.3 | 97.4 |
| ZeroI2V ViT-L/14 | CLIP | 16×1×3 | 3892 | 0 | 304 | 0 | 86.8 | 97.6 |
| ZeroI2V ViT-L/14 | CLIP | 32×1×3 | 7783 | 0 | 304 | 0 | 87.2 | 97.6 |
+ +- Even with minimal parameters being fine-tuned, our Linear Adapter significantly outperforms full fine-tuning (66.3 vs 63.2). Despite updating the fewest parameters, the linear probe performs poorly in image-to-video transfer. +- Tuning only the temporal head achieves about $95\%$ of the full fine-tuning performance, suggesting that extensive fine-tuning of the spatial head may not be necessary to attain satisfactory transfer performance due to the decoupling of spatial and temporal modeling reduces the difficulty of adaptation. +- Our Full Adaptation strategy is not only effective for linear adapters, but also for non-linear adapters such as the ST-Adapter and GELU Adapter. It not only enhances their adaptation performance, but also eliminates the performance gap between linear and non-linear structures. +- Due to the inflexibility of the parallel structure, for non-square matrices like $W_{\mathrm{mlp}}$ , LoRA requires more parameters under the same bottleneck width. It needs to decrease the bottleneck width of the low-rank matrix to align it with the number of parameters of the linear adapter. However, this reduction in bottleneck width can limit its adaptation ability, ultimately leading to results that are significantly worse than those of the Linear Adapter. + +Table 4: Results on Something-Something v2 validation set. $\dagger$ indicates that the model is pre-trained on both IN21K (except for Uniformer [28] which uses IN1K) and K400/K600. Other notations are the same as Table 3. + +
| Methods | Pretrain | Views | GFLOPs | Extra GFLOPs | Param (M) | New Param (M) | Top-1 | Top-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Methods with full fine-tuning | | | | | | | | |
| TimeSformer-L [4] | IN21K | 64×3×1 | 7140 | - | 121 | - | 62.4 | - |
| ViViT-L [1] | K400† | 16×3×4 | 11892 | - | 311 | - | 65.4 | 89.8 |
| MTV-B(↑320) [70] | K400† | 32×3×4 | 11160 | - | 310 | - | 68.5 | 90.4 |
| VideoSwin-B [41] | K400† | 32×3×1 | 963 | - | 89 | - | 69.6 | 92.7 |
| MViTv2-L(↑312) [34] | K400† | 40×3×1 | 8484 | - | 213 | - | 73.3 | 94.1 |
| UniFormer-B [28] | K600† | 32×3×1 | 777 | - | 50 | - | 71.2 | 92.8 |
| ViT-L/14 [12] | CLIP | 8×3×1 | 1946 | 0 | 304 | 0 | 48.7 | 77.5 |
| ILA ViT-L/14 [58] | CLIP | 8×3×4 | 10884 | 3100 | 529 | 225 | 67.8 | 90.5 |
| Methods with PETL | | | | | | | | |
| ST-Adapter ViT-B/16 [48] | IN21K | 8×3×1 | 455 | 33 | 93 | 7 | 62.8 | - |
| ST-Adapter ViT-B/16 [48] | CLIP | 32×3×1 | 1955 | 267 | 100 | 14 | 69.5 | 92.6 |
| EVL ViT-L/14 [38] | CLIP | 32×3×1 | 9641 | 1858 | 479 | 175 | 66.7 | - |
| AIM ViT-B/16 | IN21K | 8×3×1 | 624 | 202 | 100 | 14 | 62.0 | - |
| AIM ViT-L/14 [71] | CLIP | 32×3×1 | 11508 | 3725 | 354 | 50 | 70.6 | 92.7 |
| ZeroI2V ViT-B/16 | IN21K | 8×3×1 | 422 | 0 | 86 | 0 | 65.3 | - |
| ZeroI2V ViT-B/16 | CLIP | 8×3×1 | 422 | 0 | 86 | 0 | 67.7 | 90.8 |
| ZeroI2V ViT-B/16 | CLIP | 16×3×1 | 844 | 0 | 86 | 0 | 69.4 | 91.7 |
| ZeroI2V ViT-B/16 | CLIP | 32×3×1 | 1688 | 0 | 86 | 0 | 70.1 | 92.4 |
| ZeroI2V ViT-L/14 | CLIP | 8×3×1 | 1946 | 0 | 304 | 0 | 70.1 | 91.8 |
| ZeroI2V ViT-L/14 | CLIP | 16×3×1 | 3892 | 0 | 304 | 0 | 71.4 | 93.0 |
| ZeroI2V ViT-L/14 | CLIP | 32×3×1 | 7783 | 0 | 304 | 0 | 72.2 | 93.0 |
+ +# 4.3 Fully-supervised Experiments + +Results on K400 As shown in Table 3, our method has significant advantages over traditional full fine-tuning methods, achieving better performance with much lower computational cost. For example, our ZeroI2V ViT-L/14 with an input of 8 frames outperforms MViTv2 [34] (86.3 vs 86.1), while requiring more than 20 times fewer GFLOPs (1946 vs 42420). Compared to multi-modal methods such as ActionCLIP [62] and X-CLIP [45], which require an additional text branch and fine-tune the entire model end-to-end, our ZeroI2V can achieve comparable performance using only the visual encoder. Moreover, although our proposed ZeroI2V doesn't increase computational or parameter costs during inference compared with the previous PETL method, it can still achieve similar or even better performance. For example, on ViT-B/16, ZeroI2V with an input of 8 frames can surpass ST-Adapter [48] with an input of 32 frames (83.0 vs 82.7) with much lower GFLOPs (422 vs 1821). On ViT-L/14, ZeroI2V achieves the same performance as EVL [38], which requires an additional 58M parameters. And ZeroI2V achieves comparable performance to AIM [71] (87.2 vs 87.5) with a nearly $30\%$ reduction in GFLOPs (7783 vs 11208). + +Results on SSv2 As shown in Table 4, thanks to the effectiveness of STDHA in temporal modeling, our method outperforms most full fine-tuning methods, even though many of them have been pre-trained on the Kinetics dataset. Our ZeroI2V has a significant improvement compared to directly full fine-tuning ViT-L/16 pre-trained with CLIP (70.1 vs 48.7) with the same number of parameters + +Table 5: Comparing the state-of-the-art video recognition methods on UCF101, HMDB51 and Diving48. For UCF101 and HMDB51, we test our method and report the 3-split mean Top-1 accuracy for both datasets following ST-Adapter [48]. And for Diving48, we test our method with 1 temporal clip following AIM [71]. + +
| Method | Pretrain | UCF101 | HMDB51 | Diving48 |
| --- | --- | --- | --- | --- |
| Methods with full fine-tuning | | | | |
| I3D [8] | ImageNet+K400 | 95.6 | 74.8 | - |
| S3D [67] | ImageNet+K400 | 96.8 | 75.9 | - |
| SlowOnly-8x8-R101 [15] | Kinetics+OmniSource | 97.3 | 79.0 | - |
| TimeSformer-L [4] | IN21K | - | - | 81.0 |
| VideoSwin-B [41] | IN21K | - | - | 81.9 |
| Methods with PETL | | | | |
| VideoPrompt [24] | CLIP | 93.6 | 66.4 | - |
| AIM ViT-B/16 [71] | CLIP | - | - | 88.9 |
| AIM ViT-L/14 [71] | CLIP | - | - | 90.6 |
| ST-Adapter ViT-B/16 [48] | CLIP+K400 | 96.4 | 77.7 | - |
| ST-Adapter ViT-L/14 [48] | CLIP+K400 | 98.1 | 81.7 | - |
| ZeroI2V ViT-B/16 | CLIP | 95.6 | 73.7 | 89.7 |
| ZeroI2V ViT-B/16 | CLIP+K400 | 97.7 | 78.5 | - |
| ZeroI2V ViT-L/14 | CLIP | 97.8 | 79.9 | 91.4 |
| ZeroI2V ViT-L/14 | CLIP+K400 | 98.6 | 83.4 | - |

Table 6: Comparing the SoTA action detection methods on AVA 2.2.
| Method | Pretrain | Frozen Backbone | Frames | mAP |
| --- | --- | --- | --- | --- |
| SlowFast-R101 [15] | K400 | | 8 | 23.8 |
| MViTv2-B [34] | K400 | | 32 | 28.1 |
| VideoMAE-B [56] | K400 | | 16 | 31.8 |
| VideoMAE-B [56] | K400 wo/ labels | | 16 | 26.7 |
| CLIP ViT-B/16 | CLIP | | 8 | 18.3 |
| ZeroI2V ViT-B/16 | CLIP | | 8 | 26.4 |
+ +and computation. Compared to other PETL methods, ZeroI2V outperforms ST-Adapter [48] on ViT-B/16 (70.1 vs 69.5) with lower GFLOPs (1688 vs 1955). Additionally, ZeroI2V significantly surpasses both AVL [38] and AIM [71] (71.4 vs 66.7, 70.6) on ViT-L/14 with much lower GFLOPs (3892 vs 9641, 11508) and new parameters (0M vs 175M, 50M). + +Results on smaller datasets As shown in Table 5, on three relatively small datasets, our method achieves state-of-the-art performance on UCF101, HMDB51, and Diving48. This demonstrates a clear performance advantage over both full-finetuning methods and PETL methods previously. + +Results on action detection In addition to the task of action recognition, to understand the capability of our method in fine-grained spatial understanding, we also evaluate our method on action detection dataset AVA [17]. Following the setting of VideoMAE [56], we evaluate the top 60 common classes using the mean Average Precision (mAP) as the metric under an IoU threshold of 0.5. As shown in Table 6, compared to using the original image CLIP features, our ZeroI2V achieved a significant performance improvement (26.4 vs 18.3) with the same number of parameters and computation. It's noteworthy that our method was not + +Table 7: Comparing the SoTA video recognition methods on the VidTAB [32]. + +
| Method | Pretrain Data | Avg | Action (DS) | Action (LV) | Science (MS) | Science (AB) | Safety (HC) | Safety (FF) | Quality (QA) | Emotion (EA) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CLIP ViT-L/14 [51] | CLIP | 42.8 | 31.2 | 38.0 | 32.3 | 36.3 | 50.3 | 58.5 | 67.7 | 28.1 |
| ViCLIP ViT-L/14 [64] | CLIP+InternVid200M | 42.7 | 36.7 | 43.9 | 30.2 | 36.8 | 46.9 | 54.8 | 65.4 | 27.2 |
| ST-Adapter ViT-L/14 [48] | CLIP | 46.9 | 43.0 | 45.0 | 31.2 | 39.4 | 49.4 | 64.9 | 72.3 | 29.9 |
| ZeroI2V ViT-L/14 | CLIP | 46.5 | 41.3 | 46.8 | 31.2 | 39.3 | 47.2 | 64.6 | 70.6 | 30.6 |

Table 8: Inference latency and throughput. All results are obtained using the same V100-32G with PyTorch-built mixed precision, using a batch size of 1 to measure latency and the optimal possible batch size to measure throughput before out of memory.
| Model | Views | GFLOPs | Latency (ms) | Throughput (V/s) | K400 (Top-1) | SSv2 (Top-1) |
| --- | --- | --- | --- | --- | --- | --- |
| Uniformer-B [28] | 32×4 | 1036 | 245.38 | 4.24 | 82.9 | - |
| EVL ViT-B/16 [38] | 8×3 | 454 | 53.87 | 24.04 | 82.9 | 61.0 |
| ViT-B/16 [12] | 8×3 | 422 | 28.72 | 40.08 | 81.0 | 44.0 |
| ZeroI2V ViT-B/16 | 8×3 | 422 | 28.89 | 40.08 | 83.0 | 67.7 |
+ +pre-trained on action recognition datasets such as Kinetics. Instead, we directly applied image-to-video transfer on the AVA dataset. Remarkably, our method still managed to achieve performance on par with full-finetuning methods and self-supervised methods that underwent pre-training using the Kinetics dataset, even when using only 8 frames as input. In summary, our ZeroI2V demonstrates outstanding potential in video tasks beyond recognition. + +# 4.4 Few-shot Experiments + +To demonstrate the adaptation capability of our method in few-shot scenarios, we conduct experiments on the Video Task Adaptation Benchmark (VidTAB). As show in Table 7 The results show that our method can effectively enhance the adaptation of the image model to video tasks using only a few samples. Compared to ST-Adapter [48], our approach achieves comparable results while enjoying the advantage of parameter and inference efficiency. + +# 4.5 Efficiency analysis + +Comparison of inference efficiency We compared the inference efficiency of our method with other methods on the same hardware device. As shown in Table 8, under comparable accuracy, the throughput of our method is 10 times that of Uniformer [28], Compared to the original ViT-B, our method introduces negligible additional latency during inference while achieving superior performance. In comparison with AVL [38], it can also be seen that the impact of the additional computational module on the actual runtime latency (28.89 ms vs 53.87 ms) is greater than that reflected by GFLOPs (422 vs 454). + +Table 9: Comparison of training cost. Our results are obtained using the same V100-32G with PyTorch-built mixed precision, following AVL [38]. "†" indicates that the epoch is estimated based on the batch size and training steps of the original paper. "Memory" refers to the GPU memory usage when the batch size is 8. + +
| Model (Frames) | Dataset | Training Epochs | Training GPU Hours | Tunable Param (M) | Memory (G) | Top-1 |
| --- | --- | --- | --- | --- | --- | --- |
| Uniformer-B [28] (32) | K400 | 110 | 5000 × V100 | 50 | - | 82.9 |
| ActionCLIP ViT-B/16 [62] (16) | K400 | 50 | 480 × RTX3090 | 142 | - | 82.6 |
| EVL ViT-B/16 [38] (8) | K400 | 53† | 60 × V100 | 29 | 2.2 | 82.9 |
| | SSv2 | 46† | 75 × V100 | 98 | 5.6 | 61.0 |
| ST-Adapter ViT-B/16 [48] (8) | K400 | 11† | 23 × V100 | 7 | 6.9 | 82.0 |
| | SSv2 | 38† | 60 × V100 | 14 | 7.6 | 67.1 |
| AIM ViT-B/16 [71] (8) | K400 | 30 | 120 × V100 | 11 | 8.7 | 83.9 |
| | SSv2 | 50 | 150 × V100 | 14 | 9.0 | 66.4 |
| ZeroI2V ViT-B/16 (8) | K400 | 40 | 100 × V100 | 14 | 7.6 | 83.0 |
| | SSv2 | 50 | 90 × V100 | 14 | 7.6 | 67.3 |
+ +Comparison of training cost We compared the training cost of our method with previous methods in Table 9. It can be seen that compared to previous full fine-tuning methods such as Uniformer [28] and ActionCLIP [62], our method significantly reduces training cost. Compared to the previous PETL method, our method does not have a significant advantage in training efficiency due to the use of dense adapters. AVL [38], which does not need to insert adapters into the frozen backbone, avoids some of the cost of backpropagation and therefore has lower memory usage. ST-Adapter [48], due to its fewer trainable parameters, has a faster convergence speed, but its memory usage is close to our method. Nonetheless, in contrast to AIM [71] that imposes an additional computational burden for temporal modeling, our STDHA method, which does not introduce extra learnable parameters, ensures that ZeroI2V maintains superior training efficiency. We believe that it is worthwhile and acceptable to exchange some training costs for a reduction in inference costs. + +# 5 Conclusions + +In this work, we present a new approach for parameter-efficient image-to-video transfer learning, called ZeroI2V. By fully leveraging the powerful representational capabilities of pre-trained image models, our approach enables image transformers to perform video tasks without introducing extra costs during inferences. Our proposed STDHA achieves efficient spatial-temporal modeling at zero extra computation and parameters. In addition, through structural reparameterization and full adaptation strategies, we successfully use a linear structure to achieve zero extra inference cost image-to-video adaptation for the first time. ZeroI2V shows strong performance compared to previous full fine-tuning and PETL methods on widely used video understanding benchmarks while maintaining parameter and inference efficiency. Due to the simplicity and versatility of our method, we believe it can be easily extended to other video tasks and even multi-modal understanding tasks. We will further investigate this direction in future work. + +Acknowledgements. This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the National Natural Science Foundation of China (No. 62076119, No. 61921006), the Fundamental Research Funds for the Central Universities (No. 020214380119), and the Collaborative Innovation Center of Novel Software Technology and Industrialization. + +# References + +1. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lucic, M., Schmid, C.: Vivit: A video vision transformer. In: Int. Conf. Comput. Vis. pp. 6816-6826 (2021) +2. Ba, L.J., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016) +3. Bao, H., Dong, L., Piao, S., Wei, F.: Beit: BERT pre-training of image transformers. In: Int. Conf. Learn. Represent. (2022) +4. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? In: Int. Conf. Mach. Learn. vol. 139, pp. 813-824 (2021) +5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Adv. Neural Inform. Process. Syst. vol. 33, pp. 1877-1901 (2020) +6. Bulat, A., Pérez-Rúa, J., Sudhakaran, S., Martínez, B., Tzimiropoulos, G.: Spacetime mixing attention for video transformer. In: Adv. Neural Inform. Process. Syst. pp. 19594-19607 (2021) +7. 
Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: Int. Conf. Comput. Vis. pp. 9630-9640 (2021) +8. Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the kinetics dataset. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4724-4733 (2017) +9. Chen, S., Ge, C., Tong, Z., Wang, J., Song, Y., Wang, J., Luo, P.: Adaptformer: Adapting vision transformers for scalable visual recognition. In: Adv. Neural Inform. Process. Syst. (2022) +0. Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., Jitsev, J.: Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143 (2022) +1. Devlin, J., Chang, M., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT. pp. 4171-4186 (2019) +2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Int. Conf. Learn. Represent. (2021) +3. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. In: Int. Conf. Comput. Vis. pp. 6804-6815 (2021) +4. Feichtenhofer, C.: X3D: expanding architectures for efficient video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 200-210 (2020) +5. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Int. Conf. Comput. Vis. pp. 6201-6210 (2019) +6. Goyal, R., Kahou, S.E., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fründ, I., Yianilos, P., Mueller-Freitag, M., Hoppe, F., Thurau, C., Bax, I., Memisevic, R.: The "something something" video database for learning + +and evaluating visual common sense. In: Int. Conf. Comput. Vis. pp. 5843-5851. IEEE Computer Society (2017) +17. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 6047-6056 (2018) +18. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.B.: Masked autoencoders are scalable vision learners. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 15979-15988 (2022) +19. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 9726-9735 (2020) +20. He, X., Li, C., Zhang, P., Yang, J., Wang, X.E.: Parameter-efficient model adaptation for vision transformers. arXiv preprint arXiv:2203.16329 (2022) +21. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., de Laroussilhe, Q., Gesmundo, A., Attariyan, M., Gelly, S.: Parameter-efficient transfer learning for NLP. In: Int. Conf. Mach. Learn. vol. 97, pp. 2790-2799 (2019) +22. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: Lora: Low-rank adaptation of large language models. In: Int. Conf. Learn. Represent. (2022) +23. Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: Eur. Conf. Comput. Vis. pp. 709-727 (2022) +24. Ju, C., Han, T., Zheng, K., Zhang, Y., Xie, W.: Prompting visual-language models for efficient video understanding. 
In: Eur. Conf. Comput. Vis. pp. 105-124. Springer (2022) +25. Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: a large video database for human motion recognition. In: Int. Conf. Comput. Vis. pp. 2556-2563. IEEE (2011) +26. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045-3059 (2021) +27. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Int. Conf. Mach. Learn. vol. 162, pp. 12888-12900 (2022) +28. Li, K., Wang, Y., Gao, P., Song, G., Liu, Y., Li, H., Qiao, Y.: Uniformer: Unified transformer for efficient spatial-temporal representation learning. In: Int. Conf. Learn. Represent. (2022) +29. Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Wang, L., Qiao, Y.: Uniformerv2: Unlocking the potential of image vits for video understanding. In: Int. Conf. Comput. Vis. pp. 1632-1643 (2023) +30. Li, T., Wang, L.: Learning spatiotemporal features via video and text pair discrimination. arXiv preprint arXiv:2001.05691 (2020) +31. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582-4597 (2021) +32. Li, X., Huang, Z., Wang, J., Li, K., Wang, L.: Videoeval: Comprehensive benchmark suite for low-cost evaluation of video foundation model. arXiv preprint arXiv:2407.06491 (2024) + +33. Li, Y., Ji, B., Shi, X., Zhang, J., Kang, B., Wang, L.: TEA: temporal excitation and aggregation for action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 906-915 (2020) +34. Li, Y., Wu, C., Fan, H., Mangalam, K., Xiong, B., Malik, J., Feichtenhofer, C.: Mvitv2: Improved multiscale vision transformers for classification and detection. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4794-4804 (2022) +35. Li, Y., Li, Y., Vasconcelos, N.: Resound: Towards action recognition without representation bias. In: Eur. Conf. Comput. Vis. pp. 513-528 (2018) +36. Lian, D., Zhou, D., Feng, J., Wang, X.: Scaling & shifting your features: A new baseline for efficient model tuning. In: Adv. Neural Inform. Process. Syst. (2022) +37. Lin, J., Gan, C., Wang, K., Han, S.: TSM: temporal shift module for efficient and scalable video understanding on edge devices. IEEE Trans. Pattern Anal. Mach. Intell. 44(5), 2760-2774 (2022) +38. Lin, Z., Geng, S., Zhang, R., Gao, P., de Melo, G., Wang, X., Dai, J., Qiao, Y., Li, H.: Frozen CLIP models are efficient video learners. In: Eur. Conf. Comput. Vis. vol. 13695, pp. 388-404 (2022) +39. Liu, M., Wang, Z., Ji, S.: Non-local graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(12), 10270-10276 (2022) +40. Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin transformer V2: scaling up capacity and resolution. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 11999-12009 (2022) +41. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video swim transformer. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3192-3201 (2022) +42. Liu, Z., Wang, L., Wu, W., Qian, C., Lu, T.: TAM: temporal adaptive module for video recognition. In: Int. Conf. Comput. Vis. pp. 13688-13698 (2021) +43. 
Lu, C., Jin, X., Huang, Z., Hou, Q., Cheng, M., Feng, J.: CMAE-V: contrastive masked autoencoders for video action recognition. arXiv preprint arXiv:2301.06018 (2023) +44. Michel, P., Levy, O., Neubig, G.: Are sixteen heads really better than one? In: Adv. Neural Inform. Process. Syst. pp. 14014-14024 (2019) +45. Ni, B., Peng, H., Chen, M., Zhang, S., Meng, G., Fu, J., Xiang, S., Ling, H.: Expanding language-image pretrained models for general video recognition. In: Eur. Conf. Comput. Vis. vol. 13664, pp. 1-18 (2022) +46. Nie, X., Ni, B., Chang, J., Meng, G., Huo, C., Zhang, Z., Xiang, S., Tian, Q., Pan, C.: Pro-tuning: Unified prompt tuning for vision tasks. arXiv preprint arXiv:2207.14381 (2022) +47. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., Assran, M., Ballas, N., Galuba, W., Howes, R., Huang, P., Li, S., Misra, I., Rabbat, M.G., Sharma, V., Synnaeve, G., Xu, H., Jégou, H., Mairal, J., Labatut, P., Joulin, A., Bojanowski, P.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023) +48. Pan, J., Lin, Z., Zhu, X., Shao, J., Li, H.: St-adapter: Parameter-efficient image-to-video transfer learning. In: Adv. Neural Inform. Process. Syst. (2022) +49. Pfeiffer, J., Kamath, A., Rückle, A., Cho, K., Gurevych, I.: Adapterfusion: Nondestructive task composition for transfer learning. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 487-503 (2021) +50. Pfeiffer, J., Rückle, A., Poth, C., Kamath, A., Vulic, I., Ruder, S., Cho, K., Gurevych, I.: Adapterhub: A framework for adapting transformers. In: Proceedings of the + +2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 46-54 (2020) +51. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Int. Conf. Mach. Learn. vol. 139, pp. 8748-8763 (2021) +52. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI blog (2018) +53. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) +54. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012) +55. Tan, J., Zhao, X., Shi, X., Kang, B., Wang, L.: Pointtad: Multi-label temporal action detection with learnable query points. NIPS 35, 15268-15280 (2022) +56. Tong, Z., Song, Y., Wang, J., Wang, L.: Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In: Adv. Neural Inform. Process. Syst. (2022) +57. Tschannen, M., Mustafa, B., Houlsby, N.: Clippo: Image-and-language understanding from pixels only. arXiv preprint arXiv:2212.08045 (2022) +58. Tu, S., Dai, Q., Wu, Z., Cheng, Z., Hu, H., Jiang, Y.: Implicit temporal modeling with learnable alignment for video recognition. In: Int. Conf. Comput. Vis. (2023) +59. Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., Qiao, Y.: Videomae V2: scaling video masked autoencoders with dual masking. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023) +60. 
Wang, L., Tong, Z., Ji, B., Wu, G.: TDN: temporal difference networks for efficient action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1895-1904 (2021) +61. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Gool, L.V.: Temporal segment networks: Towards good practices for deep action recognition. In: Eur. Conf. Comput. Vis. vol. 9912, pp. 20-36 (2016) +62. Wang, M., Xing, J., Liu, Y.: Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472 (2021) +63. Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y., Zhou, L., Yuan, L.: BEVT: BERT pretraining of video transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 14713-14723 (2022) +64. Wang, Y., He, Y., Li, Y., Li, K., Yu, J., Ma, X., Li, X., Chen, G., Chen, X., Wang, Y., et al.: Intervid: A large-scale video-text dataset for multimodal understanding and generation. In: ICLR (2024) +65. Wu, W., Sun, Z., Ouyang, W.: Revisiting classifier: Transferring vision-language models for video recognition. In: AAAI Conf. Artif. Intell. pp. 2847-2855 (2023) +66. Xiang, W., Li, C., Wang, B., Wei, X., Hua, X., Zhang, L.: Spatiotemporal self-attention modeling with temporal patch shift for action recognition. In: Eur. Conf. Comput. Vis. vol. 13663, pp. 627-644 (2022) +67. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Eur. Conf. Comput. Vis. pp. 305–321 (2018) +68. Xu, C., Zhu, Y., Shen, H., Chen, B., Liao, Y., Chen, X., Wang, L.: Progressive visual prompt learning with contrastive feature re-formation. arXiv preprint arXiv:2304.08386 (2023) + +69. Xu, C., Zhu, Y., Zhang, G., Shen, H., Liao, Y., Chen, X., Wu, G., Wang, L.: Dpl: Decoupled prompt learning for vision-language models. arXiv preprint arXiv:2308.10061 (2023) +70. Yan, S., Xiong, X., Arnab, A., Lu, Z., Zhang, M., Sun, C., Schmid, C.: Multiview transformers for video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3323-3333 (2022) +71. Yang, T., Zhu, Y., Xie, Y., Zhang, A., Chen, C., Li, M.: Aim: Adapting image models for efficient video action recognition. In: Int. Conf. Learn. Represent. (2023) +72. Zaken, E.B., Goldberg, Y., Ravfogel, S.: Bitfit: Simple parameter-efficient fin-tuning for transformer-based masked language-models. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 1-9 (2022) +73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1204-1213 (2022) +74. Zhang, G., Zhu, Y., Wang, H., Chen, Y., Wu, G., Wang, L.: Extracting motion and appearance via inter-frame attention for efficient video frame interpolation. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023) +75. Zhang, H., Hao, Y., Ngo, C.: Token shift transformer for video classification. In: ACM Int. Conf. Multimedia. pp. 917-925 (2021) +76. Zhang, Y., Zhou, K., Liu, Z.: Neural prompt search. arXiv preprint arXiv:2206.04673 (2022) +77. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Eur. Conf. Comput. Vis. vol. 11205, pp. 831-846 (2018) +78. Zhu, Y., Ji, Y., Zhao, Z., Wu, G., Wang, L.: Awt: Transferring vision-language models via augmentation, weighting, and transportation. arXiv preprint arXiv:2407.04603 (2024) +79. Zhu, Y., Zhang, G., Tan, J., Wu, G., Wang, L.: Dual detrs for multi-label temporal action detection. In: CVPR. pp. 
18559-18569 (2024) \ No newline at end of file diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/images.zip b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7c7d4e10c0c2d517b5469ff0db8d911356c78bf8 --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce5776869e2e19f60dc977fdf4386e7759c8fe0f148d261497b0e97bf6661932 +size 635441 diff --git a/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/layout.json b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4ac6ac7b52462f93dad06f8e1a3355c0f0dee899 --- /dev/null +++ b/2024/ZeroI2V_ Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video/layout.json @@ -0,0 +1,9294 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 146, + 111, + 470, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 111, + 470, + 148 + ], + "spans": [ + { + "bbox": [ + 146, + 111, + 470, + 148 + ], + "type": "text", + "content": "ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "spans": [ + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "text", + "content": "Xinhao Li" + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "text", + "content": ", Yuhan Zhu" + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "text", + "content": ", and Limin Wang" + }, + { + "bbox": [ + 190, + 167, + 423, + 181 + ], + "type": "inline_equation", + "content": "^{1,2*}" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 152, + 189, + 462, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 189, + 462, + 201 + ], + "spans": [ + { + "bbox": [ + 152, + 189, + 462, + 201 + ], + "type": "text", + "content": "1 State Key Laboratory for Novel Software Technology, Nanjing University" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 252, + 201, + 362, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 201, + 362, + 213 + ], + "spans": [ + { + "bbox": [ + 252, + 201, + 362, + 213 + ], + "type": "text", + "content": "2 Shanghai AI Laboratory" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 152, + 213, + 461, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 213, + 461, + 224 + ], + "spans": [ + { + "bbox": [ + 152, + 213, + 461, + 224 + ], + "type": "text", + "content": "xinhaoli00@outlook.com zyuhan0812@gmail.com lmwang@nju.edu.cn" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 225, + 224, + 389, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 224, + 389, + 234 + ], + "spans": [ + { + "bbox": [ + 225, + 224, + 389, + 234 + ], + "type": "text", + "content": "https://github.com/MCG-NJU/ZeroI2V" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 159, 
+ 266, + 455, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 266, + 455, + 540 + ], + "spans": [ + { + "bbox": [ + 159, + 266, + 455, + 540 + ], + "type": "text", + "content": "Abstract. Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus toward parameter-efficient image-to-video adaptation. However, these adaptation strategies inevitably introduce extra computational costs to deal with the domain gap and temporal modeling in videos. In this paper, we present a new adaptation paradigm (ZeroI2V) to transfer the image transformers to video recognition tasks (i.e., introduce zero extra cost to the original models during inference). To achieve this goal, we present two core designs. First, to capture the dynamics in videos and reduce the difficulty of image-to-video adaptation, we exploit the flexibility of self-attention and introduce spatial-temporal dual-headed attention (STDHA). This approach efficiently endows the image transformers with temporal modeling capability at zero extra parameters and computation. Second, to handle the domain gap between images and videos, we propose a linear adaption strategy that utilizes lightweight densely placed linear adapters to fully transfer the frozen image models to video recognition. Thanks to the customized linear design, all newly added adapters could be easily merged with the original modules through structural reparameterization after training, enabling zero extra cost during inference. Extensive experiments on representative fully-supervised and few-shot video recognition benchmarks showcase that ZeroI2V can match or even outperform previous state-of-the-art methods while enjoying superior parameter and inference efficiency." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "spans": [ + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "text", + "content": "Keywords: Video understanding " + }, + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "text", + "content": " Image-to-video adaptation " + }, + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 160, + 551, + 451, + 562 + ], + "type": "text", + "content": " PEFT" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 584, + 231, + 596 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 584, + 231, + 596 + ], + "spans": [ + { + "bbox": [ + 132, + 584, + 231, + 596 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 609, + 482, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 609, + 482, + 647 + ], + "spans": [ + { + "bbox": [ + 130, + 609, + 482, + 647 + ], + "type": "text", + "content": "Adapting pre-trained foundation models such as BERT [11] and GPT [5, 52, 53] through efficient strategies has yielded excellent performance on downstream tasks in natural language understanding. 
This new paradigm is becoming popular in" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 653, + 236, + 666 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 653, + 236, + 666 + ], + "spans": [ + { + "bbox": [ + 133, + 653, + 236, + 666 + ], + "type": "text", + "content": "* Corresponding author." + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 149, + 116, + 305, + 254 + ], + "blocks": [ + { + "bbox": [ + 149, + 116, + 305, + 254 + ], + "lines": [ + { + "bbox": [ + 149, + 116, + 305, + 254 + ], + "spans": [ + { + "bbox": [ + 149, + 116, + 305, + 254 + ], + "type": "image", + "image_path": "9d885033d26f55992ceb6d9dd3af76d211ba17d868f74ae09e14e0fe9ce020f5.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 265, + 482, + 320 + ], + "lines": [ + { + "bbox": [ + 130, + 265, + 482, + 320 + ], + "spans": [ + { + "bbox": [ + 130, + 265, + 482, + 320 + ], + "type": "text", + "content": "Fig. 1: Left: Our proposed image-to-video transfer learning method. Right: Comparison of PETL methods on SSv2 validation set. For a more intuitive comparison, the views of the methods in the figure are all " + }, + { + "bbox": [ + 130, + 265, + 482, + 320 + ], + "type": "inline_equation", + "content": "8 \\times 3 \\times 1" + }, + { + "bbox": [ + 130, + 265, + 482, + 320 + ], + "type": "text", + "content": ". Two core techniques enable us to achieve superior performance on video tasks without introducing additional computation and parameters during inference." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 312, + 118, + 472, + 254 + ], + "blocks": [ + { + "bbox": [ + 312, + 118, + 472, + 254 + ], + "lines": [ + { + "bbox": [ + 312, + 118, + 472, + 254 + ], + "spans": [ + { + "bbox": [ + 312, + 118, + 472, + 254 + ], + "type": "image", + "image_path": "3a140e021a7c64e10bc70d0d3348dd3dab076997e9ac9bc9482cb5ef8c8708d3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 349, + 482, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 349, + 482, + 434 + ], + "spans": [ + { + "bbox": [ + 130, + 349, + 482, + 434 + ], + "type": "text", + "content": "computer vision due to the available pre-trained image models such as CLIP [51] and DINO [7, 47]. These models could be easily adapted to downstream tasks through linear probes, fine-tuning, or even zero-shot recognition, exhibiting robustness and strong transfer capabilities similar to those of large-scale language models. Recently, parameter-efficient transfer learning (PETL) [9,23,38,46,48,78] is becoming an efficient paradigm to adapt these large pre-trained models due to their huge numbers of parameters and high computational cost of full fine-tuning." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 435, + 482, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 435, + 482, + 591 + ], + "spans": [ + { + "bbox": [ + 130, + 435, + 482, + 591 + ], + "type": "text", + "content": "For video understanding, there exist several large pre-trained video models [56, 59] from self-supervised learning, but these models are of high computational complexity due to the joint spatiotemporal attentions. 
Therefore, adapting pretrained image models to the video domain through efficient strategies is still a practical solution to video recognition. In fact, the state-of-the-art video networks have long relied on the pre-trained image models by inflating the kernels [1,8,39,41] or inserting plug-and-play temporal modules [33,37,42,60,61]. However, most of these methods necessitate full fine-tuning, which involves updating all the model parameters during training on video datasets. As the scale of pre-trained models increases, full fine-tuning becomes impractical due to the high training costs and the risk of overfitting or even catastrophic forgetting when the downstream data is limited. In addition, these methods often inevitably introduce extra costs to the adapted video models due to these newly added modules." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 482, + 666 + ], + "type": "text", + "content": "In this paper, we aim to present a new efficient paradigm of adapting image transformers to video downstream tasks with two main objectives. First, inspired by the PETL methods in NLP [21,22,26,31] and image understanding [9,23,46], we aim to devise a parameter-efficient transfer technique from image to video, which can effectively reduce the risk of over-fitting and greatly improve the training efficiency. Second, to overcome the issue of high computation in the adapted" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 163 + ], + "type": "text", + "content": "video models, we try to present a new adaptation method without introducing any extra computations to the final video models during inference. This zero extra inference cost adaptation would allow for more efficient deployment of transferred video models in real applications." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 164, + 483, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 164, + 483, + 295 + ], + "spans": [ + { + "bbox": [ + 130, + 164, + 483, + 295 + ], + "type": "text", + "content": "To achieve the above two objectives, we propose a novel transfer learning method (as shown in Figure 1) that can utilize the off-the-shelf pre-trained image transformers to achieve excellent performance on video tasks without additional parameters and computation during inference. 
To be specific, for the temporal modeling required for video tasks, we transform multi-head self-attention into spatio-temporal dual-head attention (STDHA) by reassigning some heads to achieve temporal modeling at zero computation and zero parameters. For image-to-video transfer, we explore the strategy of using linear adapters to fully adapt the parameters of each part of the model and merge them with the frozen original parameters through structural reparameterization after training, thus achieving zero extra cost during inference." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 297, + 483, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 297, + 483, + 465 + ], + "spans": [ + { + "bbox": [ + 130, + 297, + 483, + 465 + ], + "type": "text", + "content": "To summarize, we make the following contributions: 1) We propose a new approach for parameter-efficient image-to-video transfer learning that can achieve the efficient adaptation of transformers from image to video without introducing additional computation and parameters during inference. 2) We introduce a novel attention mechanism named Spatial-Temporal Dual-Headed Attention (STDHA), which utilizes the flexibility of self-attention to achieve temporal modeling without introducing extra computation and parameters. 3) To the best of our knowledge, we are the first to investigate the achievement of zero extra inference cost image-to-video adaptation through the utilization of a linear structure. We establish an empirical study by conducting extensive experiments with a diverse range of adaptation strategies. 4) Our method achieves comparable or even better performance than state-of-the-art methods on popular fully-supervised and few-shot video recognition benchmarks while enjoying the advantage of parameter and inference efficiency." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 483, + 234, + 495 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 483, + 234, + 495 + ], + "spans": [ + { + "bbox": [ + 132, + 483, + 234, + 495 + ], + "type": "text", + "content": "2 Related work" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 509, + 482, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 509, + 482, + 604 + ], + "spans": [ + { + "bbox": [ + 130, + 509, + 482, + 604 + ], + "type": "text", + "content": "Pre-trained image transformers The powerful scalability of ViT [12] brings more possibilities to the pre-trained image model. In addition to the traditional supervised approach [12,40,73], recent works [3,7,18,19,47] utilize self-supervised learning to effectively learn representations from unlabeled data. Moreover, several works [10,27,51,57] adopt large-scale multi-modal data (e.g., text-image pairs) to learn visual representations with great transferability. Our proposed adaptation strategy can leverage these off-the-shelf pre-trained image transformers to achieve outstanding performance on video tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "content": "Video action recognition is crucial for downstream tasks [55, 79]. Traditionally, state-of-the-art methods have long relied on image models. 
Previous works for action recognition can be classified into two categories: one is to extend the image model for spatial-temporal modeling by inflating weights and structures [8, 13-15, 28, 34, 41], while the other is to directly utilize the image model as the" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 212 + ], + "type": "text", + "content": "backbone and insert plug-and-play modules for temporal modeling [37, 42, 60, 61, 77]. Following the success of new training paradigms in image understanding, several works have attempted to learn transferable video representations via self-supervised learning [43, 56, 59, 63] or multi-modal video-text pre-training [29, 30, 45, 62]. However, the above methods usually require full fine-tuning of the entire model or training from scratch, resulting in high training costs and additional computational overhead. In this work, we avoid the above problems by adapting the pre-trained image transformers to video tasks in an efficient manner." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 213, + 482, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 213, + 482, + 429 + ], + "spans": [ + { + "bbox": [ + 130, + 213, + 482, + 429 + ], + "type": "text", + "content": "Parameter-efficient transfer learning To address the issue of training inefficiency caused by the continuous growth of model size, Parameter-efficient transfer learning (PETL) is initially introduced in NLP [21, 22, 26, 31, 49, 50, 72] and subsequently applied to vision tasks [9, 20, 23, 36, 46, 68, 69, 78]. These techniques aim to achieve comparable or even superior performance on other tasks by fine-tuning only a small subset of trainable parameters. Most PETL methods [9, 20, 23, 36, 46, 76, 78] in vision domain are limited to transfer within the same modality (e.g., image-to-image or video-to-video). In contrast, our research focuses on image-to-video transfer learning. Despite progress made by recent studies [38, 48, 71], these methods require additional computation and parameters for temporal modeling of video tasks and image-to-video adaptation. For example, AVL [38] incorporates an additional temporal transformer decoder, while ST-Adapter [48] introduces additional adapters with depth-wise 3D convolution layers. Similarly, AIM [71] adds extra adapters and necessitates an additional time attention calculation at each block. In contrast to previous works, our proposed method eschews the introduction of additional computation or parameters during inference, yet still achieves comparable or superior performance compared to previous methods." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 448, + 233, + 462 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 448, + 233, + 462 + ], + "spans": [ + { + "bbox": [ + 132, + 448, + 233, + 462 + ], + "type": "text", + "content": "3 Methodology" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 475, + 482, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 475, + 482, + 536 + ], + "spans": [ + { + "bbox": [ + 130, + 475, + 482, + 536 + ], + "type": "text", + "content": "In this section, we first briefly revisit the basic block of ViT (Sec. 3.1), and then discuss how to utilize the flexibility of self-attention to achieve temporal modeling without introducing additional computation and parameters (Sec. 3.2). Finally, we explain how we implement zero-cost image-to-video adaptation with a serial linear structure (Sec. 3.3)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 555, + 222, + 567 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 555, + 222, + 567 + ], + "spans": [ + { + "bbox": [ + 132, + 555, + 222, + 567 + ], + "type": "text", + "content": "3.1 Preliminary" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 576, + 482, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 576, + 482, + 612 + ], + "spans": [ + { + "bbox": [ + 130, + 576, + 482, + 612 + ], + "type": "text", + "content": "The original ViT [12] block consists of two network layers: multi-head self-attention (MHSA) and multi-layer perceptron (MLP). As shown in Figure 1, a ViT block consists of MHSA and MLP connected in series in a residual structure:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 256, + 635, + 481, + 647 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 256, + 635, + 481, + 647 + ], + "spans": [ + { + "bbox": [ + 256, + 635, + 481, + 647 + ], + "type": "interline_equation", + "content": "z _ {l} = x _ {l} + \\operatorname {M H S A} (\\ln (x _ {l})), \\tag {1}", + "image_path": "6aed7f5db96c347db161a5727fffc0cfdb9505e9be5d0c128a690ba6246ce533.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 244, + 650, + 481, + 662 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 650, + 481, + 662 + ], + "spans": [ + { + "bbox": [ + 244, + 650, + 481, + 662 + ], + "type": "interline_equation", + "content": "x _ {l + 1} = z _ {l} + \\operatorname {M L P} (\\ln (z _ {l})), \\tag {2}", + "image_path": "0d356d6ce3fd451e00d4699595750c8953d98372049944ef8c39772a852f4a05.jpg" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 137, + 142, + 296, + 236 + ], + "blocks": [ + { + "bbox": [ + 137, + 142, + 296, + 236 + ], + "lines": [ + { + "bbox": [ + 137, + 142, + 296, + 236 + ], + "spans": [ + { + "bbox": [ + 137, + 142, + 296, + 236 + ], + "type": "image", + "image_path": "524b5aa9d19533adeb59ad91e6c63388c164c54a738242c8ea1e3c4964d9ebbe.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 299, + 117, + 462, + 238 + ], + "blocks": [ + { + "bbox": [ + 138, + 239, + 306, + 251 + ], + "lines": [ + { + "bbox": [ + 138, + 239, + 306, + 251 + ], + "spans": [ + { + "bbox": [ + 138, + 239, + 306, + 251 + ], + "type": "text", + "content": "(a) Layer merging via reparameterization" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 299, + 117, + 462, + 238 + ], + "lines": [ + { + "bbox": [ + 299, + 117, + 462, + 238 + ], + "spans": [ + { + "bbox": [ + 299, + 117, + 462, + 238 + ], + "type": "image", + "image_path": "f741fddc09ed19e5387c109fd781d3b0371b69bf92493ebdc486d10a532963b3.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 315, + 239, + 491, + 250 + ], + "lines": [ + { + "bbox": [ + 315, + 239, + 491, + 250 + ], + "spans": [ + { + "bbox": [ + 315, + 239, + 491, + 250 + ], + "type": "text", + "content": "(b) Spatial-temporal dual-headed attention" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 151, + 255, + 460, + 266 + ], + "lines": [ + { + "bbox": [ + 151, + 255, + 460, + 266 + ], + "spans": [ + { + "bbox": [ + 151, + 255, + 460, + 266 + ], + "type": "text", + "content": "Fig. 2: Illustration of the proposed linear adaptation and STDHA." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "spans": [ + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "text", + "content": "where LN denotes layer normalization [2] and " + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "inline_equation", + "content": "x_{l}" + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "text", + "content": " represents the input to the " + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "text", + "content": "-th ViT block. We review their specific implementation details. For the sake of simplicity, we ignore the bias and denote " + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "inline_equation", + "content": "X \\in \\mathbb{R}^{n \\times d}" + }, + { + "bbox": [ + 130, + 290, + 481, + 326 + ], + "type": "text", + "content": " as input of MHSA and MLP." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "spans": [ + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": "MHSA first performs three different linear projections " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{attn}}^{Q}, W_{\\mathrm{attn}}^{K}, W_{\\mathrm{attn}}^{V} \\in \\mathbb{R}^{d \\times d}" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": " on the input " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": " to obtain the query " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": " and key-value pairs " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "K, V" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": ". These are then evenly divided into " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": " heads by channel. Each head independently performs the scaled dot-product attention calculation. Finally, the heads are concatenated by channel and then a linear projection " + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{attn}}^{O} \\in \\mathbb{R}^{d \\times d}" + }, + { + "bbox": [ + 130, + 326, + 481, + 397 + ], + "type": "text", + "content": " is performed to obtain the final calculation result:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 217, + 406, + 481, + 421 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 406, + 481, + 421 + ], + "spans": [ + { + "bbox": [ + 217, + 406, + 481, + 421 + ], + "type": "interline_equation", + "content": "Q, K, V = X W _ {\\mathrm {a t t n}} ^ {Q}, X W _ {\\mathrm {a t t n}} ^ {K}, X W _ {\\mathrm {a t t n}} ^ {V}, \\tag {3}", + "image_path": "12c2e509a560194023e3c9f7c054515d2fd56b3d9bcaf98955f787adf5bc8465.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 227, + 423, + 481, + 436 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 423, + 481, + 436 + ], + "spans": [ + { + "bbox": [ + 227, + 423, + 481, + 436 + ], + "type": "interline_equation", + "content": "\\operatorname {h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i}, K _ {i}, V _ {i}\\right), \\tag {4}", + "image_path": "b1b181d3ed7aeb77c8af3786d03f527b032f19fe31c438a0ff923536a0af32b7.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 205, + 438, + 481, + 453 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 438, + 481, + 453 + ], + "spans": [ + { + "bbox": [ + 205, + 438, + 481, + 453 + ], + "type": "interline_equation", + "content": "\\operatorname {M H S A} (X) = \\operatorname {C o n c a t} \\left(\\operatorname {h e a d} _ {1}, \\dots , \\operatorname {h e a d} _ {h}\\right) W _ {\\mathrm {a t t n}} ^ {O}. 
\\tag {5}", + "image_path": "6c92f8fbb1734ac9686447278a5988f222bd74ae8ed75e24c4027e01e2589984.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "spans": [ + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "content": "MLP involves two linear projections " + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{mlp}}^{\\mathrm{up}} \\in \\mathbb{R}^{d \\times d'}" + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{mlp}}^{\\mathrm{down}} \\in \\mathbb{R}^{d' \\times d}" + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "inline_equation", + "content": "d' > d" + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "content": " and one non-linear activation function " + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "inline_equation", + "content": "\\sigma" + }, + { + "bbox": [ + 130, + 462, + 481, + 487 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 239, + 496, + 481, + 512 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 239, + 496, + 481, + 512 + ], + "spans": [ + { + "bbox": [ + 239, + 496, + 481, + 512 + ], + "type": "interline_equation", + "content": "\\operatorname {M L P} (X) = \\sigma \\left(X W _ {\\mathrm {m l p}} ^ {\\mathrm {u p}}\\right) W _ {\\mathrm {m l p}} ^ {\\mathrm {d o w n}}. \\tag {6}", + "image_path": "21eada93e5d2c197b0b89c7221e1d4fd26c25db03ce551d71638ac91748b153b.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 131, + 537, + 309, + 550 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 537, + 309, + 550 + ], + "spans": [ + { + "bbox": [ + 131, + 537, + 309, + 550 + ], + "type": "text", + "content": "3.2 Zero-Cost temporal modeling" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 482, + 666 + ], + "type": "text", + "content": "Applying image models to video tasks often requires the incorporation of additional modules for temporal modeling, which not only introduces additional parameters and computation, but also results in additional training costs. In this work, we address temporal modeling from three key perspectives: (1) Capability of capturing the temporal dynamics. (2) Reducing the difficulty of image-to-video adaptation. (3) Minimizing the introduction of additional computation and parameters compared to the original model. [44] suggests that most heads are redundant given the rest of the model. 
Inspired by this, we attempt to reassign some heads as temporal heads in the multi-head attention to perform temporal" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "content": "modeling tasks, while the remaining heads continue to perform spatial modeling tasks as spatial heads, thereby achieving efficient spatial-temporal modeling." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": "Spatial-temporal dual-headed attention (STDHA) As shown in Figure 2b, consider an input sequence " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "X = \\{x_{1}, x_{2}, \\dots, x_{T}\\}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "x_{t} \\in \\mathbb{R}^{n \\times d}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": ". Let the query and key-value pairs obtained after the linear projection of the " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "x_{t}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " be " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "Q^{t}, K^{t}, V^{t} \\in \\mathbb{R}^{n \\times d}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": ". We divide the " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " heads of the MHSA into two groups of size " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "h - k" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": ". 
One group of heads queries the key-value pairs at the current time " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " to perform spatial modeling, while the other group of heads queries the key-value pairs at other times " + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "inline_equation", + "content": "t + \\Delta t_{i}" + }, + { + "bbox": [ + 130, + 140, + 482, + 248 + ], + "type": "text", + "content": " to perform temporal modeling. Finally, the information from the two groups of heads is aggregated by a linear projection to perform spatial-temporal modeling:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 157, + 256, + 482, + 270 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 157, + 256, + 482, + 270 + ], + "spans": [ + { + "bbox": [ + 157, + 256, + 482, + 270 + ], + "type": "interline_equation", + "content": "\\text {S - h e a d} _ {i} = \\text {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t}, V _ {i} ^ {t}\\right), \\tag {7}", + "image_path": "f4b94b12a526a6876f3d249eb841c903d028f9737b92f52ed2d337605f1752dc.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 156, + 272, + 481, + 287 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 156, + 272, + 481, + 287 + ], + "spans": [ + { + "bbox": [ + 156, + 272, + 481, + 287 + ], + "type": "interline_equation", + "content": "\\text {T - h e a d} _ {i} = \\operatorname {A t t e n t i o n} \\left(Q _ {i} ^ {t}, K _ {i} ^ {t + \\Delta t _ {i}}, V _ {i} ^ {t + \\Delta t _ {i}}\\right) (\\Delta t _ {i} \\neq 0), \\tag {8}", + "image_path": "d294195081595ad9ce37f3b724d5b05996350a7ca62f749b66ce54c735ae44a2.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 137, + 289, + 481, + 304 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 289, + 481, + 304 + ], + "spans": [ + { + "bbox": [ + 137, + 289, + 481, + 304 + ], + "type": "interline_equation", + "content": "\\operatorname {S T D H A} (X) = \\operatorname {C o n c a t} (\\mathrm {T} - \\text {h e a d} _ {1}, \\dots , \\mathrm {T} - \\text {h e a d} _ {k}, \\mathrm {S} - \\text {h e a d} _ {k + 1} \\dots \\mathrm {S} - \\text {h e a d} _ {h}) W _ {\\text {a t t n}} ^ {O}, \\tag {9}", + "image_path": "3179a75d7b8939d471fb6e482371fcfbe15a73ffe1c79f2547f8c9715c713670.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "inline_equation", + "content": "\\Delta t_{i}" + }, + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "text", + "content": " represents the time offset of the key-value pair of the " + }, + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 312, + 479, + 371 + ], + "type": "text", + "content": "-th head. We did not directly use temporal attention or temporal convolution for the temporal modeling like previous works [38, 48, 71]. 
Instead, we design a more efficient spatiotemporal modeling operator by decoupling spatial modeling and temporal modeling to different heads:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 380, + 480, + 500 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 138, + 380, + 480, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 380, + 480, + 416 + ], + "spans": [ + { + "bbox": [ + 138, + 380, + 480, + 416 + ], + "type": "text", + "content": "- For the spatial head, it still only needs to complete the spatial modeling task as the original image transformer, which reduces the difficulty of achieving image-to-video adaptation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 417, + 480, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 417, + 480, + 500 + ], + "spans": [ + { + "bbox": [ + 138, + 417, + 480, + 500 + ], + "type": "text", + "content": "- For the temporal head, it actually implements the inter-frame attention mechanism with frames at different times. [74] have demonstrated the effectiveness of an inter-frame attention mechanism for modeling motion information, which is crucial for action recognition tasks. In addition, as shown in Table 1c, we can achieve both short-distance and long-distance modeling by controlling the " + }, + { + "bbox": [ + 138, + 417, + 480, + 500 + ], + "type": "inline_equation", + "content": "\\Delta t_{i}" + }, + { + "bbox": [ + 138, + 417, + 480, + 500 + ], + "type": "text", + "content": " of the temporal head, which enables us to achieve enhanced temporal modeling capabilities." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "content": "Comparison with other zero-cost operators There have been several previous attempts [6, 66, 75] to use image transformers to achieve efficient temporal modeling at zero parameters and zero computation. For example, [6] achieves approximations to full space-time attention by mixing tokens from adjacent frames. [75] performs temporal modeling by using channel shift on thecls tokens of different frames. [66] mixes information from adjacent frames using temporal patch shift and temporal channel shift before MHSA. However, these methods do not take advantage of the inherent characteristics of the transformer structure. By decoupling the learning of spatial and temporal information with head relocation, STDHA maintains the purity of key-value pair information within the same head, thereby achieving better spatial-temporal information learning than other zero-cost temporal modules. And STDHA simultaneously captures both short-range and long-range dependencies, rather than being limited to" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "type": "text", + "content": "X. Li et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 140 + ], + "type": "text", + "content": "adjacent frames. As shown in Table 1, these two key distinctions enable our STDHA to achieve superior spatial-temporal modeling." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 131, + 156, + 428, + 168 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 156, + 428, + 168 + ], + "spans": [ + { + "bbox": [ + 131, + 156, + 428, + 168 + ], + "type": "text", + "content": "3.3 Zero Extra Inference Cost image-to-video adaptation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 173, + 479, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 173, + 479, + 243 + ], + "spans": [ + { + "bbox": [ + 130, + 173, + 479, + 243 + ], + "type": "text", + "content": "Inspired by LoRA [22], we can fine-tune the model using a linear structure and then merge it with the original model during inference. However, to deal with the domain gap between images and videos, previous works [38,48,71] often use nonlinear structures to achieve stronger transfer capabilities. Therefore, we need to further consider how to achieve effective image-to-video transfer using only a linear structure." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "spans": [ + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "text", + "content": "Layer merging via structural reparameterization Let " + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{old}}" + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "text", + "content": " represent the frozen weights of the original model, and " + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{new}}" + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "text", + "content": " represent the new trainable weights. Reviewing the structure of LoRA, it uses a low-rank decomposition matrix " + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{LoRA}}" + }, + { + "bbox": [ + 130, + 245, + 479, + 293 + ], + "type": "text", + "content": " parallel to the original weights:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 209, + 300, + 481, + 312 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 209, + 300, + 481, + 312 + ], + "spans": [ + { + "bbox": [ + 209, + 300, + 481, + 312 + ], + "type": "interline_equation", + "content": "W _ {\\text {n e w}} = W _ {\\text {L o R A}} + W _ {\\text {o l d}} = W _ {\\text {u p}} W _ {\\text {d o w n}} + W _ {\\text {o l d}}. \\tag {10}", + "image_path": "0bf214628af4080635f14e43e86207325c35696f67cf7d041f46bb0777306cf6.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 319, + 479, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 319, + 479, + 354 + ], + "spans": [ + { + "bbox": [ + 130, + 319, + 479, + 354 + ], + "type": "text", + "content": "In this work, we use a serial linear structure called Linear Adapter to fine-tune the original parameters. 
As shown in Figure 2a, we use structural reparameterization to perform layer merging after training:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 205, + 361, + 481, + 374 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 361, + 481, + 374 + ], + "spans": [ + { + "bbox": [ + 205, + 361, + 481, + 374 + ], + "type": "interline_equation", + "content": "W _ {\\text {n e w}} = W _ {\\text {A d a p t e r}} W _ {\\text {o l d}} = \\left(I + W _ {\\text {u p}} W _ {\\text {d o w n}}\\right) W _ {\\text {o l d}}, \\tag {11}", + "image_path": "27a149e59236731c61413274b707966b4783d3cdd5b57f308d725980dc5b2f1f.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "spans": [ + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "inline_equation", + "content": "I" + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "content": " is the identity matrix, " + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{up}} \\in \\mathbb{R}^{m \\times k}" + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{down}} \\in \\mathbb{R}^{k \\times n}" + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "content": ", bottleneck width " + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "inline_equation", + "content": "k \\ll \\min(m, n)" + }, + { + "bbox": [ + 130, + 380, + 479, + 453 + ], + "type": "text", + "content": ". As seen in Table 2, compared to parallel structures, serial structures can be more flexibly inserted into the network structure (e.g., for non-square matrices, under the same bottleneck dimension, using LoRA requires a larger number of parameters compared to Linear Adapter), which endows it with better transfer capabilities." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 453, + 480, + 585 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 453, + 480, + 585 + ], + "spans": [ + { + "bbox": [ + 130, + 453, + 480, + 585 + ], + "type": "text", + "content": "Full adaptation with densely placed linear adapters By observing the structure of MHSA and MLP, we can see that all their trainable parameters concentrate on the linear projections at both ends of the structure. Therefore, fine-tuning the model essentially updates these linear projections. Previous works [48, 71] often selectively tune part of the parameters (e.g., placing only an adapter before MHSA) instead of tuning all parameters to avoid excessive additional computational and parameter costs, while we can achieve zero-cost full adaptation by tuning all parameters through wrapping MHSA and MLP with linear adapters. Table 2 shows that full adaptation enables us to achieve excellent image-to-video transfer performance with a linear structure, compensating for the performance degradation caused by the removal of nonlinearity." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 131, + 601, + 230, + 615 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 601, + 230, + 615 + ], + "spans": [ + { + "bbox": [ + 131, + 601, + 230, + 615 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 131, + 624, + 255, + 636 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 624, + 255, + 636 + ], + "spans": [ + { + "bbox": [ + 131, + 624, + 255, + 636 + ], + "type": "text", + "content": "4.1 Experiments setup" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 641, + 479, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 479, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 479, + 666 + ], + "type": "text", + "content": "We evaluate our method on five widely-used video recognition benchmarks: two large-scale datasets, namely Kinetics-400 (K400) [8] and Something-Something V2" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 145, + 194, + 293, + 270 + ], + "blocks": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "lines": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "spans": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "Table 1: Ablation study on STDHA. Most of the symbols in the table have been declared in the methodology section 3. (a) " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "R_{c}" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": " denotes channel change ratio, \"Shift\" refers to temporal channel shift, while \"HR\" denotes head relocation as used by STDHA. (b) We use a multiset to represent the time offsets of different heads (e.g., \"1·2\" means that there are 2 heads with " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "\\Delta t = 1" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "). When " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "\\Delta t = 0" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": ", it represents a spatial head. (c) \"Temporal RF\" refers to the temporal receptive field of a single STDHA." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 145, + 194, + 293, + 270 + ], + "lines": [ + { + "bbox": [ + 145, + 194, + 293, + 270 + ], + "spans": [ + { + "bbox": [ + 145, + 194, + 293, + 270 + ], + "type": "table", + "html": "
<table><tr><td>Rc</td><td>Method</td><td>Top-1</td></tr>
<tr><td rowspan="5">1/6</td><td>[cls] token shift</td><td>61.4</td></tr>
<tr><td>Shift QKV</td><td>64.5</td></tr>
<tr><td>Shift KV</td><td>64.6</td></tr>
<tr><td>HR QKV</td><td>64.8</td></tr>
<tr><td>HR KV (STDHA)</td><td>66.0</td></tr>
<tr><td rowspan="2">1/4</td><td>Shift KV</td><td>64.0</td></tr>
<tr><td>HR KV (STDHA)</td><td>65.8</td></tr></table>
", + "image_path": "77e78c5e5d8e6666a631fe257a0ad7666c3e5b508a0d0a8251683faa375a6786.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 304, + 194, + 479, + 271 + ], + "blocks": [ + { + "bbox": [ + 143, + 271, + 291, + 281 + ], + "lines": [ + { + "bbox": [ + 143, + 271, + 291, + 281 + ], + "spans": [ + { + "bbox": [ + 143, + 271, + 291, + 281 + ], + "type": "text", + "content": "(a) Compare temporal modeling methods" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 194, + 479, + 271 + ], + "lines": [ + { + "bbox": [ + 304, + 194, + 479, + 271 + ], + "spans": [ + { + "bbox": [ + 304, + 194, + 479, + 271 + ], + "type": "table", + "html": "
<table><tr><td>Backbone</td><td>Δt of heads</td><td>k</td><td>Top-1</td></tr>
<tr><td rowspan="4">ViT-B (h=12)</td><td>{1·1/2, -1·1/2, 0·11}</td><td>1</td><td>64.8</td></tr>
<tr><td>{1·1, -1·1, 0·10}</td><td>2</td><td>66.0</td></tr>
<tr><td>{1·2, -1·2, 0·8}</td><td>4</td><td>65.6</td></tr>
<tr><td>{1·3, -1·3, 0·6}</td><td>6</td><td>65.6</td></tr>
<tr><td rowspan="3">ViT-L (h=16)</td><td>{1·1, -1·1, 0·14}</td><td>2</td><td>67.7</td></tr>
<tr><td>{1·2, -1·2, 0·12}</td><td>4</td><td>68.5</td></tr>
<tr><td>{1·3, -1·3, 0·10}</td><td>6</td><td>68.3</td></tr></table>
", + "image_path": "b1978b5d64621eb435eb7d57bd203031523c2e613fb4d5f2bfe92c08569c0a57.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 170, + 291, + 447, + 427 + ], + "blocks": [ + { + "bbox": [ + 318, + 273, + 460, + 282 + ], + "lines": [ + { + "bbox": [ + 318, + 273, + 460, + 282 + ], + "spans": [ + { + "bbox": [ + 318, + 273, + 460, + 282 + ], + "type": "text", + "content": "(b) Effect of the temporal head number" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 170, + 291, + 447, + 427 + ], + "lines": [ + { + "bbox": [ + 170, + 291, + 447, + 427 + ], + "spans": [ + { + "bbox": [ + 170, + 291, + 447, + 427 + ], + "type": "table", + "html": "
<table><tr><td>Frames</td><td>Δt of heads</td><td>Temporal RF</td><td>Top-1</td></tr>
<tr><td rowspan="4">8</td><td>{1·1,0·11}</td><td>2</td><td>64.7</td></tr>
<tr><td>{1·1,-1·1,0·10}</td><td>3</td><td>66.0</td></tr>
<tr><td>{1·1,-1·1,2·1,0·9}</td><td>4</td><td>65.5</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,0·8}</td><td>5</td><td>65.7</td></tr>
<tr><td rowspan="5">16</td><td>{1·1,-1·1,0·10}</td><td>3</td><td>67.2</td></tr>
<tr><td>{1·1,-1·1,2·1,0·9}</td><td>4</td><td>67.3</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,0·8}</td><td>5</td><td>67.8</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,3·1,0·7}</td><td>6</td><td>67.6</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6}</td><td>7</td><td>67.3</td></tr>
<tr><td rowspan="6">32</td><td>{1·1,-1·1,0·10}</td><td>3</td><td>67.3</td></tr>
<tr><td>{1·1,-1·1,2·1,0·9}</td><td>4</td><td>67.8</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,0·8}</td><td>5</td><td>68.5</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,3·1,0·7}</td><td>6</td><td>68.6</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,3·1,-3·1,0·6}</td><td>7</td><td>68.4</td></tr>
<tr><td>{1·1,-1·1,2·1,-2·1,3·1,-3·1,4·1,0·5}</td><td>8</td><td>68.2</td></tr></table>
", + "image_path": "3c6f52f3ba26961259d3eac1a2cbbc7bf24371f589cbe1ee3dec6d738df4d1f2.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 187, + 429, + 424, + 438 + ], + "lines": [ + { + "bbox": [ + 187, + 429, + 424, + 438 + ], + "spans": [ + { + "bbox": [ + 187, + 429, + 424, + 438 + ], + "type": "text", + "content": "(c) Effect of the temporal receptive field at different input lengths." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 130, + 471, + 482, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 471, + 482, + 544 + ], + "spans": [ + { + "bbox": [ + 130, + 471, + 482, + 544 + ], + "type": "text", + "content": "(SSv2) [16], in addition to three smaller-scale datasets, UCF101 [54], HMDB51 [25] and Diving48 [35]. We also evaluate our method on action detection dataset AVA [17]. This diverse dataset selection allows for a comprehensive evaluation of our model across various scales and domains. The specific model configuration and training strategy can be found in the supplementary. For most main experiments, we use ViT-B and ViT-L pre-trained by CLIP [51] as our backbone models." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 131, + 561, + 236, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 561, + 236, + 574 + ], + "spans": [ + { + "bbox": [ + 131, + 561, + 236, + 574 + ], + "type": "text", + "content": "4.2 Ablation study" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 581, + 482, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 581, + 482, + 628 + ], + "spans": [ + { + "bbox": [ + 130, + 581, + 482, + 628 + ], + "type": "text", + "content": "To validate the effectiveness of our method on image-to-video transfer and temporal modeling, we first conduct ablation experiments on the SSv2 dataset. All ablation experiments were performed using ViT-B/16 with 8 input frames unless specified." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 629, + 482, + 666 + ], + "type": "text", + "content": "Effectiveness of STDHA Table 1a compares STDHA with other zero-cost temporal modeling methods. The [cls] token shift is implemented according to the original paper [75], with [cls] token shift performed before MHSA and MLP." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 135, + 193, + 479, + 414 + ], + "blocks": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "lines": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "spans": [ + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "Table 2: Comparison of adaption strategies. \"Width\" refers to the bottleneck width of LoRA/Adapter. \"Tunable Params\" refers to extra trainable parameters besides the parameters of the ViT backbone and linear classifier. \"" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "\\checkmark" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "\" and \"" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "\\times" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "\" indicate whether the corresponding weights have undergone fine-tuning, and \"" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "\\checkmark" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": "\" indicates that " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{attn}}^{Q}" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{attn}}^{K}" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{attn}}^{V}" + }, + { + "bbox": [ + 130, + 118, + 482, + 184 + ], + "type": "text", + "content": " share the same adapter. \"Latency\" refers to inference latency with 3 samples. All results are obtained using the same V100-32G with PyTorch-built mixed precision." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 135, + 193, + 479, + 414 + ], + "lines": [ + { + "bbox": [ + 135, + 193, + 479, + 414 + ], + "spans": [ + { + "bbox": [ + 135, + 193, + 479, + 414 + ], + "type": "table", + "html": "
MethodWeights of ViT blockTunable \nParams(M)Bottleneck \nWidthLatencySSv2 \n(ms)Top-1
WQattnWKattnWVattnWOattnWupmlpWdownmlp
Full Fine-tuning86-28.963.2
Linear ProbeXXXXXX0-28.920.0
Only tuning temporal headXX4.6-28.959.6
ST-Adapter [48]1419241.066.2
XX1438438.865.8
LoRA [22]XXXX719264.2
XX1419265.0
XX2519264.3
XX1712828.965.6
3219265.0
2112865.5
Adapter w/ GELU79637.365.6
XX719234.964.6
X1019236.366.1
1419238.466.1
Linear Adapter (Ours)79665.0
XX719264.4
X1019228.965.2
1419266.0
2019266.3
1412866.2
", + "image_path": "ed9ffc6cd3d3537e1a88bb302a8738368f80d3b014080819bd6563bbf0d5de0d.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 456, + 482, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 456, + 482, + 504 + ], + "spans": [ + { + "bbox": [ + 130, + 456, + 482, + 504 + ], + "type": "text", + "content": "The temporal channel shift operation refers to TPS [66], which shifts a portion of the channels for each head. It can be seen that STDHA significantly outperforms other methods at the same channel change ratio, demonstrating the importance of preserving the purity of information within each head." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "spans": [ + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "text", + "content": "Effect of the number of temporal heads and temporal receptive field We examined the influence of the number of temporal heads and the temporal receptive field in ViT-B and ViT-L. Our findings, detailed in Tables 1b and 1c, suggest that the optimal proportion of temporal heads in ViT lies between " + }, + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "inline_equation", + "content": "1/6" + }, + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "inline_equation", + "content": "1/4" + }, + { + "bbox": [ + 130, + 513, + 482, + 609 + ], + "type": "text", + "content": ". For the temporal receptive field, our results indicate that for 8-frame inputs, a field of 3 is sufficient, while for longer inputs (16/32 frames), performance improves with an increase in the field from 3, saturating at around 5 or 6. Hence, we employ different STDHA configurations based on input length." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 482, + 666 + ], + "type": "text", + "content": "Comparison of adaptation strategies In Table 2, we compare the image-to-video transfer ability of our method with a diverse range of adaptation methods. For a fair comparison, we all use STDHA with the same setting to provide temporal modeling capabilities. 
From the results, we can observe that:" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 151, + 182, + 462, + 399 + ], + "blocks": [ + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "lines": [ + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "spans": [ + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "type": "text", + "content": "Table 3: Results on Kinetics-400 validation set. Views = #frames × #spatial crops × #temporal clips. \"GFLOPs\" means " + }, + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "type": "inline_equation", + "content": "10^{9}" + }, + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "type": "text", + "content": " FLOPs, \"M\" means " + }, + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "type": "inline_equation", + "content": "10^{6}" + }, + { + "bbox": [ + 130, + 118, + 482, + 173 + ], + "type": "text", + "content": ". \"Extra GLOPs\" refers to the extra computation added to the original ViT under the same number of views. \"New Params\" refers to additional parameters during inference besides the parameters of the original ViT backbone and linear classifier." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 151, + 182, + 462, + 399 + ], + "lines": [ + { + "bbox": [ + 151, + 182, + 462, + 399 + ], + "spans": [ + { + "bbox": [ + 151, + 182, + 462, + 399 + ], + "type": "table", + "html": "
MethodsPretrainViewsGFLOPsExtra GFLOPsParam (M)New Param(M)Top-1Top-5
Methods with full fine-tuning
UniFormer-B [28]IN1K32×3×43108-50-83.095.4
TimeSformer-L [4]IN21K96×3×17140-121-80.794.7
VideoSwin-L [41]IN21K32×3×47248-197-83.195.9
MViTv2-L(↑312) [34]IN21K40×5×342420-218-86.197.0
ViViT-L/16x2 FE [1]JFT32×3×111940-311-83.594.3
MTV-L [70]JFT32×3×418050-876-84.396.3
ViT-B/16 [48]CLIP8×1×3422086081.095.5
ActionCLIP-B/16 [62]CLIP32×3×1016893131425683.897.1
X-CLIP ViT-L/14 [45]CLIP8×3×4789610742011687.197.6
Text4Vis ViT-L/14 [65]CLIP32×3×419944-3474387.197.4
Methods with PETL
VideoPrompt ViT-B/16 [24]CLIP16×5×1----76.993.5
ST-Adapter ViT-B/16 [48]IN21K8×1×34553393776.6-
ST-Adapter ViT-L/14 [48]CLIP32×1×382483221987.297.6
EVL ViT-B/16 [38]IN21K8×1×3454321152975.4-
EVL ViT-L/14 [38]CLIP8×1×32022763625886.3-
AIM ViT-B/16 [71]IN21K8×1×36242021001478.8-
AIM ViT-L/14 [71]CLIP32×1×31120834253413887.597.7
ZeroI2V ViT-B/16IN21K8×1×3422086078.6-
ZeroI2V ViT-B/16CLIP8×1×3422086083.095.8
ZeroI2V ViT-B/16CLIP16×1×3844086083.496.2
ZeroI2V ViT-B/16CLIP32×1×31688086083.796.4
ZeroI2V ViT-L/14CLIP8×1×319460304086.397.4
ZeroI2V ViT-L/14CLIP16×1×338920304086.897.6
ZeroI2V ViT-L/14CLIP32×1×377830304087.297.6
", + "image_path": "8be562ba9123b4ca638fee97108d89059fe13ea3143983144fd9a9d63d7dd1c7.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 137, + 430, + 481, + 647 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 137, + 430, + 481, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 430, + 481, + 467 + ], + "spans": [ + { + "bbox": [ + 137, + 430, + 481, + 467 + ], + "type": "text", + "content": "- Even with minimal parameters being fine-tuned, our Linear Adapter significantly outperforms full fine-tuning (66.3 vs 63.2). Despite updating the fewest parameters, the linear probe performs poorly in image-to-video transfer." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 137, + 471, + 481, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 471, + 481, + 519 + ], + "spans": [ + { + "bbox": [ + 137, + 471, + 481, + 519 + ], + "type": "text", + "content": "- Tuning only the temporal head achieves about " + }, + { + "bbox": [ + 137, + 471, + 481, + 519 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 137, + 471, + 481, + 519 + ], + "type": "text", + "content": " of the full fine-tuning performance, suggesting that extensive fine-tuning of the spatial head may not be necessary to attain satisfactory transfer performance due to the decoupling of spatial and temporal modeling reduces the difficulty of adaptation." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 137, + 522, + 481, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 522, + 481, + 571 + ], + "spans": [ + { + "bbox": [ + 137, + 522, + 481, + 571 + ], + "type": "text", + "content": "- Our Full Adaptation strategy is not only effective for linear adapters, but also for non-linear adapters such as the ST-Adapter and GELU Adapter. It not only enhances their adaptation performance, but also eliminates the performance gap between linear and non-linear structures." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 137, + 575, + 481, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 575, + 481, + 647 + ], + "spans": [ + { + "bbox": [ + 137, + 575, + 481, + 647 + ], + "type": "text", + "content": "- Due to the inflexibility of the parallel structure, for non-square matrices like " + }, + { + "bbox": [ + 137, + 575, + 481, + 647 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{mlp}}" + }, + { + "bbox": [ + 137, + 575, + 481, + 647 + ], + "type": "text", + "content": ", LoRA requires more parameters under the same bottleneck width. It needs to decrease the bottleneck width of the low-rank matrix to align it with the number of parameters of the linear adapter. However, this reduction in bottleneck width can limit its adaptation ability, ultimately leading to results that are significantly worse than those of the Linear Adapter." 
+ } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 150, + 161, + 462, + 370 + ], + "blocks": [ + { + "bbox": [ + 131, + 118, + 482, + 150 + ], + "lines": [ + { + "bbox": [ + 131, + 118, + 482, + 150 + ], + "spans": [ + { + "bbox": [ + 131, + 118, + 482, + 150 + ], + "type": "text", + "content": "Table 4: Results on Something-Something v2 validation set. " + }, + { + "bbox": [ + 131, + 118, + 482, + 150 + ], + "type": "inline_equation", + "content": "\\dagger" + }, + { + "bbox": [ + 131, + 118, + 482, + 150 + ], + "type": "text", + "content": " indicates that the model is pre-trained on both IN21K (except for Uniformer [28] which uses IN1K) and K400/K600. Other notations are the same as Table 3." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 150, + 161, + 462, + 370 + ], + "lines": [ + { + "bbox": [ + 150, + 161, + 462, + 370 + ], + "spans": [ + { + "bbox": [ + 150, + 161, + 462, + 370 + ], + "type": "table", + "html": "
MethodsPretrainViewsGFLOPsExtra GFLOPsParam (M)New Param(M)Top-1Top-5
Methods with full fine-tuning
TimeSformer-L [4]IN21K64×3×17140-121-62.4-
ViViT-L [1]K400†16×3×411892-311-65.489.8
MTV-B(↑320) [70]K400†32×3×411160-310-68.590.4
VideoSwin-B [41]K400†32×3×1963-89-69.692.7
MViTv2-L(↑312) [34]K400†40×3×18484-213-73.394.1
UniFormer-B [28]K600†32×3×1777-50-71.292.8
ViT-L/14 [12]CLIP8×3×119460304048.777.5
ILA ViT-L/14 [58]CLIP8×3×410884310052922567.890.5
Methods with PETL
ST-Adapter ViT-B/16 [48]IN21K8×3×14553393762.8-
ST-Adapter ViT-B/16 [48]CLIP32×3×119552671001469.592.6
EVL ViT-L/14 [38]CLIP32×3×19641185847917566.7-
AIM ViT-B/16IN21K8×3×16242021001462.0-
AIM ViT-L/14 [71]CLIP32×3×11150837253545070.692.7
ZeroI2V ViT-B/16IN21K8×3×1422086065.3-
ZeroI2V ViT-B/16CLIP8×3×1422086067.790.8
ZeroI2V ViT-B/16CLIP16×3×1844086069.491.7
ZeroI2V ViT-B/16CLIP32×3×11688086070.192.4
ZeroI2V ViT-L/14CLIP8×3×119460304070.191.8
ZeroI2V ViT-L/14CLIP16×3×138920304071.493.0
ZeroI2V ViT-L/14CLIP32×3×177830304072.293.0
", + "image_path": "b38f6260f91c7334d45b491f32df097bef01ab6da72cfa0b0d6fd63ee3101062.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 393, + 309, + 404 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 393, + 309, + 404 + ], + "spans": [ + { + "bbox": [ + 132, + 393, + 309, + 404 + ], + "type": "text", + "content": "4.3 Fully-supervised Experiments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 414, + 482, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 414, + 482, + 605 + ], + "spans": [ + { + "bbox": [ + 130, + 414, + 482, + 605 + ], + "type": "text", + "content": "Results on K400 As shown in Table 3, our method has significant advantages over traditional full fine-tuning methods, achieving better performance with much lower computational cost. For example, our ZeroI2V ViT-L/14 with an input of 8 frames outperforms MViTv2 [34] (86.3 vs 86.1), while requiring more than 20 times fewer GFLOPs (1946 vs 42420). Compared to multi-modal methods such as ActionCLIP [62] and X-CLIP [45], which require an additional text branch and fine-tune the entire model end-to-end, our ZeroI2V can achieve comparable performance using only the visual encoder. Moreover, although our proposed ZeroI2V doesn't increase computational or parameter costs during inference compared with the previous PETL method, it can still achieve similar or even better performance. For example, on ViT-B/16, ZeroI2V with an input of 8 frames can surpass ST-Adapter [48] with an input of 32 frames (83.0 vs 82.7) with much lower GFLOPs (422 vs 1821). On ViT-L/14, ZeroI2V achieves the same performance as EVL [38], which requires an additional 58M parameters. And ZeroI2V achieves comparable performance to AIM [71] (87.2 vs 87.5) with a nearly " + }, + { + "bbox": [ + 130, + 414, + 482, + 605 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 130, + 414, + 482, + 605 + ], + "type": "text", + "content": " reduction in GFLOPs (7783 vs 11208)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 605, + 482, + 665 + ], + "type": "text", + "content": "Results on SSv2 As shown in Table 4, thanks to the effectiveness of STDHA in temporal modeling, our method outperforms most full fine-tuning methods, even though many of them have been pre-trained on the Kinetics dataset. 
Our ZeroI2V has a significant improvement compared to directly full fine-tuning ViT-L/16 pre-trained with CLIP (70.1 vs 48.7) with the same number of parameters" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 159, + 171, + 452, + 331 + ], + "blocks": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "lines": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "spans": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "type": "text", + "content": "Table 5: Comparing the state-of-the-art video recognition methods on UCF101, HMDB51 and Diving48. For UCF101 and HMDB51, we test our method and report the 3-split mean Top-1 accuracy for both datasets following ST-Adapter [48]. And for Diving48, we test our method with 1 temporal clip following AIM [71]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 159, + 171, + 452, + 331 + ], + "lines": [ + { + "bbox": [ + 159, + 171, + 452, + 331 + ], + "spans": [ + { + "bbox": [ + 159, + 171, + 452, + 331 + ], + "type": "table", + "html": "
MethodPretrainUCF101HMDB51Diving48
Methods with full fine-tuning
I3D [8]ImageNet+K40095.674.8-
S3D [67]ImageNet+K40096.875.9-
SlowOnly-8x8-R101 [15]Kinetics+OmniSource97.379.0-
TimeSformer-L [4]IN21K--81.0
VideoSwin-B [41]IN21K--81.9
Methods with PETL
VideoPrompt [24]CLIP93.666.4-
AIM ViT-B/16 [71]CLIP--88.9
AIM ViT-L/14 [71]CLIP--90.6
ST-Adapter ViT-B/16 [48]CLIP+K40096.477.7-
ST-Adapter ViT-L/14 [48]CLIP+K40098.181.7-
ZeroI2V ViT-B/16CLIP95.673.789.7
ZeroI2V ViT-B/16CLIP+K40097.778.5-
ZeroI2V ViT-L/14CLIP97.879.991.4
ZeroI2V ViT-L/14CLIP+K40098.683.4-
", + "image_path": "c0115593e5e7cd839f39d1bfe4e5150a96be4f94cf6eb2925dfc657b4b6cbdbb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 159, + 365, + 452, + 437 + ], + "blocks": [ + { + "bbox": [ + 145, + 344, + 467, + 356 + ], + "lines": [ + { + "bbox": [ + 145, + 344, + 467, + 356 + ], + "spans": [ + { + "bbox": [ + 145, + 344, + 467, + 356 + ], + "type": "text", + "content": "Table 6: Comparing the SoTA action detection methods on AVA 2.2." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 159, + 365, + 452, + 437 + ], + "lines": [ + { + "bbox": [ + 159, + 365, + 452, + 437 + ], + "spans": [ + { + "bbox": [ + 159, + 365, + 452, + 437 + ], + "type": "table", + "html": "
MethodPretrainFrozen BackboneFramesmAP
SlowFast-R101 [15]K400823.8
MViTv2-B [34]K4003228.1
VideoMAE-B [56]K4001631.8
VideoMAE-B [56]K400 w/o labels1626.7
CLIP ViT-B/16CLIP818.3
ZeroI2V ViT-B/16CLIP826.4
", + "image_path": "4f1624dc2080bbb2e5a7e54a7ae853fb940326ecffe9dc182889ae29ef409158.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "spans": [ + { + "bbox": [ + 130, + 461, + 482, + 521 + ], + "type": "text", + "content": "and computation. Compared to other PETL methods, ZeroI2V outperforms ST-Adapter [48] on ViT-B/16 (70.1 vs 69.5) with lower GFLOPs (1688 vs 1955). Additionally, ZeroI2V significantly surpasses both AVL [38] and AIM [71] (71.4 vs 66.7, 70.6) on ViT-L/14 with much lower GFLOPs (3892 vs 9641, 11508) and new parameters (0M vs 175M, 50M)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 521, + 482, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 521, + 482, + 569 + ], + "spans": [ + { + "bbox": [ + 130, + 521, + 482, + 569 + ], + "type": "text", + "content": "Results on smaller datasets As shown in Table 5, on three relatively small datasets, our method achieves state-of-the-art performance on UCF101, HMDB51, and Diving48. This demonstrates a clear performance advantage over both full-finetuning methods and PETL methods previously." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 665 + ], + "type": "text", + "content": "Results on action detection In addition to the task of action recognition, to understand the capability of our method in fine-grained spatial understanding, we also evaluate our method on action detection dataset AVA [17]. Following the setting of VideoMAE [56], we evaluate the top 60 common classes using the mean Average Precision (mAP) as the metric under an IoU threshold of 0.5. As shown in Table 6, compared to using the original image CLIP features, our ZeroI2V achieved a significant performance improvement (26.4 vs 18.3) with the same number of parameters and computation. It's noteworthy that our method was not" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 151, + 148, + 462, + 204 + ], + "blocks": [ + { + "bbox": [ + 132, + 117, + 479, + 129 + ], + "lines": [ + { + "bbox": [ + 132, + 117, + 479, + 129 + ], + "spans": [ + { + "bbox": [ + 132, + 117, + 479, + 129 + ], + "type": "text", + "content": "Table 7: Comparing the SoTA video recognition methods on the VidTAB [32]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 151, + 148, + 462, + 204 + ], + "lines": [ + { + "bbox": [ + 151, + 148, + 462, + 204 + ], + "spans": [ + { + "bbox": [ + 151, + 148, + 462, + 204 + ], + "type": "table", + "html": "
# Pretrain DataAvgActionScienceSafetyQualityEmotion
DS LVMS ABHC FFQAEA
CLIP ViT-L/14 [51]CLIP42.831.2 38.032.3 36.350.3 58.567.728.1
ViCLIP ViT-L/14 [64]CLIP+InternVid200M42.736.7 43.930.2 36.846.9 54.865.427.2
ST-Adapter ViT-L/14 [48]CLIP46.943.0 45.031.2 39.449.4 64.972.329.9
ZeroI2V ViT-L/14CLIP46.541.3 46.831.2 39.347.2 64.670.630.6
", + "image_path": "cb6b9c7fdfa9f43bbfef00e6968ea20beaeba50e93c0cfcb4e644d923731133e.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 141, + 270, + 470, + 323 + ], + "blocks": [ + { + "bbox": [ + 130, + 227, + 482, + 262 + ], + "lines": [ + { + "bbox": [ + 130, + 227, + 482, + 262 + ], + "spans": [ + { + "bbox": [ + 130, + 227, + 482, + 262 + ], + "type": "text", + "content": "Table 8: Inference latency and throughput. All results are obtained using the same V100-32G with PyTorch-built mixed precision, using a batch size of 1 to measure latency and the optimal possible batch size to measure throughput before out of memory." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 141, + 270, + 470, + 323 + ], + "lines": [ + { + "bbox": [ + 141, + 270, + 470, + 323 + ], + "spans": [ + { + "bbox": [ + 141, + 270, + 470, + 323 + ], + "type": "table", + "html": "
ModelViewsGFLOPsLatency (ms)Throughput (V/s)K400 (Top-1)SSv2 (Top-1)
Uniformer-B [28]32×41036245.384.2482.9-
EVL ViT-B/16 [38]8×345453.8724.0482.961.0
ViT-B/16 [12]8×342228.7240.0881.044.0
ZeroI2V ViT-B/168×342228.8940.0883.067.7
", + "image_path": "258b6f3c75f632e0c55344aef2e8ccf78c606de2e9424c97386e8a808bb2aec6.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 345, + 482, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 345, + 482, + 417 + ], + "spans": [ + { + "bbox": [ + 130, + 345, + 482, + 417 + ], + "type": "text", + "content": "pre-trained on action recognition datasets such as Kinetics. Instead, we directly applied image-to-video transfer on the AVA dataset. Remarkably, our method still managed to achieve performance on par with full-finetuning methods and self-supervised methods that underwent pre-training using the Kinetics dataset, even when using only 8 frames as input. In summary, our ZeroI2V demonstrates outstanding potential in video tasks beyond recognition." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 436, + 272, + 449 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 436, + 272, + 449 + ], + "spans": [ + { + "bbox": [ + 132, + 436, + 272, + 449 + ], + "type": "text", + "content": "4.4 Few-shot Experiments" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 457, + 482, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 457, + 482, + 529 + ], + "spans": [ + { + "bbox": [ + 130, + 457, + 482, + 529 + ], + "type": "text", + "content": "To demonstrate the adaptation capability of our method in few-shot scenarios, we conduct experiments on the Video Task Adaptation Benchmark (VidTAB). As show in Table 7 The results show that our method can effectively enhance the adaptation of the image model to video tasks using only a few samples. Compared to ST-Adapter [48], our approach achieves comparable results while enjoying the advantage of parameter and inference efficiency." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 548, + 253, + 561 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 548, + 253, + 561 + ], + "spans": [ + { + "bbox": [ + 132, + 548, + 253, + 561 + ], + "type": "text", + "content": "4.5 Efficiency analysis" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 570, + 482, + 666 + ], + "type": "text", + "content": "Comparison of inference efficiency We compared the inference efficiency of our method with other methods on the same hardware device. As shown in Table 8, under comparable accuracy, the throughput of our method is 10 times that of Uniformer [28], Compared to the original ViT-B, our method introduces negligible additional latency during inference while achieving superior performance. In comparison with AVL [38], it can also be seen that the impact of the additional computational module on the actual runtime latency (28.89 ms vs 53.87 ms) is greater than that reflected by GFLOPs (422 vs 454)." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 141, + 171, + 470, + 282 + ], + "blocks": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "lines": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "spans": [ + { + "bbox": [ + 130, + 118, + 482, + 162 + ], + "type": "text", + "content": "Table 9: Comparison of training cost. Our results are obtained using the same V100-32G with PyTorch-built mixed precision, following AVL [38]. \"†\" indicates that the epoch is estimated based on the batch size and training steps of the original paper. \"Memory\" refers to the GPU memory usage when the batch size is 8." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 141, + 171, + 470, + 282 + ], + "lines": [ + { + "bbox": [ + 141, + 171, + 470, + 282 + ], + "spans": [ + { + "bbox": [ + 141, + 171, + 470, + 282 + ], + "type": "table", + "html": "
Model (Frames)DatasetTraining EpochsTraining GPU HoursTunable Param (M)Memory (G)Top-1
Uniformer-B [28] (32)K4001105000 × V10050-82.9
ActionCLIP ViT-B/16 [62] (16)K40050480 × RTX3090142-82.6
EVL ViT-B/16 [38] (8)K40053†60 × V100292.282.9
SSv246†75 × V100985.661.0
ST-Adapter ViT-B/16 [48] (8)K40011†23 × V10076.982.0
SSv238†60 × V100147.667.1
AIM ViT-B/16 [71] (8)K40030120 × V100118.783.9
SSv250150 × V100149.066.4
ZeroI2V ViT-B/16 (8)K40040100 × V100147.683.0
SSv25090 × V100147.667.3
", + "image_path": "91509a209cb94e71abfa21d903f3999ff64988b1abcee7a3525056e5d6ef9794.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 302, + 482, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 302, + 482, + 470 + ], + "spans": [ + { + "bbox": [ + 130, + 302, + 482, + 470 + ], + "type": "text", + "content": "Comparison of training cost We compared the training cost of our method with previous methods in Table 9. It can be seen that compared to previous full fine-tuning methods such as Uniformer [28] and ActionCLIP [62], our method significantly reduces training cost. Compared to the previous PETL method, our method does not have a significant advantage in training efficiency due to the use of dense adapters. AVL [38], which does not need to insert adapters into the frozen backbone, avoids some of the cost of backpropagation and therefore has lower memory usage. ST-Adapter [48], due to its fewer trainable parameters, has a faster convergence speed, but its memory usage is close to our method. Nonetheless, in contrast to AIM [71] that imposes an additional computational burden for temporal modeling, our STDHA method, which does not introduce extra learnable parameters, ensures that ZeroI2V maintains superior training efficiency. We believe that it is worthwhile and acceptable to exchange some training costs for a reduction in inference costs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 487, + 227, + 499 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 487, + 227, + 499 + ], + "spans": [ + { + "bbox": [ + 132, + 487, + 227, + 499 + ], + "type": "text", + "content": "5 Conclusions" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 482, + 666 + ], + "type": "text", + "content": "In this work, we present a new approach for parameter-efficient image-to-video transfer learning, called ZeroI2V. By fully leveraging the powerful representational capabilities of pre-trained image models, our approach enables image transformers to perform video tasks without introducing extra costs during inferences. Our proposed STDHA achieves efficient spatial-temporal modeling at zero extra computation and parameters. In addition, through structural reparameterization and full adaptation strategies, we successfully use a linear structure to achieve zero extra inference cost image-to-video adaptation for the first time. ZeroI2V shows strong performance compared to previous full fine-tuning and PETL methods on widely used video understanding benchmarks while maintaining parameter and inference efficiency. Due to the simplicity and versatility of our method, we believe it can be easily extended to other video tasks and even multi-modal understanding tasks. We will further investigate this direction in future work." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 101 + ], + "type": "text", + "content": "X. Li et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 175 + ], + "type": "text", + "content": "Acknowledgements. This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the National Natural Science Foundation of China (No. 62076119, No. 61921006), the Fundamental Research Funds for the Central Universities (No. 020214380119), and the Collaborative Innovation Center of Novel Software Technology and Industrialization." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 193, + 197, + 205 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 193, + 197, + 205 + ], + "spans": [ + { + "bbox": [ + 133, + 193, + 197, + 205 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 217, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 138, + 217, + 481, + 240 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 217, + 481, + 240 + ], + "spans": [ + { + "bbox": [ + 138, + 217, + 481, + 240 + ], + "type": "text", + "content": "1. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lucic, M., Schmid, C.: Vivit: A video vision transformer. In: Int. Conf. Comput. Vis. pp. 6816-6826 (2021)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 240, + 481, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 240, + 481, + 262 + ], + "spans": [ + { + "bbox": [ + 138, + 240, + 481, + 262 + ], + "type": "text", + "content": "2. Ba, L.J., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 262, + 481, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 262, + 481, + 283 + ], + "spans": [ + { + "bbox": [ + 138, + 262, + 481, + 283 + ], + "type": "text", + "content": "3. Bao, H., Dong, L., Piao, S., Wei, F.: Beit: BERT pre-training of image transformers. In: Int. Conf. Learn. Represent. (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 284, + 481, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 284, + 481, + 304 + ], + "spans": [ + { + "bbox": [ + 138, + 284, + 481, + 304 + ], + "type": "text", + "content": "4. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? In: Int. Conf. Mach. Learn. vol. 139, pp. 
813-824 (2021)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 305, + 481, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 305, + 481, + 338 + ], + "spans": [ + { + "bbox": [ + 138, + 305, + 481, + 338 + ], + "type": "text", + "content": "5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Adv. Neural Inform. Process. Syst. vol. 33, pp. 1877-1901 (2020)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 339, + 481, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 339, + 481, + 371 + ], + "spans": [ + { + "bbox": [ + 138, + 339, + 481, + 371 + ], + "type": "text", + "content": "6. Bulat, A., Pérez-Rúa, J., Sudhakaran, S., Martínez, B., Tzimiropoulos, G.: Spacetime mixing attention for video transformer. In: Adv. Neural Inform. Process. Syst. pp. 19594-19607 (2021)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 371, + 481, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 371, + 481, + 403 + ], + "spans": [ + { + "bbox": [ + 138, + 371, + 481, + 403 + ], + "type": "text", + "content": "7. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., Joulin, A.: Emerging properties in self-supervised vision transformers. In: Int. Conf. Comput. Vis. pp. 9630-9640 (2021)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 403, + 481, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 403, + 481, + 425 + ], + "spans": [ + { + "bbox": [ + 138, + 403, + 481, + 425 + ], + "type": "text", + "content": "8. Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the kinetics dataset. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4724-4733 (2017)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 425, + 481, + 457 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 425, + 481, + 457 + ], + "spans": [ + { + "bbox": [ + 138, + 425, + 481, + 457 + ], + "type": "text", + "content": "9. Chen, S., Ge, C., Tong, Z., Wang, J., Song, Y., Wang, J., Luo, P.: Adaptformer: Adapting vision transformers for scalable visual recognition. In: Adv. Neural Inform. Process. Syst. (2022)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 458, + 481, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 458, + 481, + 491 + ], + "spans": [ + { + "bbox": [ + 138, + 458, + 481, + 491 + ], + "type": "text", + "content": "0. Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., Jitsev, J.: Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143 (2022)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 491, + 481, + 524 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 491, + 481, + 524 + ], + "spans": [ + { + "bbox": [ + 138, + 491, + 481, + 524 + ], + "type": "text", + "content": "1. Devlin, J., Chang, M., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT. pp. 
4171-4186 (2019)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 524, + 481, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 524, + 481, + 567 + ], + "spans": [ + { + "bbox": [ + 138, + 524, + 481, + 567 + ], + "type": "text", + "content": "2. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Int. Conf. Learn. Represent. (2021)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 567, + 481, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 567, + 481, + 589 + ], + "spans": [ + { + "bbox": [ + 138, + 567, + 481, + 589 + ], + "type": "text", + "content": "3. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. In: Int. Conf. Comput. Vis. pp. 6804-6815 (2021)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 589, + 481, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 589, + 481, + 611 + ], + "spans": [ + { + "bbox": [ + 138, + 589, + 481, + 611 + ], + "type": "text", + "content": "4. Feichtenhofer, C.: X3D: expanding architectures for efficient video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 200-210 (2020)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 138, + 611, + 481, + 632 + ], + "type": "text", + "content": "5. Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video recognition. In: Int. Conf. Comput. Vis. pp. 6201-6210 (2019)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "type": "text", + "content": "6. Goyal, R., Kahou, S.E., Michalski, V., Materzynska, J., Westphal, S., Kim, H., Haenel, V., Fründ, I., Yianilos, P., Mueller-Freitag, M., Hoppe, F., Thurau, C., Bax, I., Memisevic, R.: The \"something something\" video database for learning" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 117, + 482, + 665 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 149, + 117, + 481, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 117, + 481, + 138 + ], + "spans": [ + { + "bbox": [ + 149, + 117, + 481, + 138 + ], + "type": "text", + "content": "and evaluating visual common sense. In: Int. Conf. Comput. Vis. pp. 5843-5851. 
IEEE Computer Society (2017)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 140, + 482, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 140, + 482, + 183 + ], + "spans": [ + { + "bbox": [ + 133, + 140, + 482, + 183 + ], + "type": "text", + "content": "17. Gu, C., Sun, C., Ross, D.A., Vondrick, C., Pantofaru, C., Li, Y., Vijayanarasimhan, S., Toderici, G., Ricco, S., Sukthankar, R., et al.: Ava: A video dataset of spatiotemporally localized atomic visual actions. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 6047-6056 (2018)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 184, + 482, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 184, + 482, + 217 + ], + "spans": [ + { + "bbox": [ + 133, + 184, + 482, + 217 + ], + "type": "text", + "content": "18. He, K., Chen, X., Xie, S., Li, Y., Dollar, P., Girshick, R.B.: Masked autoencoders are scalable vision learners. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 15979-15988 (2022)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 218, + 482, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 218, + 482, + 251 + ], + "spans": [ + { + "bbox": [ + 132, + 218, + 482, + 251 + ], + "type": "text", + "content": "19. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 9726-9735 (2020)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 251, + 481, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 251, + 481, + 274 + ], + "spans": [ + { + "bbox": [ + 132, + 251, + 481, + 274 + ], + "type": "text", + "content": "20. He, X., Li, C., Zhang, P., Yang, J., Wang, X.E.: Parameter-efficient model adaptation for vision transformers. arXiv preprint arXiv:2203.16329 (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 274, + 481, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 274, + 481, + 307 + ], + "spans": [ + { + "bbox": [ + 132, + 274, + 481, + 307 + ], + "type": "text", + "content": "21. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., de Laroussilhe, Q., Gesmundo, A., Attariyan, M., Gelly, S.: Parameter-efficient transfer learning for NLP. In: Int. Conf. Mach. Learn. vol. 97, pp. 2790-2799 (2019)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 308, + 481, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 308, + 481, + 340 + ], + "spans": [ + { + "bbox": [ + 132, + 308, + 481, + 340 + ], + "type": "text", + "content": "22. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: Lora: Low-rank adaptation of large language models. In: Int. Conf. Learn. Represent. (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 341, + 481, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 341, + 481, + 363 + ], + "spans": [ + { + "bbox": [ + 132, + 341, + 481, + 363 + ], + "type": "text", + "content": "23. Jia, M., Tang, L., Chen, B.C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.N.: Visual prompt tuning. In: Eur. Conf. Comput. Vis. pp. 
709-727 (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 364, + 481, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 364, + 481, + 396 + ], + "spans": [ + { + "bbox": [ + 132, + 364, + 481, + 396 + ], + "type": "text", + "content": "24. Ju, C., Han, T., Zheng, K., Zhang, Y., Xie, W.: Prompting visual-language models for efficient video understanding. In: Eur. Conf. Comput. Vis. pp. 105-124. Springer (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 397, + 481, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 397, + 481, + 430 + ], + "spans": [ + { + "bbox": [ + 132, + 397, + 481, + 430 + ], + "type": "text", + "content": "25. Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: Hmdb: a large video database for human motion recognition. In: Int. Conf. Comput. Vis. pp. 2556-2563. IEEE (2011)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 431, + 481, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 431, + 481, + 464 + ], + "spans": [ + { + "bbox": [ + 132, + 431, + 481, + 464 + ], + "type": "text", + "content": "26. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045-3059 (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 464, + 481, + 497 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 464, + 481, + 497 + ], + "spans": [ + { + "bbox": [ + 132, + 464, + 481, + 497 + ], + "type": "text", + "content": "27. Li, J., Li, D., Xiong, C., Hoi, S.C.H.: BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In: Int. Conf. Mach. Learn. vol. 162, pp. 12888-12900 (2022)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 498, + 481, + 530 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 498, + 481, + 530 + ], + "spans": [ + { + "bbox": [ + 132, + 498, + 481, + 530 + ], + "type": "text", + "content": "28. Li, K., Wang, Y., Gao, P., Song, G., Liu, Y., Li, H., Qiao, Y.: Uniformer: Unified transformer for efficient spatial-temporal representation learning. In: Int. Conf. Learn. Represent. (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 532, + 481, + 564 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 532, + 481, + 564 + ], + "spans": [ + { + "bbox": [ + 132, + 532, + 481, + 564 + ], + "type": "text", + "content": "29. Li, K., Wang, Y., He, Y., Li, Y., Wang, Y., Wang, L., Qiao, Y.: Uniformerv2: Unlocking the potential of image vits for video understanding. In: Int. Conf. Comput. Vis. pp. 1632-1643 (2023)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 566, + 481, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 566, + 481, + 587 + ], + "spans": [ + { + "bbox": [ + 132, + 566, + 481, + 587 + ], + "type": "text", + "content": "30. Li, T., Wang, L.: Learning spatiotemporal features via video and text pair discrimination. arXiv preprint arXiv:2001.05691 (2020)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 588, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 588, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 132, + 588, + 481, + 632 + ], + "type": "text", + "content": "31. 
Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582-4597 (2021)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 632, + 481, + 665 + ], + "type": "text", + "content": "32. Li, X., Huang, Z., Wang, J., Li, K., Wang, L.: Videoeval: Comprehensive benchmark suite for low-cost evaluation of video foundation model. arXiv preprint arXiv:2407.06491 (2024)" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "text", + "content": "33. Li, Y., Ji, B., Shi, X., Zhang, J., Kang, B., Wang, L.: TEA: temporal excitation and aggregation for action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 906-915 (2020)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "spans": [ + { + "bbox": [ + 130, + 150, + 482, + 182 + ], + "type": "text", + "content": "34. Li, Y., Wu, C., Fan, H., Mangalam, K., Xiong, B., Malik, J., Feichtenhofer, C.: Mvitv2: Improved multiscale vision transformers for classification and detection. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 4794-4804 (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "type": "text", + "content": "35. Li, Y., Li, Y., Vasconcelos, N.: Resound: Towards action recognition without representation bias. In: Eur. Conf. Comput. Vis. pp. 513-528 (2018)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "spans": [ + { + "bbox": [ + 132, + 205, + 481, + 226 + ], + "type": "text", + "content": "36. Lian, D., Zhou, D., Feng, J., Wang, X.: Scaling & shifting your features: A new baseline for efficient model tuning. In: Adv. Neural Inform. Process. Syst. 
(2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 227, + 481, + 258 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 227, + 481, + 258 + ], + "spans": [ + { + "bbox": [ + 132, + 227, + 481, + 258 + ], + "type": "text", + "content": "37. Lin, J., Gan, C., Wang, K., Han, S.: TSM: temporal shift module for efficient and scalable video understanding on edge devices. IEEE Trans. Pattern Anal. Mach. Intell. 44(5), 2760-2774 (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 259, + 481, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 259, + 481, + 293 + ], + "spans": [ + { + "bbox": [ + 132, + 259, + 481, + 293 + ], + "type": "text", + "content": "38. Lin, Z., Geng, S., Zhang, R., Gao, P., de Melo, G., Wang, X., Dai, J., Qiao, Y., Li, H.: Frozen CLIP models are efficient video learners. In: Eur. Conf. Comput. Vis. vol. 13695, pp. 388-404 (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "spans": [ + { + "bbox": [ + 132, + 293, + 481, + 314 + ], + "type": "text", + "content": "39. Liu, M., Wang, Z., Ji, S.: Non-local graph neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 44(12), 10270-10276 (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 133, + 315, + 481, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 315, + 481, + 346 + ], + "spans": [ + { + "bbox": [ + 133, + 315, + 481, + 346 + ], + "type": "text", + "content": "40. Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., Wei, F., Guo, B.: Swin transformer V2: scaling up capacity and resolution. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 11999-12009 (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 133, + 347, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 347, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 133, + 347, + 481, + 369 + ], + "type": "text", + "content": "41. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video swim transformer. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3192-3201 (2022)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 133, + 369, + 481, + 391 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 369, + 481, + 391 + ], + "spans": [ + { + "bbox": [ + 133, + 369, + 481, + 391 + ], + "type": "text", + "content": "42. Liu, Z., Wang, L., Wu, W., Qian, C., Lu, T.: TAM: temporal adaptive module for video recognition. In: Int. Conf. Comput. Vis. pp. 13688-13698 (2021)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 133, + 392, + 481, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 392, + 481, + 423 + ], + "spans": [ + { + "bbox": [ + 133, + 392, + 481, + 423 + ], + "type": "text", + "content": "43. Lu, C., Jin, X., Huang, Z., Hou, Q., Cheng, M., Feng, J.: CMAE-V: contrastive masked autoencoders for video action recognition. arXiv preprint arXiv:2301.06018 (2023)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 133, + 424, + 481, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 424, + 481, + 445 + ], + "spans": [ + { + "bbox": [ + 133, + 424, + 481, + 445 + ], + "type": "text", + "content": "44. Michel, P., Levy, O., Neubig, G.: Are sixteen heads really better than one? In: Adv. Neural Inform. 
Process. Syst. pp. 14014-14024 (2019)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 133, + 447, + 481, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 447, + 481, + 479 + ], + "spans": [ + { + "bbox": [ + 133, + 447, + 481, + 479 + ], + "type": "text", + "content": "45. Ni, B., Peng, H., Chen, M., Zhang, S., Meng, G., Fu, J., Xiang, S., Ling, H.: Expanding language-image pretrained models for general video recognition. In: Eur. Conf. Comput. Vis. vol. 13664, pp. 1-18 (2022)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 133, + 479, + 481, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 479, + 481, + 511 + ], + "spans": [ + { + "bbox": [ + 133, + 479, + 481, + 511 + ], + "type": "text", + "content": "46. Nie, X., Ni, B., Chang, J., Meng, G., Huo, C., Zhang, Z., Xiang, S., Tian, Q., Pan, C.: Pro-tuning: Unified prompt tuning for vision tasks. arXiv preprint arXiv:2207.14381 (2022)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 133, + 512, + 481, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 512, + 481, + 577 + ], + "spans": [ + { + "bbox": [ + 133, + 512, + 481, + 577 + ], + "type": "text", + "content": "47. Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., Assran, M., Ballas, N., Galuba, W., Howes, R., Huang, P., Li, S., Misra, I., Rabbat, M.G., Sharma, V., Synnaeve, G., Xu, H., Jégou, H., Mairal, J., Labatut, P., Joulin, A., Bojanowski, P.: Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 (2023)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 133, + 578, + 481, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 578, + 481, + 599 + ], + "spans": [ + { + "bbox": [ + 133, + 578, + 481, + 599 + ], + "type": "text", + "content": "48. Pan, J., Lin, Z., Zhu, X., Shao, J., Li, H.: St-adapter: Parameter-efficient image-to-video transfer learning. In: Adv. Neural Inform. Process. Syst. (2022)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 133, + 600, + 481, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 600, + 481, + 643 + ], + "spans": [ + { + "bbox": [ + 133, + 600, + 481, + 643 + ], + "type": "text", + "content": "49. Pfeiffer, J., Kamath, A., Rückle, A., Cho, K., Gurevych, I.: Adapterfusion: Nondestructive task composition for transfer learning. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. pp. 487-503 (2021)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 133, + 644, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 644, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 644, + 481, + 665 + ], + "type": "text", + "content": "50. Pfeiffer, J., Rückle, A., Poth, C., Kamath, A., Vulic, I., Ruder, S., Cho, K., Gurevych, I.: Adapterhub: A framework for adapting transformers. 
In: Proceedings of the" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 147, + 116, + 481, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 116, + 481, + 139 + ], + "spans": [ + { + "bbox": [ + 147, + 116, + 481, + 139 + ], + "type": "text", + "content": "2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 46-54 (2020)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 140, + 481, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 140, + 481, + 183 + ], + "spans": [ + { + "bbox": [ + 133, + 140, + 481, + 183 + ], + "type": "text", + "content": "51. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: Int. Conf. Mach. Learn. vol. 139, pp. 8748-8763 (2021)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 184, + 481, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 184, + 481, + 206 + ], + "spans": [ + { + "bbox": [ + 132, + 184, + 481, + 206 + ], + "type": "text", + "content": "52. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving language understanding by generative pre-training. OpenAI blog (2018)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 207, + 481, + 229 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 207, + 481, + 229 + ], + "spans": [ + { + "bbox": [ + 132, + 207, + 481, + 229 + ], + "type": "text", + "content": "53. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 230, + 481, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 230, + 481, + 251 + ], + "spans": [ + { + "bbox": [ + 132, + 230, + 481, + 251 + ], + "type": "text", + "content": "54. Soomro, K., Zamir, A.R., Shah, M.: Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 132, + 252, + 481, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 252, + 481, + 274 + ], + "spans": [ + { + "bbox": [ + 132, + 252, + 481, + 274 + ], + "type": "text", + "content": "55. Tan, J., Zhao, X., Shi, X., Kang, B., Wang, L.: Pointtad: Multi-label temporal action detection with learnable query points. 
NIPS 35, 15268-15280 (2022)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 275, + 481, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 275, + 481, + 307 + ], + "spans": [ + { + "bbox": [ + 132, + 275, + 481, + 307 + ], + "type": "text", + "content": "56. Tong, Z., Song, Y., Wang, J., Wang, L.: Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In: Adv. Neural Inform. Process. Syst. (2022)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 308, + 481, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 308, + 481, + 330 + ], + "spans": [ + { + "bbox": [ + 132, + 308, + 481, + 330 + ], + "type": "text", + "content": "57. Tschannen, M., Mustafa, B., Houlsby, N.: Clippo: Image-and-language understanding from pixels only. arXiv preprint arXiv:2212.08045 (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 331, + 481, + 353 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 331, + 481, + 353 + ], + "spans": [ + { + "bbox": [ + 132, + 331, + 481, + 353 + ], + "type": "text", + "content": "58. Tu, S., Dai, Q., Wu, Z., Cheng, Z., Hu, H., Jiang, Y.: Implicit temporal modeling with learnable alignment for video recognition. In: Int. Conf. Comput. Vis. (2023)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 354, + 481, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 354, + 481, + 386 + ], + "spans": [ + { + "bbox": [ + 132, + 354, + 481, + 386 + ], + "type": "text", + "content": "59. Wang, L., Huang, B., Zhao, Z., Tong, Z., He, Y., Wang, Y., Wang, Y., Qiao, Y.: Videomae V2: scaling video masked autoencoders with dual masking. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 387, + 481, + 419 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 387, + 481, + 419 + ], + "spans": [ + { + "bbox": [ + 132, + 387, + 481, + 419 + ], + "type": "text", + "content": "60. Wang, L., Tong, Z., Ji, B., Wu, G.: TDN: temporal difference networks for efficient action recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1895-1904 (2021)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 132, + 420, + 481, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 420, + 481, + 453 + ], + "spans": [ + { + "bbox": [ + 132, + 420, + 481, + 453 + ], + "type": "text", + "content": "61. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Gool, L.V.: Temporal segment networks: Towards good practices for deep action recognition. In: Eur. Conf. Comput. Vis. vol. 9912, pp. 20-36 (2016)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 132, + 454, + 481, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 454, + 481, + 475 + ], + "spans": [ + { + "bbox": [ + 132, + 454, + 481, + 475 + ], + "type": "text", + "content": "62. Wang, M., Xing, J., Liu, Y.: Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472 (2021)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "spans": [ + { + "bbox": [ + 132, + 476, + 481, + 509 + ], + "type": "text", + "content": "63. 
Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y., Zhou, L., Yuan, L.: BEVT: BERT pretraining of video transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 14713-14723 (2022)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 132, + 510, + 481, + 542 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 510, + 481, + 542 + ], + "spans": [ + { + "bbox": [ + 132, + 510, + 481, + 542 + ], + "type": "text", + "content": "64. Wang, Y., He, Y., Li, Y., Li, K., Yu, J., Ma, X., Li, X., Chen, G., Chen, X., Wang, Y., et al.: Intervid: A large-scale video-text dataset for multimodal understanding and generation. In: ICLR (2024)" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 132, + 544, + 481, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 544, + 481, + 565 + ], + "spans": [ + { + "bbox": [ + 132, + 544, + 481, + 565 + ], + "type": "text", + "content": "65. Wu, W., Sun, Z., Ouyang, W.: Revisiting classifier: Transferring vision-language models for video recognition. In: AAAI Conf. Artif. Intell. pp. 2847-2855 (2023)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 132, + 566, + 481, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 566, + 481, + 598 + ], + "spans": [ + { + "bbox": [ + 132, + 566, + 481, + 598 + ], + "type": "text", + "content": "66. Xiang, W., Li, C., Wang, B., Wei, X., Hua, X., Zhang, L.: Spatiotemporal self-attention modeling with temporal patch shift for action recognition. In: Eur. Conf. Comput. Vis. vol. 13663, pp. 627-644 (2022)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 132, + 599, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 599, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 132, + 599, + 481, + 632 + ], + "type": "text", + "content": "67. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: Eur. Conf. Comput. Vis. pp. 305–321 (2018)" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 132, + 633, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 633, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 132, + 633, + 481, + 665 + ], + "type": "text", + "content": "68. Xu, C., Zhu, Y., Shen, H., Chen, B., Liao, Y., Chen, X., Wang, L.: Progressive visual prompt learning with contrastive feature re-formation. arXiv preprint arXiv:2304.08386 (2023)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 212, + 100 + ], + "type": "text", + "content": "X. Li et al." 
+ } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 424 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 149 + ], + "type": "text", + "content": "69. Xu, C., Zhu, Y., Zhang, G., Shen, H., Liao, Y., Chen, X., Wu, G., Wang, L.: Dpl: Decoupled prompt learning for vision-language models. arXiv preprint arXiv:2308.10061 (2023)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 482, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 482, + 183 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 482, + 183 + ], + "type": "text", + "content": "70. Yan, S., Xiong, X., Arnab, A., Lu, Z., Zhang, M., Sun, C., Schmid, C.: Multiview transformers for video recognition. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 3323-3333 (2022)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "spans": [ + { + "bbox": [ + 132, + 183, + 481, + 205 + ], + "type": "text", + "content": "71. Yang, T., Zhu, Y., Xie, Y., Zhang, A., Chen, C., Li, M.: Aim: Adapting image models for efficient video action recognition. In: Int. Conf. Learn. Represent. (2023)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 205, + 481, + 247 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 205, + 481, + 247 + ], + "spans": [ + { + "bbox": [ + 133, + 205, + 481, + 247 + ], + "type": "text", + "content": "72. Zaken, E.B., Goldberg, Y., Ravfogel, S.: Bitfit: Simple parameter-efficient fin-tuning for transformer-based masked language-models. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 1-9 (2022)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "spans": [ + { + "bbox": [ + 132, + 249, + 481, + 270 + ], + "type": "text", + "content": "73. Zhai, X., Kolesnikov, A., Houlsby, N., Beyer, L.: Scaling vision transformers. In: IEEE Conf. Comput. Vis. Pattern Recog. pp. 1204-1213 (2022)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 133, + 271, + 481, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 271, + 481, + 303 + ], + "spans": [ + { + "bbox": [ + 133, + 271, + 481, + 303 + ], + "type": "text", + "content": "74. Zhang, G., Zhu, Y., Wang, H., Chen, Y., Wu, G., Wang, L.: Extracting motion and appearance via inter-frame attention for efficient video frame interpolation. In: IEEE Conf. Comput. Vis. Pattern Recog. (2023)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "spans": [ + { + "bbox": [ + 132, + 304, + 481, + 325 + ], + "type": "text", + "content": "75. Zhang, H., Hao, Y., Ngo, C.: Token shift transformer for video classification. In: ACM Int. Conf. Multimedia. pp. 
917-925 (2021)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 325, + 481, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 325, + 481, + 347 + ], + "spans": [ + { + "bbox": [ + 132, + 325, + 481, + 347 + ], + "type": "text", + "content": "76. Zhang, Y., Zhou, K., Liu, Z.: Neural prompt search. arXiv preprint arXiv:2206.04673 (2022)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 348, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 348, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 132, + 348, + 481, + 369 + ], + "type": "text", + "content": "77. Zhou, B., Andonian, A., Oliva, A., Torralba, A.: Temporal relational reasoning in videos. In: Eur. Conf. Comput. Vis. vol. 11205, pp. 831-846 (2018)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 133, + 370, + 481, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 370, + 481, + 402 + ], + "spans": [ + { + "bbox": [ + 133, + 370, + 481, + 402 + ], + "type": "text", + "content": "78. Zhu, Y., Ji, Y., Zhao, Z., Wu, G., Wang, L.: Awt: Transferring vision-language models via augmentation, weighting, and transportation. arXiv preprint arXiv:2407.04603 (2024)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 402, + 481, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 402, + 481, + 424 + ], + "spans": [ + { + "bbox": [ + 132, + 402, + 481, + 424 + ], + "type": "text", + "content": "79. Zhu, Y., Zhang, G., Tan, J., Wu, G., Wang, L.: Dual detrs for multi-label temporal action detection. In: CVPR. pp. 18559-18569 (2024)" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "spans": [ + { + "bbox": [ + 413, + 91, + 447, + 100 + ], + "type": "text", + "content": "ZeroI2V" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_content_list.json b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4e80d429438a2e5c2520999c23a9bf93eb2c908a --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_content_list.json @@ -0,0 +1,2016 @@ +[ + { + "type": "text", + "text": "Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, and Björn Ommer", + "bbox": [ + 243, + 231, + 756, + 262 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "CompVis @ LMU Munich, MCML https://compvis.github.io/zigma/", + "bbox": [ + 383, + 273, + 617, + 301 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract The diffusion model has long been plagued by scalability and quadratic complexity issues, especially within transformer-based structures. 
In this study, we aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation. Firstly, we identify a critical oversight in most current Mamba-based vision methods, namely the lack of consideration for spatial continuity in the scan scheme of Mamba. Secondly, building upon this insight, we introduce Zigzag Mamba, a simple, plug-and-play, minimal-parameter burden, DiT style solution, which outperforms Mamba-based baselines and demonstrates improved speed and memory utilization compared to transformer-based baselines, also this heterogeneous layerwise scan enables zero memory and speed burden when we consider more scan paths. Lastly, we integrate Zigzag Mamba with the Stochastic Interpolant framework to investigate the scalability of the model on large-resolution visual datasets, such as FacesHQ $1024 \\times 1024$ and UCF101, MultiModal-CelebA-HQ, and MS COCO $256 \\times 256$ .", + "bbox": [ + 261, + 345, + 743, + 566 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Diffusion Model $\\cdot$ State-Space Model $\\cdot$ Stochastic Interpolants", + "bbox": [ + 259, + 580, + 740, + 609 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 215, + 638, + 375, + 652 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Diffusion models have demonstrated significant advancements across various applications, including image processing [45, 48, 84], video analysis [44], point cloud processing [109], representation learning [30] and human pose estimation [32]. Many of these models are built upon Latent Diffusion Models (LDM) [84], which are typically based on the UNet backbone. However, scalability remains a significant challenge in LDMs [50]. Recently, transformer-based structures have gained popularity due to their scalability [9, 80] and effectiveness in multi-modal training [10]. Notably, the transformer-based structure DiT [80] has even contributed to enhancing the high-fidelity video generation model SORA [78] by OpenAI. Despite efforts to alleviate the quadratic complexity of the attention mechanism through techniques such as windowing [71], sliding [13], sparsification [19, 56],", + "bbox": [ + 212, + 672, + 787, + 843 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "#", + "bbox": [ + 220, + 143, + 272, + 181 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "ZigMa: A DiT-style Zigzag Mamba Diffusion Model", + "bbox": [ + 272, + 159, + 782, + 200 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "- hashing [20, 93], Ring Attention [15, 66], Flash Attention [23] or a combination of them [8, 124], it remains a bottleneck for diffusion models.", + "bbox": [ + 212, + 145, + 782, + 175 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "On the other hand, State-Space Models [34, 35, 39] have demonstrated significant potential for long sequence modeling, rivaling transformer-based methods. Their biological similarity [95] and efficient memory state also advocate for the use of the State-Space model over the transformer. Several methods [29, 33, 35, 88] have been proposed to enhance the robustness [116], scalability [33], and efficiency [35, 36] of State-Space Models. Among these, a method called Mamba [33] aims to alleviate these issues through work-efficient parallel scanning and other data-dependent innovations. 
However, the advantage of Mamba lies in 1D sequence modeling, and extending it to 2D images is a challenging question. Previous works [70, 123] have proposed flattening 2D tokens directly by computer hierarchy such as row-and-column-major order, but this approach neglects Spatial Continuity, as shown in Figure 1. Other works [67, 73] consider various directions in a single Mamba block, but this introduces additional parameters and GPU memory burden. In this paper, we aim to emphasize the importance of Spatial Continuity in Mamba and propose several intuitive and simple methods to enable the application of Mamba blocks to 2D images by incorporating continuity-based inductive biases in images. We also generalize these methods to 3D with spatial-temporal factorization on 3D sequence.", + "bbox": [ + 212, + 176, + 787, + 448 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In the end, Stochastic Interpolant [3] provides a more generalized framework that can uniform various generative models including, Normalizing Flow [17], diffusion model [43,89,91], Flow matching [4,64,69], and Schrödinger Bridge [65]. Previously, some works [74] explore the Stochastic Interpolant on relatively small resolutions, e.g., $256 \\times 256$ , $512 \\times 512$ . In this work, we aim to explore it in further more complex scenarios e.g., $1024 \\times 1024$ resolution and even in videos.", + "bbox": [ + 212, + 449, + 787, + 539 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our contributions are as follows: Firstly, we identify the critical issue of Spatial Continuity in generalizing the Mamba block from 1D sequence modeling to 2D image and 3D video modeling. Building on this insight, we propose a simple, plug-and-play, zero-parameter heterogeneous layerwise scan paradigm named Zigzag Mamba (ZigMa) that leverages spatial continuity to maximally incorporate the inductive bias from visual data. Secondly, we extend the methodology from 2D to 3D by factorizing the spatial and temporal sequences to optimize performance. Secondly, we provide comprehensive analysis surrounding the Mamba block within the regime of diffusion models. Lastly, we demonstrate that our designed Zigzag Mamba outperforms related Mamba-based baselines, representing the first exploration of Stochastic Interpolants on large-scale image data $(1024\\times 1024)$ and videos.", + "bbox": [ + 212, + 539, + 787, + 720 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Works", + "text_level": 1, + "bbox": [ + 215, + 744, + 397, + 762 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Mamba. Several works [102, 103, 103] have demonstrated that the State-Space Model possesses universal approximation ability under certain conditions. Mamba, as a new State-Space Model, has superior potential for modeling long sequences efficiently, which has been explored in various fields such as medical imag-", + "bbox": [ + 212, + 779, + 792, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 127 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/cac781c348d71fd43da8f1e4c58e7d32975e0218880efb91f784ab995d41237f.jpg", + "image_caption": [ + "Figure 1: Motivation. Our Zigzag Mamba method improves the network's position-awareness by arranging and rearranging the scan path of Mamba in a heuristic manner." 
+ ], + "image_footnote": [], + "bbox": [ + 308, + 145, + 699, + 383 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "ing [73, 86, 108, 111], video [58, 79], image restoration [38, 122], graphs [12], NLP word byte [100], tabular data [2], point clouds [61], human motion [106, 120], multi-task [62] and image generation [27]. Among them, the most related to us are VisionMamba [70, 123], S4ND [77] and Mamba-ND [59]. VisionMamba [70, 123] uses a bidirectional SSM in discriminative tasks which incurs a high computational cost. Our method applies a simple alternative mamba diffusion in generative models. S4ND [77] introduces local convolution into Mamba's reasoning process, moving beyond the use of only 1D data. Mamba-ND [59] takes multi-dimensionality into account in discriminative tasks, making use of various scans within a single block. In contrast, our focus is on distributing scan complexity across every layer of the network, thus maximizing the incorporation of inductive bias from visual data with zero parameter burden. Scan curve is an important direction in SSM, PointMamba [61] is a representative work that employs SSM with space curves (e.g., Hilbert) for point cloud analysis, achieving remarkable performance. In contrast with them, our preliminary results show that the Hilbert curve doesn't work well with our method (see Appendix), while our method can be regarded as the simplest Peano curve. For more information related to Mamba's work, please refer to the survey [105].", + "bbox": [ + 212, + 459, + 787, + 731 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Backbones in Diffusion Models. Diffusion models primarily employ UNet-based [43, 84] and ViT-based [9, 80] backbones. While UNet is known for high memory demands [84], ViT benefits from scalability [18, 24] and multi-modal learning [10]. However, ViT's quadratic complexity limits visual token processing, prompting studies towards mitigating this issue [13, 23, 104]. Our work, inspired by Mamba [33], explores an SSM-based model as a generic diffusion backbone, retaining ViT's modality-agnostic and sequential modeling advantages.", + "bbox": [ + 212, + 734, + 787, + 840 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 128 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 114, + 785, + 127 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Concurrently, DiffSSM [112] concentrates on unconditional and class conditioning within the S4 model [35]. DIS [27] mainly explores the state-space model on a relatively small resolution, which is not the exact focus of our work. Our work significantly differs from theirs as it primarily focuses on the backbone design using the Mamba block and extends it to text conditioning. Furthermore, we apply our method to more complex visual data.", + "bbox": [ + 212, + 145, + 787, + 238 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "SDE and ODE in Diffusion models. The realm of Score-based Generative Models encompasses significant contributions from foundational works such as Score Matching with Langevin Dynamics (SMLD) by Song et al. [90], and the advent of Diffusion Models with Denoising Score Matching (DDPMs) proposed by Ho et al. [43]. These methodologies operate within the framework of Stochastic Differential Equations (SDEs), a concept further refined in the research of Song et al. [91]. Recent research strides, as exemplified by Karras et al. [52] and Lee et al. 
[57], have showcased the efficacy of employing Ordinary Differential Equation (ODE) samplers for diffusion SDEs, offering significant reductions in sampling costs compared to traditional approaches that entail discretizing diffusion SDEs. Furthermore, within the domain of Flow Matching [64] and Rectified Flow [68], both SMLD and DDPMs emerge as specialized instances under distinct paths of the Probability Flow ODE framework [91], with broad applications in vision [22,28,49], depth [37], human motion [47], even language [46]. These models typically utilize velocity field parameterizations employing the linear interpolant, a concept that finds broader applications in the Stochastic Interpolant framework [3], with subsequent generalizations extending to manifold settings [14]. The SiT model [74] scrutinizes the interplay between interpolation methods in both sampling and training contexts, albeit in the context of smaller resolutions such as $512 \\times 512$ . Our research endeavors to extend these insights to a larger scale, focusing on the generalization capabilities for 2D images of $1024 \\times 1024$ and 3D video data.", + "bbox": [ + 212, + 241, + 789, + 574 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 215, + 619, + 330, + 635 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, we begin by providing background information on State-Space Models [34,35,39], with a particular focus on a special case known as Mamba [33]. We then highlight the critical issue of Spatial Continuity within the Mamba framework, and based on this insight, we propose the Zigzag Mamba. This enhancement aims to improve the efficiency of 2D data modeling by incorporating the continuity inductive bias inherent in 2D data. Furthermore, we design a basic cross-attention block upon Mamba block to achieve text-conditioning. Subsequently, we suggest extending this approach to 3D video data by factorizing the model into spatial and temporal dimensions, thereby facilitating the modeling process. Finally, we introduce the theoretical aspects of stochastic interpolation for training and sampling, which underpin our network architecture.", + "bbox": [ + 212, + 672, + 787, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Background: State-Space Models", + "text_level": 1, + "bbox": [ + 215, + 146, + 532, + 161 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "State Space Models (SSMs) [34, 35, 39] have been proven to handle long-range dependencies theoretically and empirically [36] with linear scaling w.r.t sequence length. 
In their general form, a linear state space model can be written as follows:", + "bbox": [ + 212, + 171, + 782, + 215 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nx ^ {\\prime} (t) = \\mathbf {A} (t) x (t) + \\mathbf {B} (t) u (t)\n$$\n", + "text_format": "latex", + "bbox": [ + 395, + 226, + 596, + 243 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\ny (t) = \\mathbf {C} (t) x (t) + \\mathbf {D} (t) u (t),\n$$\n", + "text_format": "latex", + "bbox": [ + 403, + 244, + 602, + 263 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "mapping a 1-D input sequence $u(t) \\in \\mathbb{R}$ to a 1-D output sequence $y(t) \\in \\mathbb{R}$ through an implicit N-D latent state sequence $x(t) \\in \\mathbb{R}^n$ . Concretely, deep SSMs seek to use stacks of this simple model in a neural sequence modeling architecture, where the parameters $\\mathbf{A}, \\mathbf{B}, \\mathbf{C}$ and $\\mathbf{D}$ for each layer can be learned via gradient descent.", + "bbox": [ + 212, + 273, + 784, + 349 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/95129ba59dc1054d299e18bed2f1b04a78fcf32e35ea8a48eb4214547258b996.jpg", + "image_caption": [ + "Figure 2: ZigMa. Our backbone is structured in L layers, mirroring the style of DiT [80]. We use the single-scan Mamba block as the primary reasoning module across different patches. To ensure the network is positionally aware, we've designed an arrange-rearrange scheme based on the single-scan Mamba. Different layers follow pairs of unique rearrange operation $\\Omega$ and reverse rearrange $\\bar{\\Omega}$ , optimizing the position-awareness of the method." + ], + "image_footnote": [], + "bbox": [ + 272, + 386, + 725, + 479 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Recently, Mamba [33] largely improved the flexibility of SSMs in Language Modelling by relaxing the time-invariance constraint on SSM parameters, while maintaining computational efficiency. Several studies [70, 123] have been conducted to adapt the use of Mamba from unidimensional language data to multidimensional visual data. While most of these studies try to duplicate the A to facilitate the new (reversed) direction, this approach can lead to additional parameters and an increased memory burden. In this paper, we focus on exploring the scanning scheme of Mamba in diffusion models to efficiently maximize the use of inductive-bias from multi-dimensional visual data with zero parameter and memory burden.", + "bbox": [ + 212, + 611, + 784, + 762 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Diffusion Backbone: Zigzag Mamba", + "text_level": 1, + "bbox": [ + 214, + 784, + 552, + 800 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "DiT-Style Network. We opt to use the framework of DiT by AdaLN [80] rather than the skip-layer focused U-ViT structure [9], as DiT has been validated as a", + "bbox": [ + 212, + 809, + 782, + 839 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "scalable structure in literature [10, 18, 78]. Additionally, the Hourglass structure with downsampling [76, 85] requires selecting the depth and width based on the complexity of the dataset and task. This requirement limits the flexibility of the solution. 
Considering the aforementioned points, it informs our Mamba network design depicted in Figure 4. The core component of this design is the Zigzag Scanning, which will be explained in the following paragraph.", + "bbox": [ + 212, + 146, + 787, + 236 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Zigzag Scanning in Mamba. Previous studies [101, 112] have used bidirectional scanning within the SSM framework. This approach has been expanded to include additional scanning directions [67, 70, 115] to account for the characteristics of 2D image data. These approaches unfold image patches along four directions, resulting in four distinct sequences. Each of these sequences is subsequently processed together through every SSM. However, since each direction may have different SSM parameters (A, B, C, and D), scaling up the number of directions could potentially lead to memory issues. In this work, we investigate the potential for amortizing the complexity of the Mamba into each layer of the network.", + "bbox": [ + 212, + 237, + 787, + 387 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our approach centers around the concept of token rearrangement before feeding them into the Forward Scan block. For a given input feature $\\mathbf{z}_i$ from layer $i$ , the output feature $\\mathbf{z}_{i + 1}$ of the Forward Scan block after the rearrangement can be expressed as:", + "bbox": [ + 212, + 388, + 785, + 448 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {z} _ {\\Omega_ {i}} = \\operatorname {a r r a n g e} \\left(\\mathbf {z} _ {i}, \\Omega_ {i}\\right), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 419, + 455, + 785, + 472 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\bar {\\mathbf {z}} _ {\\Omega_ {i}} = \\operatorname {s c a n} \\left(\\mathbf {z} _ {\\Omega_ {i}}\\right), \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 419, + 474, + 785, + 491 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {z} _ {i + 1} = \\operatorname {a r r a n g e} \\left(\\bar {\\mathbf {z}} _ {\\Omega_ {i}}, \\bar {\\Omega} _ {i}\\right), \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 413, + 493, + 785, + 510 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "$\\varOmega_{i}$ represents the 1D permutation of layer $i$ , which rearranges the order of the patch tokens by $\\varOmega_{i}$ , and $\\varOmega_{i}$ and $\\overline{\\varOmega}_{i}$ represent the reverse operation. This ensures that both $\\mathbf{z}_i$ and $\\mathbf{z}_{i + 1}$ maintain the sample order of the original image tokens.", + "bbox": [ + 214, + 516, + 785, + 561 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/68ff9f5378c491864ca7ac38a50b0592af57a87b66ddb62ea438947a29b71cf3.jpg", + "image_caption": [ + "(a) sweep-scan" + ], + "image_footnote": [], + "bbox": [ + 251, + 599, + 346, + 672 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/a8358b08a142e0575512c0d7f81c41f7bbbbdbc25746dbc00321c40215e479d8.jpg", + "image_caption": [ + "(b) zigzag-scan" + ], + "image_footnote": [], + "bbox": [ + 367, + 599, + 464, + 672 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b7bc70bf410c52cc5a982594131aa744e0549d0297aa8ce0f55f0e01de0f46b7.jpg", + "image_caption": [ + "Figure 3: The 2D Image Scan. Our mamba scan design is based on the sweep-scan scheme shown in subfigure (a). 
From this, we developed a zigzag-scan scheme displayed in subfigure (b) to enhance the continuity of the patches, thereby maximizing the potential of the Mamba block. Since there are several possible arrangements for these continuous scans, we have listed the eight most common zigzag-scans in subfigure (c)." + ], + "image_footnote": [], + "bbox": [ + 475, + 584, + 542, + 686 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/7c2665c6dc21713e769f8702090a1137b101d214de89f93f081122e32e9df29e.jpg", + "image_caption": [ + "(c) zigzag-scan with 8 schemes" + ], + "image_footnote": [], + "bbox": [ + 549, + 585, + 614, + 686 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/3ed27162bc2b6fd64e0502efd46623912b5fe90858eac34dd766c386006161c0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 622, + 585, + 689, + 686 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/40ca7ad38f7491b0c7ed4dbe70d27303b4c28ebad0119dceed97432837c25ae0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 697, + 585, + 763, + 686 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Now we explore the design of the $\\Omega_{i}$ operation, considering additional inductive biases from 2D images. We propose one key properties: Spatial Con", + "bbox": [ + 212, + 809, + 785, + 840 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 127 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "tinuity. Regarding Spatial Continuity, current innovations of Mamba in images [67, 70, 123] often squeeze 2D patch tokens directly following the computer hierarchy, such as row-and-column-major order. However, this approach may not be optimal for incorporating the inductive bias with neighboring tokens, as illustrated in Figure 3. To address this, we introduce a novel scanning scheme designed to maintain spatial continuity during the scan process. Additionally, we consider space-filling, which entails that for a patch of size $N \\times N$ , the length of the 1D continuous scanning scheme should be $N^2$ . This helps to efficiently incorporate tokens to maximize the potential of long sequence modeling within the Mamba block.", + "bbox": [ + 212, + 146, + 782, + 295 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Heterogeneous Layerwise Scan. To achieve the aforementioned property, we heuristically design eight possible space-filling continuous schemes $^1$ , denoted as $\\mathbf{S}_j$ (where $j \\in [0,7]$ ), as illustrated in Figure 3. While there may be other conceivable schemes, for simplicity, we limit our usage to these eight. Consequently, the scheme for each layer can be represented as $\\varOmega_{i} = \\mathbf{S}_{\\{i\\% 8\\}}$ , where $\\%$ denotes the modulo operator.", + "bbox": [ + 212, + 297, + 784, + 387 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/992b39739328f0a020ff69bdedff2e60e393a6bf1ae30d78bfdf9f6dbd2ecb16.jpg", + "image_caption": [ + "Figure 4: The Detail of our Zigzag Mamba block. The detail of Mamba Scan is shown in Figure 2. The condition can include a timestep and a text prompt. These are fed into an MLP, which separately modulates the Mamba scan for long sequence modeling and cross-attention for multi-modal reasoning." 
+ ], + "image_footnote": [], + "bbox": [ + 313, + 412, + 687, + 618 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Deploying text-condition on Zigzag Mamba. While Mamba offers the advantage of efficient long sequence modeling, it does so at the expense of the attention mechanism. As a result, there has been limited exploration into incorporating text-conditioning in Mamba-based diffusion models. To address this", + "bbox": [ + 212, + 714, + 782, + 773 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 114, + 784, + 125 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "1 We also experimented with more complex continuous space-filling paths, such as the Hilbert space-filling curve [75]. However, empirical findings indicate that this approach may lead to deteriorated results. For further detailed comparisons, please refer to the Appendix.", + "bbox": [ + 217, + 782, + 782, + 838 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "gap, we propose a straightforward cross-attention block with skip layers built upon the Mamba block, as illustrated in Figure 4. This design not only enables long sequence modeling but also facilitates multi-token conditioning, such as text-conditioning. Furthermore, it has the potential to provide interpretability [16, 42, 94], as cross-attention has been utilized in diffusion models.", + "bbox": [ + 212, + 146, + 782, + 222 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Generalizing to 3D videos by factorizing spatial and temporal information. In previous sections, our focus has been on the spatial 2D Mamba, where we designed several spatially continuous, space-filling 2D scanning schemes. In this section, we aim to leverage this experience to aid in designing corresponding mechanisms for 3D video processing. We commence our design process by extrapolating from the conventional directional Mamba, as depicted in Figure 5. Given a video feature input $\\mathbf{z} \\in \\mathbb{R}^{B \\times T \\times C \\times W \\times H}$ , we propose three variants of the Video Mamba Block for facilitating 3D video generation.", + "bbox": [ + 212, + 223, + 784, + 344 + ], + "page_idx": 7 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) Sweep-scan: In this approach, we directly flatten the 3D feature $\\mathbf{z}$ without considering spatial or temporal continuity. It's worth noting that the flattening process follows the computer hierarchy order, meaning that no continuity is preserved in the flattened representation.", + "(b) 3D Zigzag: Compared with the formulation of the 2D zigzag in previous subsections, we follow the similar design to generalize it to 3D Zigzag to keep the continuity in 2D and 3D simultaneously. Potentially, the scheme has much more complexity. We heuristically list 8 schemes as well. However, we empirically find that this scheme will lead to suboptimal optimization.", + "(c) Factorized 3D Zigzag = 2D Zigzag + 1D Sweep: To address the suboptimal optimization issue, we propose to factorize the spatial and temporal correlations as separate Mamba blocks. The order of their application can be adjusted as desired, for example, \"sstt\" or \"ststst\", where \"s\" represents the spatial-zigzag Mamba and \"t\" represents the temporal-zigzag Mamba. 
For a 1D temporal sweep, we simply opt for forward and backward scanning, since there is only one dimension on the time axis." + ], + "bbox": [ + 212, + 345, + 785, + 592 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Computation Analysis. For a visual sequence $\\mathbf{T} \\in \\mathbb{R}^{1 \\times M \\times D}$ , the computation complexity of global self-attention and $k$ -direction mamba and our zigzag mamba are as follows:", + "bbox": [ + 212, + 593, + 785, + 640 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta (\\text {s e l f - a t t e n t i o n}) = 4 \\mathrm {M D} ^ {2} + 2 \\mathrm {M} ^ {2} \\mathrm {D}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 346, + 672, + 784, + 689 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta (\\mathrm {k} - \\text {m a m b a}) = k \\times [ 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2} ], \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 346, + 693, + 784, + 710 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta (\\text {z i g z a g}) = 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2}, \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 346, + 714, + 784, + 731 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "where self-attention exhibits quadratic complexity with respect to sequence length M, while Mamba exhibits linear complexity (N is a fixed parameter, set to 16 by default). Here, $k$ represents the number of scan directions in a single Mamba block. Therefore, $k$ -mamba and zigzag share linear complexity with respect to self-attention. Moreover, our zigzag method can eliminate the $k$ series, further reducing the overall complexity.", + "bbox": [ + 212, + 750, + 790, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 127 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/345ced1a59da24d163dcef484064ca2ed5ecf182938801a4b72ab06f35b5075a.jpg", + "image_caption": [ + "Figure 5: The 3D Video Scan. (a) We illustrate the bidirectional Mamba with the sweep scan, where the spatial and temporal information is treated as a set of tokens with a computer-hierarchy order. (b) For the 3D zigzag-scan, we aim to maximize the potential of Mamba by employing a spatial continuous scan scheme and adopting the optimal zigzag scan solution, as depicted in Figure 3. (c) We further separate the reasoning between spatial and temporal information, resulting in a factorized combination of 2D spatial scan $(\\varOmega)$ plus a 1D temporal scan $(\\varOmega^{\\prime})$ scheme." + ], + "image_footnote": [], + "bbox": [ + 295, + 146, + 699, + 301 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Upon completing the design of the Zigzag Mamba network for improved visual inductive-bias integration, we proceed to combine it with a new diffusion framework, as illustrated below.", + "bbox": [ + 212, + 439, + 784, + 484 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "3.3 Diffusion Framework: Stochastic Interpolant", + "text_level": 1, + "bbox": [ + 214, + 508, + 625, + 523 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Sampling based on vector $\\mathbf{v}$ and score $\\mathbf{s}$ . 
Following [3, 96], the time-dependent probability distribution $p_t(\\mathbf{x})$ of $\\mathbf{x}_t$ also coincides with the distribution of the reverse-time SDE [6]:", + "bbox": [ + 212, + 532, + 784, + 579 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\nd \\mathbf {X} _ {t} = \\mathbf {v} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\frac {1}{2} w _ {t} \\mathbf {s} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\sqrt {w _ {t}} d \\bar {\\mathbf {W}} _ {t}, \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 333, + 589, + 785, + 619 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "where $\\bar{\\mathbf{W}}_t$ is a reverse-time Wiener process, $w_{t} > 0$ is an arbitrary time-dependent diffusion coefficient, $\\mathbf{s}(\\mathbf{x},t) = \\nabla \\log p_t(\\mathbf{x})$ is the score, and $\\mathbf{v}(\\mathbf{x},t)$ is given by the conditional expectation", + "bbox": [ + 212, + 630, + 787, + 676 + ], + "page_idx": 8 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathbf {v} (\\mathbf {x}, t) = \\mathbb {E} [ \\dot {\\mathbf {x}} _ {t} | \\mathbf {x} _ {t} = \\mathbf {x} ], \\\\ \\begin{array}{l} \\underline {{- [ - t ] = - t}} \\\\ = \\dot {\\alpha} _ {t} \\mathbb {E} \\left[ \\mathbf {x} _ {*} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right] + \\dot {\\sigma} _ {t} \\mathbb {E} \\left[ \\boldsymbol {\\varepsilon} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\end{array} \\tag {8} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 349, + 686, + 785, + 723 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "where $\\alpha_{t}$ is a decreasing function of $t$ , and $\\sigma_{t}$ is an increasing function of $t$ . Here, $\\dot{\\alpha}_{t}$ and $\\dot{\\sigma}_{t}$ denote the time derivatives of $\\alpha_{t}$ and $\\sigma_{t}$ , respectively.", + "bbox": [ + 212, + 734, + 784, + 763 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "As long as we can estimate the velocity $\\mathbf{v}(\\mathbf{x},t)$ and/or score $\\mathbf{s}(\\mathbf{x},t)$ fields, we can utilize it for the sampling process either by probability flow ODE [91] or the reverse-time SDE (7). Solving the reverse SDE (7) backwards in time from $\\mathbf{X}_T = \\varepsilon \\sim \\mathcal{N}(0,\\mathbf{I})$ enables generating samples from the approximated data distribution $p_0(\\mathbf{x})\\sim p(\\mathbf{x})$ . During sampling, we can perform direct sampling", + "bbox": [ + 212, + 765, + 787, + 840 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 774, + 116, + 785, + 126 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "from either ODE or SDEs to balance between sampling speed and fidelity. If we choose to conduct ODE sampling, we can achieve this simply by setting the noise term $\\mathbf{s}$ to zero.", + "bbox": [ + 212, + 146, + 787, + 190 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In [3], it shows that one of the two quantities $\\mathbf{s}_{\\theta}(\\mathbf{x},t)$ and $\\mathbf{v}_{\\theta}(\\mathbf{x},t)$ needs to be estimated in practice. 
This follows directly from the constraint", + "bbox": [ + 212, + 191, + 787, + 223 + ], + "page_idx": 9 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathbf {x} = \\mathbb {E} \\left[ \\mathbf {x} _ {t} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\tag {9} \\\\ = \\alpha_ {t} \\mathbb {E} [ \\mathbf {x} _ {*} | \\mathbf {x} _ {t} = \\mathbf {x} ] + \\sigma_ {t} \\mathbb {E} [ \\varepsilon | \\mathbf {x} _ {t} = \\mathbf {x} ], \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 369, + 234, + 785, + 270 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "which can be used to re-express the score $\\mathbf{s}(\\mathbf{x},t)$ in terms of the velocity $\\mathbf{v}(\\mathbf{x},t)$ as", + "bbox": [ + 212, + 281, + 785, + 310 + ], + "page_idx": 9 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {s} (\\mathbf {x}, t) = \\sigma_ {t} ^ {- 1} \\frac {\\alpha_ {t} \\mathbf {v} (\\mathbf {x} , t) - \\dot {\\alpha} _ {t} \\mathbf {x}}{\\dot {\\alpha} _ {t} \\sigma_ {t} - \\alpha_ {t} \\dot {\\sigma} _ {t}}. \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 392, + 321, + 785, + 353 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Thus, $\\mathbf{v}(\\mathbf{x},t)$ and $\\mathbf{s}(\\mathbf{x},t)$ can be mutually conversed. We illustrate how to compute them in the following.", + "bbox": [ + 212, + 363, + 785, + 393 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Estimating the score $\\mathbf{s}$ and the velocity $\\mathbf{v}$ . It has been shown in score-based diffusion models [91] that the score can be estimated parametrically as $\\mathbf{s}_{\\theta}(\\mathbf{x},t)$ using the loss", + "bbox": [ + 212, + 393, + 785, + 438 + ], + "page_idx": 9 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {s}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\sigma_ {t} \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) + \\varepsilon \\| ^ {2} ] \\mathrm {d} t. \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 370, + 439, + 785, + 474 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Similarly, the velocity $\\mathbf{v}(\\mathbf{x},t)$ can be estimated parametrically as $\\mathbf{v}_{\\theta}(\\mathbf{x},t)$ via the loss", + "bbox": [ + 212, + 479, + 785, + 508 + ], + "page_idx": 9 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} _ {\\mathrm {v}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\mathbf {v} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\dot {\\alpha} _ {t} \\mathbf {x} _ {*} - \\dot {\\sigma} _ {t} \\boldsymbol {\\varepsilon} \\| ^ {2} ] \\mathrm {d} t, \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 341, + 518, + 785, + 554 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "where $\\theta$ represents the Zigzag Mamba network that we described in the previous section, we adopt the linear path for training, due to its simplicity and relatively straight trajectory:", + "bbox": [ + 212, + 564, + 785, + 609 + ], + "page_idx": 9 + }, + { + "type": "equation", + "text": "\n$$\n\\alpha_ {t} = 1 - t, \\quad \\sigma_ {t} = t. \\tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 428, + 609, + 785, + 626 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "We note that any time-dependent weight can be included under the integrals in both (11) and (12). These weight factors play a crucial role in score-based models when $T$ becomes large [54, 55]. 
Thus, they provide a general form that considers both the time-dependent weight and the stochasticity.", + "bbox": [ + 212, + 633, + 785, + 696 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4 Experiment", + "text_level": 1, + "bbox": [ + 214, + 719, + 367, + 737 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.1 Dataset and Training Detail", + "text_level": 1, + "bbox": [ + 214, + 752, + 495, + 768 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Image Dataset. To explore the scalability in high resolution, we conduct experiments on the FacesHQ $1024 \\times 1024$ . The general dataset that we use for training and ablations is FacesHQ, a compilation of CelebA-HQ [110] and FFHQ [53], as employed in previous work such as [26, 28].", + "bbox": [ + 212, + 779, + 785, + 842 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 127 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/039b2851b49194a91ceadcafad76319c755eb2833a5a395c8dcb64819c471487.jpg", + "table_caption": [ + "Table 1: Ablation of Scanning Scheme Number. We evaluate various zigzag scanning schemes. Starting from a simple \"Sweep\" baseline, we consistently observe improvements as more schemes are implemented." + ], + "table_footnote": [], + "table_body": "
MultiModal-CelebA-256MultiModal-CelebA-512
FID5k ↓FDD5k ↓KID5k ↓FID5k ↓FDD5k ↓KID5k ↓
Sweep158.175.90.169162.3103.20.203
Zigzag-165.747.80.051121.078.00.113
Zigzag-254.745.50.04196.059.50.079
Zigzag-845.526.40.01134.929.50.023
", + "bbox": [ + 230, + 198, + 767, + 301 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Video Dataset. UCF101 dataset consists of 13,320 video clips, which are classified into 101 categories. The total length of these video clips is over 27 hours. All these videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of $320 \\times 240$ . We randomly sample continuous 16 frames and resize the frames to $256 \\times 256$ .", + "bbox": [ + 212, + 328, + 782, + 402 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Training Details. We uniformly use AdamW [72] optimizer with $1e - 4$ learning rate. For extracting latent features, we employ off-the-shelf VAE encoders. To mitigate computational costs, we adopted a mixed-precision training approach. Additionally, we applied gradient clipping with a threshold of 2.0 and a weight decay of 0.01 to prevent NaN occurrences during Mamba training. Most of our experiments were conducted on 4 A100 GPUs, with scalability exploration extended to 16 and 32 A100 GPUs. For sampling, we adopt the ODE sampling for speed consideration. For further details, please refer to the Appendix 8.8.", + "bbox": [ + 212, + 404, + 784, + 525 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.2 Ablation Study", + "text_level": 1, + "bbox": [ + 215, + 547, + 387, + 561 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/cb1be07444cafab03559328172dc24760403757ed13e7c7c7daadab899e761e7.jpg", + "table_caption": [ + "Table 2: Ablation about Position Embedding (PE) on unconditional CelebA dataset $(256^{2})$ . To better abate PE and eliminate the conditional signal's influence, we use an unconditional dataset." + ], + "table_footnote": [], + "table_body": "
FID/FDD ↓No PECosine PELearnable PE
VisionMamba [123]21.33/21.0018.47/19.9016.38/18.20
ZigMa14.27/18.0014.04/17.9113.32/17.40
", + "bbox": [ + 294, + 652, + 702, + 705 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Scan Scheme Ablation. We provide several important findings based on our ablation studies on MultiModal-CelebA dataset in various resolutions in Table 1. Firstly, switching the scanning scheme from sweep to zigzag led to some gains. Secondly, as we increased the zigzag scheme from 1 to 8, we saw consistent gains. This indicates that alternating the scanning scheme in various blocks can be beneficial. Finally, the relative gain between Zigzag-1 and Zigzag-8 is more prominent at higher resolutions ( $512 \\times 512$ , or longer sequence token number)", + "bbox": [ + 212, + 733, + 787, + 840 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/53bc43796d12c2f7e7e06e43eb902b0f63f951f7460839d0059c4e0db032d056.jpg", + "image_caption": [ + "(a) FPS v.s. Patch Number." + ], + "image_footnote": [], + "bbox": [ + 222, + 147, + 460, + 224 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/6d89946986831453e9d1a17ba75193683e812b5ef23bc2269e03b91b2d2a4f77.jpg", + "image_caption": [ + "(b) GPU Memory v.s. Patch Number." + ], + "image_footnote": [], + "bbox": [ + 514, + 146, + 754, + 224 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/c8c1fbb9e3be3d50e53a759e858d96d168a1a691796565422b0a1d96a507a810.jpg", + "image_caption": [ + "(c) Order Receptive Field v.s. GPU Memory." + ], + "image_footnote": [], + "bbox": [ + 228, + 267, + 467, + 349 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/748520714e1a920464774140e40b899cb47a9374c53f83117e91600e5bb580e3.jpg", + "image_caption": [ + "(d) Order Receptive Field v.s. FPS.", + "Figure 6: (a, b).GPU Memory usage and FPS between our method and transformer-based methods(U-VIT [9] and DiT [80]). (c). Order Receptive Field and GPU memory (d). Order Receptive Field and FPS. Order Receptive Field denotes how many scan paths we consider in our network design." + ], + "image_footnote": [], + "bbox": [ + 524, + 267, + 759, + 349 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "compared to lower resolutions ( $256 \\times 256$ , or shorter sequence token number), this shows the great potential and more efficient inductive-bias incorporation in longer sequence number.", + "bbox": [ + 212, + 464, + 784, + 508 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Ablation about Position Embedding. As shown in Table 2, the learnable embedding performs better than the Sinusoidal embedding, which in turn performs better than no position embedding. In various cases, our zigzag method surpasses the baselines. Notably, our performance remains almost unchanged whether we use the Sinusoidal position embedding or no position embedding. This suggests that our method can better incorporate spatial inductive-bias compared to our baseline. Finally, using the learnable position embedding provides further, albeit marginal, gains suggesting that better position embedding exists even within our zigzag scan scheme. We find that [79] shares the same conclusion as us in video-related tasks.", + "bbox": [ + 212, + 511, + 787, + 660 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Ablation study about the Network and FPS/GPU-Memory. 
In Figure 6 (a,b), we analyze the forward speed and GPU memory usage while varying the global patch dimensions from $32 \\times 32$ to $196 \\times 196$ . For the speed analysis, we report Frame Per Second (FPS) instead of FLOPS, as FPS provides a more explicit and appropriate evaluation of speed2. For simplicity, we uniformly apply the zigzag-1 Mamba scan scheme and use batch size=1 and patch size=1 on an A100 GPU with 80GB memory. It's worth noting that all methods share nearly identical parameter numbers for fair comparison. We primarily compare our method with two popular transformer-based Diffusion backbones, U-ViT [9] and DiT [80]. It is evident that our method achieves the best FPS and GPU", + "bbox": [ + 212, + 662, + 787, + 814 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 11 + }, + { + "type": "page_footnote", + "text": "2 https://github.com/state-spaces/mamba/issues/110#issuecomment-1916464012", + "bbox": [ + 217, + 824, + 764, + 839 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "utilization when gradually increasing the patching number. U-ViT demonstrates the worst performance, even exceeds the memory bounds when the patch number is 196. Surprisingly, DiT's GPU utilization is close to our method, which supports our backbone choice of DiT from a practical perspective.", + "bbox": [ + 212, + 146, + 787, + 207 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/4680e9454e521872956e603986c45c474974f695366fbab446c6d986d7f782d0.jpg", + "table_caption": [ + "Table 3: Main result on FacesHQ-1024 dataset with 4,094 tokens in latent space and $\\mathbf{bs} = \\mathbf{512}$ . Our method can outperform the baseline and can achieve even better results when the training scale is increased." + ], + "table_footnote": [], + "table_body": "
Method | FID5k↓ | FDD5k↓
VisionMamba [123] | 51.1 | 66.3
ZigMa | 37.8 | 50.5
ZigMa bs × 2 | 26.6 | 31.2
", + "bbox": [ + 222, + 353, + 486, + 421 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/eb6331090f8407c3cff999601c273569dfcedc56f997b1b76806fe12e95073a6.jpg", + "table_caption": [ + "Table 5: Transformer-based methods comparison on unconditional CelebA256." + ], + "table_footnote": [], + "table_body": "
Method | FID↓ | Memory (G)↓ | FLOPS (G)↓
U-ViT | 14.50 | 35.10 | 12.5
DiT | 14.64 | 29.20 | 5.5
ZigMa | 14.27 | 17.80 | 5.2
", + "bbox": [ + 225, + 481, + 480, + 536 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/8b2b757f5510637d9e945f15261b0a6600a3876f45f3173c99b4c4189453698b.jpg", + "table_caption": [ + "Table 4: Main Results on MS-COCO dataset with $\\mathrm{bs} = {256}$ . Our method consistently outperforms the baseline. ZigMa with 8 scans performs much better compared with the baseline." + ], + "table_footnote": [], + "table_body": "
Method | FID5k↓
Sweep | 195.1
Zigzag-1 | 73.1
VisionMamba [123] | 60.2
Zigzag-8 | 41.8
", + "bbox": [ + 557, + 345, + 754, + 426 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/58538241742d093dabfc56d6b2729acd18265eceac8e3add2b0a1e3f9368f047.jpg", + "table_caption": [ + "Table 6: Video Scan Scheme on UCF101 dataset with $\\mathrm{bs} = {32}$ ." + ], + "table_footnote": [], + "table_body": "
Method | Frame-FID5k↓ | FVD5k↓
Bidirection [123] | 256.1 | 320.2
3D Zigzag | 238.1 | 282.3
Our | 216.1 | 210.2
Bidirection [123] bs×4 | 146.2 | 201.1
ZigMa bs×4 | 121.2 | 140.1
", + "bbox": [ + 545, + 468, + 777, + 536 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Order Receptive Field. We propose a new concept in Mamba-based structure for multidimensional data. Given that various spatially-continuous zigzag paths may exist in multidimensional data, we introduce the term Order Receptive Field which denotes the number of zigzag paths explicitly employed in the network design.", + "bbox": [ + 212, + 580, + 787, + 657 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Ablation study about the Order Receptive Field and FPS/GPU-Memory. As depicted in Fig. 6 (c,d), Zigzag Mamba consistently maintains its GPU memory consumption and FPS rate, even with a gradually increasing Order Receptive Field. In contrast, our primary baseline, Parallel Mamba, along with variants like Bidirectional Mamba and Vision Mamba [70, 123], experience a consistent decrease in FPS due to increased parameters. Notably, Zigzag Mamba, with an Order Receptive Field of 8, can perform faster without altering parameters.", + "bbox": [ + 212, + 665, + 805, + 772 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Comparison with transformer-based methods. We show the result in Table 5 on unconditional generation task. Our method achieves performance comparable to Transformer-based methods, with significantly less memory consumption and fewer FLOPS.", + "bbox": [ + 212, + 779, + 787, + 839 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 767, + 114, + 784, + 126 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "4.3 Main Result", + "text_level": 1, + "bbox": [ + 215, + 146, + 364, + 159 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Main Result on $1024 \\times 1024$ FacesHQ. To elaborate on the scalability of our method within the Mamba and Stochastic Interpolant framework, we provide comparisons on a high-resolution dataset ( $1024 \\times 1024$ FacesHQ) in Table 3. Our primary comparison is against Bidirectional Mamba, a commonly used solution for applying Mamba to 2D image data [70, 123]. With the aim of investigating Mamba's scalability in large resolutions up to 1,024, we employ the diffusion model on the latent space of $128 \\times 128$ with a patch size of 2, resulting in 4,096 tokens. The network is trained on 16 A100 GPUs. Notably, our method demonstrates superior results compared to Bidirectional Mamba. Details regarding loss, FID curves, and visualization can be found in the Appendix. While constrained by GPU resource limitations, preventing longer training duration, we anticipate consistent outperformance of Bidirectional Mamba with extended training duration.", + "bbox": [ + 212, + 167, + 787, + 364 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "COCO dataset. To further compare the performance of our method, we also evaluate it on the more complex and common dataset MS COCO. We compare with the Bidirection Mamba as the baseline in Table 4. It should be noted that all methods share nearly identical parameter numbers for fair comparison. We trained all methods using 16 A100 GPUs. please check Appendix 8.8 for details. As depicted in Table 4, our Zigzag-8 method outperforms Bidirectional Mamba as well as Zigzag-1. 
This suggests that amortizing various scanning schemes can yield significant improvements, attributed to better incorporation of the inductive bias for 2D images in Mamba.", + "bbox": [ + 212, + 364, + 787, + 501 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "UCF101 dataset. In Table 6, we present our results on the UCF101 dataset, training all methods using 4 A100 GPUs, with further scalability exploration conducted using 16 A100 GPUs. We mainly compare our method consistently with Vision Mamba [123]. For the choice of the 3D Zigzag Mamba, please refer to Appendix 8.8. For Factorized 3D Zigzag Mamba in video processing, we deploy the sst scheme for factorizing spatial and temporal modeling. This scheme prioritizes spatial information complexity over temporal information, hypothesizing that redundancy exists in the temporal domain. Our results consistently demonstrate the superior performance of our method across various scenarios, underscoring the intricacy and effectiveness of our approach.", + "bbox": [ + 212, + 501, + 787, + 654 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 215, + 674, + 359, + 690 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In this paper, we present the Zigzag Mamba Diffusion Model, developed within the Stochastic Interpolant framework. Our initial focus is on addressing the critical issue of spatial continuity. We then devise a Zigzag Mamba block with heterogeneous layerwise scan to better utilize the inductive bias in 2D images. Further, we factorize the 3D Mamba into 2D and 1D Zigzag Mamba to facilitate optimization. We empirically design various ablation studies to examine different factors. This approach allows for a more in-depth exploration of the Stochastic Interpolant theory. We hope our endeavor can inspire further exploration in the Mamba network design.", + "bbox": [ + 212, + 703, + 787, + 840 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 217, + 143, + 401, + 162 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This project has been supported by the German Federal Ministry for Economic Affairs and Climate Action within the project \"NXT GEN AI METHODS - Generative Methoden für Perzeption, Prädiktion und Planung\", the bidt project KLIMA-MEMES, Bayer AG, and the German Research Foundation (DFG) project 421703927. The authors gratefully acknowledge the Gauss Center for Supercomputing for providing compute through the NIC on JUWELS at JSC and the HPC resources supplied by the Erlangen National High Performance Computing Center (NHR@FAU funded by DFG).", + "bbox": [ + 212, + 176, + 787, + 297 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 217, + 321, + 321, + 337 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "1. Agarwal, N., Suo, D., Chen, X., Hazan, E.: Spectral state space models. arXiv (2023) 28", + "2. Ahamed, M.A., Cheng, Q.: Mambatab: A simple yet effective approach for handling tabular data. arXiv (2024) 3, 28", + "3. Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E.: Stochastic interpolants: A unifying framework for flows and diffusions. 
arXiv (2023) 2, 4, 9, 10", + "4. Albergo, M.S., Vanden-Eijnden, E.: Building normalizing flows with stochastic interpolants. arXiv (2022) 2", + "5. Ali, A., Zimerman, I., Wolf, L.: The hidden attention of mamba models. arXiv (2024) 28", + "6. Anderson, B.D.: Reverse-time diffusion equation models. Stochastic Processes and their Applications (1982) 9", + "7. Anthony, Q., Tokpanov, Y., Glorioso, P., Millidge, B.: Blackmamba: Mixture of experts for state-space models. arXiv (2024) 28", + "8. Ao, S., Zhao, W., Han, X., Yang, C., Liu, Z., Shi, C., Sun, M., Wang, S., Su, T.: Burstattention: An efficient distributed attention framework for extremely long sequences. arXiv (2024) 2", + "9. Bao, F., Li, C., Cao, Y., Zhu, J.: All are worth words: a vit backbone for score-based diffusion models. CVPR (2023) 1, 3, 5, 12, 23", + "10. Bao, F., Nie, S., Xue, K., Li, C., Pu, S., Wang, Y., Yue, G., Cao, Y., Su, H., Zhu, J.: One transformer fits all distributions in multi-modal diffusion at scale. arXiv (2023) 1, 3, 6", + "11. Beck, M., Poppel, K., Spanring, M., Auer, A., Prudnikova, O., Kopp, M., Klambauer, G., Brandstetter, J., Hochreiter, S.: xlstm: Extended long short-term memory (2024) 22", + "12. Behrouz, A., Hashemi, F.: Graph mamba: Towards learning on graphs with state space models. arXiv (2024) 3, 28", + "13. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer. arXiv (2020) 1, 3", + "14. Ben-Hamu, H., Cohen, S., Bose, J., Amos, B., Grover, A., Nickel, M., Chen, R.T., Lipman, Y.: Matching normalizing flows and probability paths on manifolds. In: ICML (2022) 4", + "15. Brandon, W., Nrusimha, A., Qian, K., Ankner, Z., Jin, T., Song, Z., Ragan-Kelley, J.: Striped attention: Faster ring attention for causal transformers. arXiv preprint arXiv:2311.09431 (2023) 2" + ], + "bbox": [ + 225, + 353, + 785, + 839 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 767, + 116, + 784, + 126 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "16. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: CVPR (2021) 8", + "17. Chen, R.T., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. NeurIPS (2018) 2", + "18. Chen, S., Xu, M., Ren, J., Cong, Y., He, S., Xie, Y., Sinha, A., Luo, P., Xiang, T., Perez-Rua, J.M.: Gentron: Delving deep into diffusion transformers for image and video generation. arXiv (2023) 3, 6", + "19. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv (2019) 1", + "20. Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al.: Rethinking attention with performers. arXiv (2020) 2", + "21. Crowson, K., Baumann, S.A., Birch, A., Abraham, T.M., Kaplan, D.Z., Shippole, E.: Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers. arXiv (2024) 29", + "22. Dao, Q., Phung, H., Nguyen, B., Tran, A.: Flow matching in latent space. arXiv (2023) 4", + "23. Dao, T., Fu, D., Ermon, S., Rudra, A., Ré, C.: Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS (2022) 2, 3", + "24. 
Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A.P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al.: Scaling vision transformers to 22 billion parameters. In: ICML (2023) 3", + "25. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) 23, 27", + "26. Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021) 10", + "27. Fei, Z., Fan, M., Yu, C., Huang, J.: Scalable diffusion models with state space backbone. arXiv (2024) 3, 4, 28", + "28. Fischer, J.S., Gui, M., Ma, P., Stracke, N., Baumann, S.A., Ommer, B.: Boosting latent diffusion with flow matching. ECCV (2024) 4, 10", + "29. Fu, D.Y., Dao, T., Saab, K.K., Thomas, A.W., Rudra, A., Ré, C.: Hungry hungry hippos: Towards language modeling with state space models. arXiv (2022) 2", + "30. Fuest, M., Ma, P., Gui, M., Fischer, J.S., Hu, V.T., Ommer, B.: Diffusion models and representation learning: A survey. arXiv preprint arXiv:2407.00783 (2024) 1", + "31. Gong, H., Kang, L., Wang, Y., Wan, X., Li, H.: nnmamba: 3d biomedical image segmentation, classification and landmark detection with state space model. arXiv (2024) 28", + "32. Gong, J., Foo, L.G., Fan, Z., Ke, Q., Rahmani, H., Liu, J.: Diffpose: Toward more reliable 3d pose estimation. In: CVPR (2023) 1", + "33. Gu, A., Dao, T.: Mamba: Linear-time sequence modeling with selective state spaces. CoLM (2024) 2, 3, 4, 5", + "34. Gu, A., Goel, K., Gupta, A., Ré, C.: On the parameterization and initialization of diagonal state space models. NeurIPS (2022) 2, 4, 5", + "35. Gu, A., Goel, K., Ré, C.: Efficiently modeling long sequences with structured state spaces (2021) 2, 4, 5", + "36. Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., Ré, C.: Combining recurrent, convolutional, and continuous-time models with linear state space layers. NeurIPS (2021) 2, 5" + ], + "bbox": [ + 223, + 146, + 784, + 839 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "37. Gui, M., Fischer, J.S., Prestel, U., Ma, P., Kotovenko, D., Grebenkova, O., Baumann, S.A., Hu, V.T., Ommer, B.: Depthfm: Fast monocular depth estimation with flow matching. arXiv preprint arXiv:2403.13788 (2024) 4", + "38. Guo, H., Li, J., Dai, T., Ouyang, Z., Ren, X., Xia, S.T.: Mambair: A simple baseline for image restoration with state-space model. arXiv (2024) 3, 28", + "39. Gupta, A., Gu, A., Berant, J.: Diagonal state spaces are as effective as structured state spaces. NeurIPS (2022) 2, 4, 5", + "40. He, W., Han, K., Tang, Y., Wang, C., Yang, Y., Guo, T., Wang, Y.: Densemamba: State space models with dense hidden connection for efficient large language models. arXiv (2024) 28", + "41. He, X., Cao, K., Yan, K., Li, R., Xie, C., Zhang, J., Zhou, M.: Pan-mamba: Effective pan-sharpening with state space model. arXiv (2024) 28", + "42. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv (2022) 8", + "43. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
In: NeurIPS (2020) 2, 3, 4", + "44. Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., Fleet, D.J.: Video diffusion models. In: ARXIV (2022) 1", + "45. Hu, V.T., Chen, Y., Caron, M., Asano, Y.M., Snoek, C.G., Ommer, B.: Guided diffusion from self-supervised diffusion features. In: ARXIV (2023) 1", + "46. Hu, V.T., Wu, D., Asano, Y., Mettes, P., Fernando, B., Ommer, B., Snoek, C.: Flow matching for conditional text generation in a few sampling steps pp. 380-392 (2024) 4", + "47. Hu, V.T., Yin, W., Ma, P., Chen, Y., Fernando, B., Asano, Y.M., Gavves, E., Mettes, P., Ommer, B., Snoek, C.G.: Motion flow matching for human motion synthesis and editing. In: ARXIV (2023) 4", + "48. Hu, V.T., Zhang, D.W., Asano, Y.M., Burghouts, G.J., Snoek, C.G.M.: Self-guided diffusion models. In: CVPR (2023) 1", + "49. Hu, V.T., Zhang, D.W., Mettes, P., Tang, M., Zhao, D., Snoek, C.G.: Latent space editing in transformer-based flow matching. In: ICML 2023 Workshop, New Frontiers in Learning, Control, and Dynamical Systems (2023) 4", + "50. Huang, Z., Zhou, P., Yan, S., Lin, L.: Scalelong: Towards more stable training of diffusion model via scaling network long skip connection. NeurIPS (2024) 1", + "51. Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650 (2021) 29", + "52. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. In: NeurIPS (2022) 4", + "53. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR (2019) 10", + "54. Kingma, D., Salimans, T., Poole, B., Ho, J.: Variational diffusion models. In: NeurIPS (2021) 10", + "55. Kingma, D.P., Gao, R.: Understanding the diffusion objective as a weighted integral of ellb. arXiv (2023) 10", + "56. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer. arXiv (2020) 1", + "57. Lee, S., Kim, B., Ye, J.C.: Minimizing trajectory curvature of ode-based generative models. ICML (2023) 4", + "58. Li, K., Li, X., Wang, Y., He, Y., Wang, Y., Wang, L., Qiao, Y.: Videomamba: State space model for efficient video understanding. ECCV (2024) 3" + ], + "bbox": [ + 225, + 146, + 784, + 839 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "59. Li, S., Singh, H., Grover, A.: Mamba-nd: Selective state space modeling for multidimensional data. arXiv (2024) 3, 28, 29", + "60. Li, Y., Bornschein, J., Chen, T.: Denoising autoregressive representation learning. arXiv preprint arXiv:2403.05196 (2024) 29", + "61. Liang, D., Zhou, X., Wang, X., Zhu, X., Xu, W., Zou, Z., Ye, X., Bai, X.: Pointmamba: A simple state space model for point cloud analysis. arXiv preprint arXiv:2402.10739 (2024) 3, 27, 28", + "62. Lin, B., Jiang, W., Chen, P., Zhang, Y., Liu, S., Chen, Y.C.: Mtmamba: Enhancing multi-task dense scene understanding by mamba-based decoders. ECCV (2024) 3", + "63. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 30", + "64. Lipman, Y., Chen, R.T., Ben-Hamu, H., Nickel, M., Le, M.: Flow matching for generative modeling. ICLR (2023) 2, 4", + "65. 
Liu, G.H., Chen, T., So, O., Theodorou, E.: Deep generalized schrödinger bridge. NeurIPS (2022) 2", + "66. Liu, H., Zaharia, M., Abbeel, P.: Ring attention with blockwise transformers for near-infinite context. arXiv (2023) 2", + "67. Liu, J., Yang, H., Zhou, H.Y., Xi, Y., Yu, L., Yu, Y., Liang, Y., Shi, G., Zhang, S., Zheng, H., et al.: Swin-umamba: Mamba-based unet withImagenet-based pretraining. arXiv (2024) 2, 6, 7", + "68. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv (2022) 4", + "69. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR (2023) 2", + "70. Liu, Y., Tian, Y., Zhao, Y., Yu, H., Xie, L., Wang, Y., Ye, Q., Liu, Y.: Vmamba: Visual state space model. arXiv (2024) 2, 3, 5, 6, 7, 13, 14, 28, 29", + "71. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021) 1", + "72. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019) 11", + "73. Ma, J., Li, F., Wang, B.: U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv (2024) 2, 3, 28", + "74. Ma, N., Goldstein, M., Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E., Xie, S.: Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv (2024) 2, 4", + "75. McKenna, D.M.: Hilbert curves: Outside-in and inside-gone. Mathemaesthetics, Inc (2019) 7, 26", + "76. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: ECCV (2016) 6", + "77. Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., Ré, C.: S4nd: Modeling images and videos as multidimensional signals with state spaces. NeurIPS (2022) 3, 28, 29", + "78. OpenAI: Sora: Creating video from text (2024), https://openai.com/sora 1, 6", + "79. Park, J., Kim, H.S., Ko, K., Kim, M., Kim, C.: Videomamba: Spatio-temporal selective state space model. ECCV (2024) 3, 12", + "80. Peebles, W., Xie, S.: Scalable diffusion models with transformers. arXiv (2022) 1, 3, 5, 12, 23" + ], + "bbox": [ + 223, + 146, + 784, + 839 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "81. Peng, B., Goldstein, D., Anthony, Q., Albalak, A., Alcaide, E., Biderman, S., Cheah, E., Ferdinan, T., Hou, H., Kazienko, P., et al.: Eagle and finch: Rwkv with matrix-valued states and dynamic recurrence. arXiv preprint arXiv:2404.05892 (2024) 22", + "82. Qin, Z., Yang, S., Sun, W., Shen, X., Li, D., Sun, W., Zhong, Y.: Hgrn2: Gated linear rnns with state expansion. arXiv preprint arXiv:2404.07904 (2024) 22", + "83. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) 30", + "84. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) 1, 3, 30", + "85. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI (2015) 6", + "86. 
Ruan, J., Xiang, S.: Vm-unet: Vision mamba unet for medical image segmentation. arXiv (2024) 3, 28", + "87. Skorokhodov, I., Sotnikov, G., Elhoseiny, M.: Aligning latent and image spaces to connect the unconnectable. In: ICCV (2021) 34", + "88. Smith, J.T., Warrington, A., Linderman, S.W.: Simplified state space layers for sequence modeling. arXiv (2022) 2", + "89. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: ICML (2015) 2", + "90. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. arXiv (2019) 4", + "91. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. In: ICLR (2021) 2, 4, 9, 10", + "92. Stein, G., Cresswell, J., Hosseinzadeh, R., Sui, Y., Ross, B., Villecloze, V., Liu, Z., Caterini, A.L., Taylor, E., Loaiza-Ganem, G.: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. NeurIPS (2023) 29", + "93. Sun, Z., Yang, Y., Yoo, S.: Sparse attention with learning to hash. In: ICLR (2021) 2", + "94. Tang, R., Liu, L., Pandey, A., Jiang, Z., Yang, G., Kumar, K., Stenetorp, P., Lin, J., Ture, F.: What the daam: Interpreting stable diffusion using cross attention. arXiv (2022) 8", + "95. Tikochinski, R., Goldstein, A., Meiri, Y., Hasson, U., Reichart, R.: An incremental large language model for long text processing in the brain (2024) 2", + "96. Tong, A., Malkin, N., Fatras, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., Bengio, Y.: Simulation-free schr\\'' odinger bridges via score and flow matching. arXiv (2023) 9", + "97. Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., Gelly, S.: Fvd: A new metric for video generation. ICLR Workshop (2019) 30", + "98. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 27", + "99. Wang, C., Tsepa, O., Ma, J., Wang, B.: Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv (2024) 28", + "00. Wang, J., Gangavarapu, T., Yan, J.N., Rush, A.M.: Mambabyte: Token-free selective state space model. arXiv (2024) 3, 28", + "01. Wang, J., Yan, J.N., Gu, A., Rush, A.M.: Pretraining without attention. arXiv (2022) 6" + ], + "bbox": [ + 225, + 146, + 785, + 839 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 127 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 767, + 114, + 785, + 126 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "102. Wang, S., Li, Q.: Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization. arXiv (2023) 2, 28", + "103. Wang, S., Xue, B.: State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. NeurIPS (2024) 2, 28", + "104. Wang, W., Ma, S., Xu, H., Usuyama, N., Ding, J., Poon, H., Wei, F.: When an image is worth 1,024 x 1,024 words: A case study in computational pathology. arXiv (2023) 3", + "105. Wang, X., Wang, S., Ding, Y., Li, Y., Wu, W., Rong, Y., Kong, W., Huang, J., Li, S., Yang, H., Wang, Z., Jiang, B., Li, C., Wang, Y., Tian, Y., Tang, J.: State space model for new-generation network alternative to transformers: A survey (2024) 3", + "106. 
Wang, X., Kang, Z., Mu, Y.: Text-controlled motion mamba: Text-instructed temporal grounding of human motion. arXiv preprint arXiv:2404.11375 (2024) 3", + "107. Wang, Z., Ma, C.: Semi-mamba-unet: Pixel-level contrastive cross-supervised visual mamba-based unet for semi-supervised medical image segmentation. arXiv (2024) 28", + "108. Wang, Z., Zheng, J.Q., Zhang, Y., Cui, G., Li, L.: Mamba-unet: Unet-like pure visual mamba for medical image segmentation. arXiv (2024) 3, 28", + "109. Wu, L., Wang, D., Gong, C., Liu, X., Xiong, Y., Ranjan, R., Krishnamoorthi, R., Chandra, V., Liu, Q.: Fast point cloud generation with straight flows. In: CVPR (2023) 1", + "110. Xia, W., Yang, Y., Xue, J.H., Wu, B.: Tedigan: Text-guided diverse face image generation and manipulation. In: CVPR (2021) 10, 30", + "111. Xing, Z., Ye, T., Yang, Y., Liu, G., Zhu, L.: Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation. arXiv (2024) 3, 28", + "112. Yan, J.N., Gu, J., Rush, A.M.: Diffusion models without attention. arXiv (2023) 4, 6", + "113. Yang, S., Wang, B., Shen, Y., Panda, R., Kim, Y.: Gated linear attention transformers with hardware-efficient training. ICML (2024) 22", + "114. Yang, S., Zhang, Y.: Fla: A triton-based library for hardware-efficient implementations of linear attention mechanism (Jan 2024), https://github.com/sustcsonglin/flashlinear-attention_22", + "115. Yang, Y., Xing, Z., Zhu, L.: Vivim: a video vision mamba for medical video object segmentation. arXiv (2024) 6", + "116. Yu, A., Nigmatov, A., Morozov, D., Mahoney, M.W., Erichson, N.B.: Robustifying state-space models for long sequences via approximate diagonalization. arXiv (2023) 2", + "117. Yu, S., Sohn, K., Kim, S., Shin, J.: Video probabilistic diffusion models in projected latent space. In: CVPR (2023) 30", + "118. Zhang, T., Li, X., Yuan, H., Ji, S., Yan, S.: Point could mamba: Point cloud learning via state space model. arXiv (2024) 28", + "119. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: CVPR (2018) 29", + "120. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. ECCV (2024) 3", + "121. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. arXiv (2024) 28", + "122. Zheng, Z., Wu, C.: U-shaped vision mamba for single image dehazing. arXiv (2024) 3, 28" + ], + "bbox": [ + 217, + 146, + 784, + 839 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 217, + 114, + 235, + 126 + ], + "page_idx": 19 + }, + { + "type": "header", + "text": "Hu et al.", + "bbox": [ + 271, + 114, + 331, + 126 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "123. Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., Wang, X.: Vision mamba: Efficient visual representation learning with bidirectional state space model. ICML (2024) 2, 3, 5, 7, 11, 13, 14, 28", + "124. zhuzilin: Ring flash attention. 
https://github.com/zhuzilin/ring-flash-attention (2024) 2" + ], + "bbox": [ + 215, + 146, + 787, + 214 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "ZigMa", + "bbox": [ + 684, + 114, + 730, + 128 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 767, + 114, + 782, + 126 + ], + "page_idx": 20 + } +] \ No newline at end of file diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_model.json b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..fa40e3643d06b88e2766ba4a0983f4a1d5fedbb8 --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_model.json @@ -0,0 +1,3410 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.221, + 0.145, + 0.273, + 0.183 + ], + "angle": 0, + "content": "#" + }, + { + "type": "header", + "bbox": [ + 0.274, + 0.16, + 0.783, + 0.202 + ], + "angle": 0, + "content": "ZigMa: A DiT-style Zigzag Mamba Diffusion Model" + }, + { + "type": "text", + "bbox": [ + 0.245, + 0.232, + 0.758, + 0.263 + ], + "angle": 0, + "content": "Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, and Björn Ommer" + }, + { + "type": "text", + "bbox": [ + 0.384, + 0.274, + 0.618, + 0.303 + ], + "angle": 0, + "content": "CompVis @ LMU Munich, MCML https://compvis.github.io/zigma/" + }, + { + "type": "text", + "bbox": [ + 0.262, + 0.346, + 0.744, + 0.568 + ], + "angle": 0, + "content": "Abstract The diffusion model has long been plagued by scalability and quadratic complexity issues, especially within transformer-based structures. In this study, we aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation. Firstly, we identify a critical oversight in most current Mamba-based vision methods, namely the lack of consideration for spatial continuity in the scan scheme of Mamba. Secondly, building upon this insight, we introduce Zigzag Mamba, a simple, plug-and-play, minimal-parameter burden, DiT style solution, which outperforms Mamba-based baselines and demonstrates improved speed and memory utilization compared to transformer-based baselines, also this heterogeneous layerwise scan enables zero memory and speed burden when we consider more scan paths. Lastly, we integrate Zigzag Mamba with the Stochastic Interpolant framework to investigate the scalability of the model on large-resolution visual datasets, such as FacesHQ \\(1024 \\times 1024\\) and UCF101, MultiModal-CelebA-HQ, and MS COCO \\(256 \\times 256\\)." + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.581, + 0.741, + 0.61 + ], + "angle": 0, + "content": "Keywords: Diffusion Model \\(\\cdot\\) State-Space Model \\(\\cdot\\) Stochastic Interpolants" + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.639, + 0.376, + 0.654 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.788, + 0.844 + ], + "angle": 0, + "content": "Diffusion models have demonstrated significant advancements across various applications, including image processing [45, 48, 84], video analysis [44], point cloud processing [109], representation learning [30] and human pose estimation [32]. Many of these models are built upon Latent Diffusion Models (LDM) [84], which are typically based on the UNet backbone. 
However, scalability remains a significant challenge in LDMs [50]. Recently, transformer-based structures have gained popularity due to their scalability [9, 80] and effectiveness in multi-modal training [10]. Notably, the transformer-based structure DiT [80] has even contributed to enhancing the high-fidelity video generation model SORA [78] by OpenAI. Despite efforts to alleviate the quadratic complexity of the attention mechanism through techniques such as windowing [71], sliding [13], sparsification [19, 56]," + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "2" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.128 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.146, + 0.784, + 0.176 + ], + "angle": 0, + "content": "- hashing [20, 93], Ring Attention [15, 66], Flash Attention [23] or a combination of them [8, 124], it remains a bottleneck for diffusion models." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.449 + ], + "angle": 0, + "content": "On the other hand, State-Space Models [34, 35, 39] have demonstrated significant potential for long sequence modeling, rivaling transformer-based methods. Their biological similarity [95] and efficient memory state also advocate for the use of the State-Space model over the transformer. Several methods [29, 33, 35, 88] have been proposed to enhance the robustness [116], scalability [33], and efficiency [35, 36] of State-Space Models. Among these, a method called Mamba [33] aims to alleviate these issues through work-efficient parallel scanning and other data-dependent innovations. However, the advantage of Mamba lies in 1D sequence modeling, and extending it to 2D images is a challenging question. Previous works [70, 123] have proposed flattening 2D tokens directly by computer hierarchy such as row-and-column-major order, but this approach neglects Spatial Continuity, as shown in Figure 1. Other works [67, 73] consider various directions in a single Mamba block, but this introduces additional parameters and GPU memory burden. In this paper, we aim to emphasize the importance of Spatial Continuity in Mamba and propose several intuitive and simple methods to enable the application of Mamba blocks to 2D images by incorporating continuity-based inductive biases in images. We also generalize these methods to 3D with spatial-temporal factorization on 3D sequence." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.45, + 0.788, + 0.54 + ], + "angle": 0, + "content": "In the end, Stochastic Interpolant [3] provides a more generalized framework that can uniform various generative models including, Normalizing Flow [17], diffusion model [43,89,91], Flow matching [4,64,69], and Schrödinger Bridge [65]. Previously, some works [74] explore the Stochastic Interpolant on relatively small resolutions, e.g., \\(256 \\times 256\\), \\(512 \\times 512\\). In this work, we aim to explore it in further more complex scenarios e.g., \\(1024 \\times 1024\\) resolution and even in videos." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.54, + 0.788, + 0.722 + ], + "angle": 0, + "content": "In summary, our contributions are as follows: Firstly, we identify the critical issue of Spatial Continuity in generalizing the Mamba block from 1D sequence modeling to 2D image and 3D video modeling. 
Building on this insight, we propose a simple, plug-and-play, zero-parameter heterogeneous layerwise scan paradigm named Zigzag Mamba (ZigMa) that leverages spatial continuity to maximally incorporate the inductive bias from visual data. Secondly, we extend the methodology from 2D to 3D by factorizing the spatial and temporal sequences to optimize performance. Secondly, we provide comprehensive analysis surrounding the Mamba block within the regime of diffusion models. Lastly, we demonstrate that our designed Zigzag Mamba outperforms related Mamba-based baselines, representing the first exploration of Stochastic Interpolants on large-scale image data \\((1024\\times 1024)\\) and videos." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.746, + 0.398, + 0.763 + ], + "angle": 0, + "content": "2 Related Works" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.794, + 0.842 + ], + "angle": 0, + "content": "Mamba. Several works [102, 103, 103] have demonstrated that the State-Space Model possesses universal approximation ability under certain conditions. Mamba, as a new State-Space Model, has superior potential for modeling long sequences efficiently, which has been explored in various fields such as medical imag-" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.787, + 0.128 + ], + "angle": 0, + "content": "3" + }, + { + "type": "image", + "bbox": [ + 0.309, + 0.146, + 0.7, + 0.384 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.216, + 0.397, + 0.788, + 0.425 + ], + "angle": 0, + "content": "Figure 1: Motivation. Our Zigzag Mamba method improves the network's position-awareness by arranging and rearranging the scan path of Mamba in a heuristic manner." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.46, + 0.788, + 0.732 + ], + "angle": 0, + "content": "ing [73, 86, 108, 111], video [58, 79], image restoration [38, 122], graphs [12], NLP word byte [100], tabular data [2], point clouds [61], human motion [106, 120], multi-task [62] and image generation [27]. Among them, the most related to us are VisionMamba [70, 123], S4ND [77] and Mamba-ND [59]. VisionMamba [70, 123] uses a bidirectional SSM in discriminative tasks which incurs a high computational cost. Our method applies a simple alternative mamba diffusion in generative models. S4ND [77] introduces local convolution into Mamba's reasoning process, moving beyond the use of only 1D data. Mamba-ND [59] takes multi-dimensionality into account in discriminative tasks, making use of various scans within a single block. In contrast, our focus is on distributing scan complexity across every layer of the network, thus maximizing the incorporation of inductive bias from visual data with zero parameter burden. Scan curve is an important direction in SSM, PointMamba [61] is a representative work that employs SSM with space curves (e.g., Hilbert) for point cloud analysis, achieving remarkable performance. In contrast with them, our preliminary results show that the Hilbert curve doesn't work well with our method (see Appendix), while our method can be regarded as the simplest Peano curve. For more information related to Mamba's work, please refer to the survey [105]." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.788, + 0.842 + ], + "angle": 0, + "content": "Backbones in Diffusion Models. 
Diffusion models primarily employ UNet-based [43, 84] and ViT-based [9, 80] backbones. While UNet is known for high memory demands [84], ViT benefits from scalability [18, 24] and multi-modal learning [10]. However, ViT's quadratic complexity limits visual token processing, prompting studies towards mitigating this issue [13, 23, 104]. Our work, inspired by Mamba [33], explores an SSM-based model as a generic diffusion backbone, retaining ViT's modality-agnostic and sequential modeling advantages." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "4" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.146, + 0.788, + 0.239 + ], + "angle": 0, + "content": "Concurrently, DiffSSM [112] concentrates on unconditional and class conditioning within the S4 model [35]. DIS [27] mainly explores the state-space model on a relatively small resolution, which is not the exact focus of our work. Our work significantly differs from theirs as it primarily focuses on the backbone design using the Mamba block and extends it to text conditioning. Furthermore, we apply our method to more complex visual data." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.242, + 0.79, + 0.575 + ], + "angle": 0, + "content": "SDE and ODE in Diffusion models. The realm of Score-based Generative Models encompasses significant contributions from foundational works such as Score Matching with Langevin Dynamics (SMLD) by Song et al. [90], and the advent of Diffusion Models with Denoising Score Matching (DDPMs) proposed by Ho et al. [43]. These methodologies operate within the framework of Stochastic Differential Equations (SDEs), a concept further refined in the research of Song et al. [91]. Recent research strides, as exemplified by Karras et al. [52] and Lee et al. [57], have showcased the efficacy of employing Ordinary Differential Equation (ODE) samplers for diffusion SDEs, offering significant reductions in sampling costs compared to traditional approaches that entail discretizing diffusion SDEs. Furthermore, within the domain of Flow Matching [64] and Rectified Flow [68], both SMLD and DDPMs emerge as specialized instances under distinct paths of the Probability Flow ODE framework [91], with broad applications in vision [22,28,49], depth [37], human motion [47], even language [46]. These models typically utilize velocity field parameterizations employing the linear interpolant, a concept that finds broader applications in the Stochastic Interpolant framework [3], with subsequent generalizations extending to manifold settings [14]. The SiT model [74] scrutinizes the interplay between interpolation methods in both sampling and training contexts, albeit in the context of smaller resolutions such as \\(512 \\times 512\\). Our research endeavors to extend these insights to a larger scale, focusing on the generalization capabilities for 2D images of \\(1024 \\times 1024\\) and 3D video data." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.62, + 0.331, + 0.636 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.674, + 0.789, + 0.842 + ], + "angle": 0, + "content": "In this section, we begin by providing background information on State-Space Models [34,35,39], with a particular focus on a special case known as Mamba [33]. 
We then highlight the critical issue of Spatial Continuity within the Mamba framework, and based on this insight, we propose the Zigzag Mamba. This enhancement aims to improve the efficiency of 2D data modeling by incorporating the continuity inductive bias inherent in 2D data. Furthermore, we design a basic cross-attention block upon Mamba block to achieve text-conditioning. Subsequently, we suggest extending this approach to 3D video data by factorizing the model into spatial and temporal dimensions, thereby facilitating the modeling process. Finally, we introduce the theoretical aspects of stochastic interpolation for training and sampling, which underpin our network architecture." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "5" + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.147, + 0.533, + 0.162 + ], + "angle": 0, + "content": "3.1 Background: State-Space Models" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.172, + 0.784, + 0.217 + ], + "angle": 0, + "content": "State Space Models (SSMs) [34, 35, 39] have been proven to handle long-range dependencies theoretically and empirically [36] with linear scaling w.r.t sequence length. In their general form, a linear state space model can be written as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.397, + 0.227, + 0.597, + 0.244 + ], + "angle": 0, + "content": "\\[\nx ^ {\\prime} (t) = \\mathbf {A} (t) x (t) + \\mathbf {B} (t) u (t)\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.405, + 0.246, + 0.603, + 0.264 + ], + "angle": 0, + "content": "\\[\ny (t) = \\mathbf {C} (t) x (t) + \\mathbf {D} (t) u (t),\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.275, + 0.785, + 0.35 + ], + "angle": 0, + "content": "mapping a 1-D input sequence \\( u(t) \\in \\mathbb{R} \\) to a 1-D output sequence \\( y(t) \\in \\mathbb{R} \\) through an implicit N-D latent state sequence \\( x(t) \\in \\mathbb{R}^n \\). Concretely, deep SSMs seek to use stacks of this simple model in a neural sequence modeling architecture, where the parameters \\( \\mathbf{A}, \\mathbf{B}, \\mathbf{C} \\) and \\( \\mathbf{D} \\) for each layer can be learned via gradient descent." + }, + { + "type": "image", + "bbox": [ + 0.273, + 0.387, + 0.726, + 0.48 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.498, + 0.784, + 0.581 + ], + "angle": 0, + "content": "Figure 2: ZigMa. Our backbone is structured in L layers, mirroring the style of DiT [80]. We use the single-scan Mamba block as the primary reasoning module across different patches. To ensure the network is positionally aware, we've designed an arrange-rearrange scheme based on the single-scan Mamba. Different layers follow pairs of unique rearrange operation \\(\\Omega\\) and reverse rearrange \\(\\bar{\\Omega}\\), optimizing the position-awareness of the method." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.612, + 0.785, + 0.763 + ], + "angle": 0, + "content": "Recently, Mamba [33] largely improved the flexibility of SSMs in Language Modelling by relaxing the time-invariance constraint on SSM parameters, while maintaining computational efficiency. Several studies [70, 123] have been conducted to adapt the use of Mamba from unidimensional language data to multidimensional visual data. 
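For readers who prefer code to the state-space equations above, the following minimal NumPy sketch unrolls a time-invariant linear SSM over a 1-D sequence using a simple forward-Euler discretization; it illustrates only the generic recurrence, not the selective, data-dependent scan used by Mamba, and all names and the discretization choice are illustrative assumptions.

```python
import numpy as np

def ssm_scan(u, A, B, C, D, dt=1.0):
    """Unroll a discretized linear SSM y = C x + D u over a 1-D sequence.

    Illustrative forward-Euler discretization of x'(t) = A x + B u,
    y(t) = C x + D u; not the selective (data-dependent) Mamba scan.
    """
    n = A.shape[0]
    Ad = np.eye(n) + dt * A          # first-order approximation of expm(dt * A)
    Bd = dt * B
    x = np.zeros(n)
    ys = []
    for u_k in u:                    # sequential scan, linear in sequence length
        x = Ad @ x + Bd * u_k
        ys.append(C @ x + D * u_k)
    return np.array(ys)

# Toy usage: a 4-dimensional latent state driven by a random length-16 sequence.
rng = np.random.default_rng(0)
A = -np.diag(rng.uniform(0.1, 1.0, 4))   # stable diagonal state matrix
B = rng.standard_normal(4)
C = rng.standard_normal(4)
D = 0.0
y = ssm_scan(rng.standard_normal(16), A, B, C, D, dt=0.1)
print(y.shape)  # (16,)
```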
While most of these studies try to duplicate the A to facilitate the new (reversed) direction, this approach can lead to additional parameters and an increased memory burden. In this paper, we focus on exploring the scanning scheme of Mamba in diffusion models to efficiently maximize the use of inductive-bias from multi-dimensional visual data with zero parameter and memory burden." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.785, + 0.553, + 0.801 + ], + "angle": 0, + "content": "3.2 Diffusion Backbone: Zigzag Mamba" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.784, + 0.84 + ], + "angle": 0, + "content": "DiT-Style Network. We opt to use the framework of DiT by AdaLN [80] rather than the skip-layer focused U-ViT structure [9], as DiT has been validated as a" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "6" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.128 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.237 + ], + "angle": 0, + "content": "scalable structure in literature [10, 18, 78]. Additionally, the Hourglass structure with downsampling [76, 85] requires selecting the depth and width based on the complexity of the dataset and task. This requirement limits the flexibility of the solution. Considering the aforementioned points, it informs our Mamba network design depicted in Figure 4. The core component of this design is the Zigzag Scanning, which will be explained in the following paragraph." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.238, + 0.788, + 0.388 + ], + "angle": 0, + "content": "Zigzag Scanning in Mamba. Previous studies [101, 112] have used bidirectional scanning within the SSM framework. This approach has been expanded to include additional scanning directions [67, 70, 115] to account for the characteristics of 2D image data. These approaches unfold image patches along four directions, resulting in four distinct sequences. Each of these sequences is subsequently processed together through every SSM. However, since each direction may have different SSM parameters (A, B, C, and D), scaling up the number of directions could potentially lead to memory issues. In this work, we investigate the potential for amortizing the complexity of the Mamba into each layer of the network." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.389, + 0.787, + 0.449 + ], + "angle": 0, + "content": "Our approach centers around the concept of token rearrangement before feeding them into the Forward Scan block. 
For a given input feature \\(\\mathbf{z}_i\\) from layer \\(i\\), the output feature \\(\\mathbf{z}_{i + 1}\\) of the Forward Scan block after the rearrangement can be expressed as:" + }, + { + "type": "equation", + "bbox": [ + 0.42, + 0.457, + 0.786, + 0.473 + ], + "angle": 0, + "content": "\\[\n\\mathbf {z} _ {\\Omega_ {i}} = \\operatorname {a r r a n g e} \\left(\\mathbf {z} _ {i}, \\Omega_ {i}\\right), \\tag {1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.42, + 0.476, + 0.786, + 0.492 + ], + "angle": 0, + "content": "\\[\n\\bar {\\mathbf {z}} _ {\\Omega_ {i}} = \\operatorname {s c a n} \\left(\\mathbf {z} _ {\\Omega_ {i}}\\right), \\tag {2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.415, + 0.494, + 0.786, + 0.511 + ], + "angle": 0, + "content": "\\[\n\\mathbf {z} _ {i + 1} = \\operatorname {a r r a n g e} \\left(\\bar {\\mathbf {z}} _ {\\Omega_ {i}}, \\bar {\\Omega} _ {i}\\right), \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.215, + 0.517, + 0.787, + 0.562 + ], + "angle": 0, + "content": "\\(\\varOmega_{i}\\) represents the 1D permutation of layer \\(i\\), which rearranges the order of the patch tokens by \\(\\varOmega_{i}\\), and \\(\\varOmega_{i}\\) and \\(\\overline{\\varOmega}_{i}\\) represent the reverse operation. This ensures that both \\(\\mathbf{z}_i\\) and \\(\\mathbf{z}_{i + 1}\\) maintain the sample order of the original image tokens." + }, + { + "type": "image", + "bbox": [ + 0.252, + 0.6, + 0.348, + 0.674 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.256, + 0.675, + 0.343, + 0.687 + ], + "angle": 0, + "content": "(a) sweep-scan" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.6, + 0.465, + 0.674 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.372, + 0.675, + 0.462, + 0.687 + ], + "angle": 0, + "content": "(b) zigzag-scan" + }, + { + "type": "image", + "bbox": [ + 0.476, + 0.585, + 0.543, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.55, + 0.586, + 0.616, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.623, + 0.586, + 0.69, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.699, + 0.586, + 0.764, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.53, + 0.689, + 0.71, + 0.701 + ], + "angle": 0, + "content": "(c) zigzag-scan with 8 schemes" + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.713, + 0.788, + 0.784 + ], + "angle": 0, + "content": "Figure 3: The 2D Image Scan. Our mamba scan design is based on the sweep-scan scheme shown in subfigure (a). From this, we developed a zigzag-scan scheme displayed in subfigure (b) to enhance the continuity of the patches, thereby maximizing the potential of the Mamba block. Since there are several possible arrangements for these continuous scans, we have listed the eight most common zigzag-scans in subfigure (c)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.81, + 0.787, + 0.842 + ], + "angle": 0, + "content": "Now we explore the design of the \\(\\Omega_{i}\\) operation, considering additional inductive biases from 2D images. 
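To make the arrange-scan-rearrange pattern of Eqs. (1)-(3) concrete, the PyTorch sketch below builds one spatially continuous zigzag ordering of an N x N patch grid (as in Figure 3), permutes the token sequence, applies a placeholder 1-D scan, and inverts the permutation; the helper names and the stand-in scan are illustrative assumptions rather than the paper's implementation.

```python
import torch

def zigzag_order(n: int) -> torch.Tensor:
    """Boustrophedon (zigzag) ordering of an n x n patch grid.

    Even rows run left-to-right, odd rows right-to-left, so consecutive
    scan positions are always spatially adjacent (spatial continuity)
    and the path visits all n*n patches (space-filling).
    """
    idx = torch.arange(n * n).view(n, n)
    idx[1::2] = idx[1::2].flip(-1)       # reverse every other row
    return idx.flatten()

def zigzag_layer(z: torch.Tensor, omega: torch.Tensor, scan_fn) -> torch.Tensor:
    """Eqs. (1)-(3): arrange tokens by omega, scan, then restore image order."""
    omega_inv = torch.argsort(omega)     # inverse permutation (the "rearrange")
    z_perm = z[:, omega]                 # (1) arrange
    z_scanned = scan_fn(z_perm)          # (2) any 1-D sequence model, e.g. a Mamba block
    return z_scanned[:, omega_inv]       # (3) rearrange back to the original token order

# Toy usage on a 4x4 grid of 8-dim tokens, with a cumulative sum as a stand-in scan.
B, n, d = 2, 4, 8
tokens = torch.randn(B, n * n, d)
omega = zigzag_order(n)
out = zigzag_layer(tokens, omega, scan_fn=lambda s: torch.cumsum(s, dim=1))
print(out.shape)  # torch.Size([2, 16, 8])
```

With the eight schemes of Figure 3 precomputed as index tensors, layer i would simply pick omega = schemes[i % 8], so considering more scan paths adds no parameters and no extra memory.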
We propose one key properties: Spatial Con" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.775, + 0.116, + 0.785, + 0.126 + ], + "angle": 0, + "content": "7" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.296 + ], + "angle": 0, + "content": "tinuity. Regarding Spatial Continuity, current innovations of Mamba in images [67, 70, 123] often squeeze 2D patch tokens directly following the computer hierarchy, such as row-and-column-major order. However, this approach may not be optimal for incorporating the inductive bias with neighboring tokens, as illustrated in Figure 3. To address this, we introduce a novel scanning scheme designed to maintain spatial continuity during the scan process. Additionally, we consider space-filling, which entails that for a patch of size \\( N \\times N \\), the length of the 1D continuous scanning scheme should be \\( N^2 \\). This helps to efficiently incorporate tokens to maximize the potential of long sequence modeling within the Mamba block." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.298, + 0.785, + 0.388 + ], + "angle": 0, + "content": "Heterogeneous Layerwise Scan. To achieve the aforementioned property, we heuristically design eight possible space-filling continuous schemes\\(^1\\), denoted as \\(\\mathbf{S}_j\\) (where \\(j \\in [0,7]\\)), as illustrated in Figure 3. While there may be other conceivable schemes, for simplicity, we limit our usage to these eight. Consequently, the scheme for each layer can be represented as \\(\\varOmega_{i} = \\mathbf{S}_{\\{i\\% 8\\}}\\), where \\(\\%\\) denotes the modulo operator." + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.413, + 0.688, + 0.619 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.63, + 0.784, + 0.685 + ], + "angle": 0, + "content": "Figure 4: The Detail of our Zigzag Mamba block. The detail of Mamba Scan is shown in Figure 2. The condition can include a timestep and a text prompt. These are fed into an MLP, which separately modulates the Mamba scan for long sequence modeling and cross-attention for multi-modal reasoning." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.715, + 0.784, + 0.775 + ], + "angle": 0, + "content": "Deploying text-condition on Zigzag Mamba. While Mamba offers the advantage of efficient long sequence modeling, it does so at the expense of the attention mechanism. As a result, there has been limited exploration into incorporating text-conditioning in Mamba-based diffusion models. To address this" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.784, + 0.784, + 0.839 + ], + "angle": 0, + "content": "1 We also experimented with more complex continuous space-filling paths, such as the Hilbert space-filling curve [75]. However, empirical findings indicate that this approach may lead to deteriorated results. For further detailed comparisons, please refer to the Appendix." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.23, + 0.127 + ], + "angle": 0, + "content": "8" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.128 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.784, + 0.223 + ], + "angle": 0, + "content": "gap, we propose a straightforward cross-attention block with skip layers built upon the Mamba block, as illustrated in Figure 4. 
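As an illustration of such a conditioned block, the sketch below wraps a stand-in 1-D scan and a cross-attention layer with skip connections and AdaLN-style modulation driven by a pooled condition embedding; the module names, the GRU used as a stand-in for the Mamba scan, and the exact wiring are assumptions for exposition, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    """Illustrative block: modulated 1-D scan plus cross-attention, each with a skip.

    The GRU is only a placeholder for the (zigzag) Mamba scan; a real block
    would substitute the scan of Figure 2 and may be wired differently.
    """
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.mod = nn.Sequential(nn.SiLU(), nn.Linear(dim, 4 * dim))  # shift/scale/gates from the condition
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.scan = nn.GRU(dim, dim, batch_first=True)                # stand-in for the Mamba scan
        self.norm2 = nn.LayerNorm(dim)
        self.xattn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, z, cond, text):
        # z: (B, L, dim) image tokens; cond: (B, dim) pooled timestep/text embedding;
        # text: (B, T, dim) text token sequence for cross-attention.
        shift, scale, gate_scan, gate_attn = self.mod(cond).chunk(4, dim=-1)
        h = self.norm1(z) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        z = z + gate_scan.unsqueeze(1) * self.scan(h)[0]          # skip around the scan
        attn_out, _ = self.xattn(self.norm2(z), text, text)
        z = z + gate_attn.unsqueeze(1) * attn_out                 # skip around cross-attention
        return z

# Toy usage with hypothetical shapes.
blk = ConditionedBlock(dim=64)
z = torch.randn(2, 16, 64); cond = torch.randn(2, 64); text = torch.randn(2, 7, 64)
print(blk(z, cond, text).shape)  # torch.Size([2, 16, 64])
```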
This design not only enables long sequence modeling but also facilitates multi-token conditioning, such as text-conditioning. Furthermore, it has the potential to provide interpretability [16, 42, 94], as cross-attention has been utilized in diffusion models." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.224, + 0.785, + 0.345 + ], + "angle": 0, + "content": "Generalizing to 3D videos by factorizing spatial and temporal information. In previous sections, our focus has been on the spatial 2D Mamba, where we designed several spatially continuous, space-filling 2D scanning schemes. In this section, we aim to leverage this experience to aid in designing corresponding mechanisms for 3D video processing. We commence our design process by extrapolating from the conventional directional Mamba, as depicted in Figure 5. Given a video feature input \\(\\mathbf{z} \\in \\mathbb{R}^{B \\times T \\times C \\times W \\times H}\\), we propose three variants of the Video Mamba Block for facilitating 3D video generation." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.347, + 0.787, + 0.409 + ], + "angle": 0, + "content": "(a) Sweep-scan: In this approach, we directly flatten the 3D feature \\(\\mathbf{z}\\) without considering spatial or temporal continuity. It's worth noting that the flattening process follows the computer hierarchy order, meaning that no continuity is preserved in the flattened representation." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.41, + 0.787, + 0.485 + ], + "angle": 0, + "content": "(b) 3D Zigzag: Compared with the formulation of the 2D zigzag in previous subsections, we follow the similar design to generalize it to 3D Zigzag to keep the continuity in 2D and 3D simultaneously. Potentially, the scheme has much more complexity. We heuristically list 8 schemes as well. However, we empirically find that this scheme will lead to suboptimal optimization." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.487, + 0.787, + 0.593 + ], + "angle": 0, + "content": "(c) Factorized 3D Zigzag = 2D Zigzag + 1D Sweep: To address the suboptimal optimization issue, we propose to factorize the spatial and temporal correlations as separate Mamba blocks. The order of their application can be adjusted as desired, for example, \"sstt\" or \"ststst\", where \"s\" represents the spatial-zigzag Mamba and \"t\" represents the temporal-zigzag Mamba. For a 1D temporal sweep, we simply opt for forward and backward scanning, since there is only one dimension on the time axis." + }, + { + "type": "list", + "bbox": [ + 0.214, + 0.347, + 0.787, + 0.593 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.594, + 0.787, + 0.641 + ], + "angle": 0, + "content": "Computation Analysis. 
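Before the computation analysis, a schematic sketch of the factorized variant (c) above; `spatial_block` and `temporal_block` are placeholders for the 2D zigzag Mamba and the 1D forward/backward temporal sweep, and only the tensor bookkeeping of an "sstt"- or "ststst"-style ordering is shown:

```python
# Schematic factorized 3D scan: spatial ("s") blocks over the H*W tokens of each
# frame, temporal ("t") blocks over the T axis at each spatial location.
import torch

def factorized_3d(z, pattern, spatial_block, temporal_block):
    """z: (B, T, L, D) with L = H*W spatial tokens already flattened per frame."""
    B, T, L, D = z.shape
    for kind in pattern:
        if kind == "s":                                            # spatial 2D zigzag
            z = spatial_block(z.reshape(B * T, L, D)).reshape(B, T, L, D)
        elif kind == "t":                                          # temporal 1D sweep
            zt = z.permute(0, 2, 1, 3).reshape(B * L, T, D)
            zt = temporal_block(zt)
            z = zt.reshape(B, L, T, D).permute(0, 2, 1, 3)
    return z

# toy usage with identity blocks, just to exercise the reshapes
z = torch.randn(1, 4, 16, 8)
out = factorized_3d(z, "sstt", spatial_block=lambda x: x, temporal_block=lambda x: x)
assert out.shape == z.shape
```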
For a visual sequence \\(\\mathbf{T} \\in \\mathbb{R}^{1 \\times M \\times D}\\), the computation complexity of global self-attention and \\(k\\)-direction mamba and our zigzag mamba are as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.347, + 0.673, + 0.785, + 0.69 + ], + "angle": 0, + "content": "\\[\n\\zeta (\\text {s e l f - a t t e n t i o n}) = 4 \\mathrm {M D} ^ {2} + 2 \\mathrm {M} ^ {2} \\mathrm {D}, \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.347, + 0.694, + 0.785, + 0.711 + ], + "angle": 0, + "content": "\\[\n\\zeta (\\mathrm {k} - \\text {m a m b a}) = k \\times [ 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2} ], \\tag {5}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.347, + 0.715, + 0.785, + 0.732 + ], + "angle": 0, + "content": "\\[\n\\zeta (\\text {z i g z a g}) = 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2}, \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.75, + 0.792, + 0.841 + ], + "angle": 0, + "content": "where self-attention exhibits quadratic complexity with respect to sequence length M, while Mamba exhibits linear complexity (N is a fixed parameter, set to 16 by default). Here, \\( k \\) represents the number of scan directions in a single Mamba block. Therefore, \\( k \\)-mamba and zigzag share linear complexity with respect to self-attention. Moreover, our zigzag method can eliminate the \\( k \\) series, further reducing the overall complexity." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.776, + 0.117, + 0.786, + 0.127 + ], + "angle": 0, + "content": "9" + }, + { + "type": "image", + "bbox": [ + 0.296, + 0.147, + 0.7, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.313, + 0.788, + 0.41 + ], + "angle": 0, + "content": "Figure 5: The 3D Video Scan. (a) We illustrate the bidirectional Mamba with the sweep scan, where the spatial and temporal information is treated as a set of tokens with a computer-hierarchy order. (b) For the 3D zigzag-scan, we aim to maximize the potential of Mamba by employing a spatial continuous scan scheme and adopting the optimal zigzag scan solution, as depicted in Figure 3. (c) We further separate the reasoning between spatial and temporal information, resulting in a factorized combination of 2D spatial scan \\((\\varOmega)\\) plus a 1D temporal scan \\((\\varOmega^{\\prime})\\) scheme." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.44, + 0.785, + 0.485 + ], + "angle": 0, + "content": "Upon completing the design of the Zigzag Mamba network for improved visual inductive-bias integration, we proceed to combine it with a new diffusion framework, as illustrated below." + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.509, + 0.627, + 0.525 + ], + "angle": 0, + "content": "3.3 Diffusion Framework: Stochastic Interpolant" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.534, + 0.785, + 0.58 + ], + "angle": 0, + "content": "Sampling based on vector \\(\\mathbf{v}\\) and score \\(\\mathbf{s}\\). 
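A small numeric check of Eqs. (4)-(6); the values of M and D below are arbitrary and only illustrate the quadratic-versus-linear growth in the sequence length M (state dimension N = 16 as in the paper, and k = 2 as an example for the k-direction variant):

```python
# FLOP estimates from Eqs. (4)-(6); example numbers only.
def flops_self_attention(M, D):
    return 4 * M * D**2 + 2 * M**2 * D                            # Eq. (4)

def flops_k_mamba(M, D, N=16, k=2):
    return k * (3 * M * (2 * D) * N + M * (2 * D) * N**2)         # Eq. (5)

def flops_zigzag(M, D, N=16):
    return 3 * M * (2 * D) * N + M * (2 * D) * N**2               # Eq. (6)

for M in (1024, 4096):       # 4,096 tokens corresponds to the FacesHQ-1024 setting
    D = 768                  # illustrative channel width
    print(M, flops_self_attention(M, D), flops_k_mamba(M, D), flops_zigzag(M, D))
```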
Following [3, 96], the time-dependent probability distribution \\(p_t(\\mathbf{x})\\) of \\(\\mathbf{x}_t\\) also coincides with the distribution of the reverse-time SDE [6]:" + }, + { + "type": "equation", + "bbox": [ + 0.334, + 0.59, + 0.787, + 0.62 + ], + "angle": 0, + "content": "\\[\nd \\mathbf {X} _ {t} = \\mathbf {v} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\frac {1}{2} w _ {t} \\mathbf {s} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\sqrt {w _ {t}} d \\bar {\\mathbf {W}} _ {t}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.631, + 0.788, + 0.678 + ], + "angle": 0, + "content": "where \\(\\bar{\\mathbf{W}}_t\\) is a reverse-time Wiener process, \\(w_{t} > 0\\) is an arbitrary time-dependent diffusion coefficient, \\(\\mathbf{s}(\\mathbf{x},t) = \\nabla \\log p_t(\\mathbf{x})\\) is the score, and \\(\\mathbf{v}(\\mathbf{x},t)\\) is given by the conditional expectation" + }, + { + "type": "equation", + "bbox": [ + 0.35, + 0.688, + 0.786, + 0.724 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathbf {v} (\\mathbf {x}, t) = \\mathbb {E} [ \\dot {\\mathbf {x}} _ {t} | \\mathbf {x} _ {t} = \\mathbf {x} ] \\\\ = \\dot {\\alpha} _ {t} \\mathbb {E} \\left[ \\mathbf {x} _ {*} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right] + \\dot {\\sigma} _ {t} \\mathbb {E} \\left[ \\boldsymbol {\\varepsilon} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\tag {8}\n\\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.735, + 0.785, + 0.764 + ], + "angle": 0, + "content": "where \\(\\alpha_{t}\\) is a decreasing function of \\(t\\), and \\(\\sigma_{t}\\) is an increasing function of \\(t\\). Here, \\(\\dot{\\alpha}_{t}\\) and \\(\\dot{\\sigma}_{t}\\) denote the time derivatives of \\(\\alpha_{t}\\) and \\(\\sigma_{t}\\), respectively." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.766, + 0.788, + 0.842 + ], + "angle": 0, + "content": "As long as we can estimate the velocity \\(\\mathbf{v}(\\mathbf{x},t)\\) and/or score \\(\\mathbf{s}(\\mathbf{x},t)\\) fields, we can utilize them for the sampling process either by the probability flow ODE [91] or the reverse-time SDE (7). Solving the reverse SDE (7) backwards in time from \\(\\mathbf{X}_T = \\varepsilon \\sim \\mathcal{N}(0,\\mathbf{I})\\) enables generating samples from the approximated data distribution \\(p_0(\\mathbf{x})\\sim p(\\mathbf{x})\\). During sampling, we can perform direct sampling" + }, + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "10" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.191 + ], + "angle": 0, + "content": "from either the ODE or the SDE to balance between sampling speed and fidelity. If we choose to conduct ODE sampling, we can achieve this simply by setting the noise term \\(\\mathbf{s}\\) to zero." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.192, + 0.788, + 0.224 + ], + "angle": 0, + "content": "It is shown in [3] that only one of the two quantities \\(\\mathbf{s}_{\\theta}(\\mathbf{x},t)\\) and \\(\\mathbf{v}_{\\theta}(\\mathbf{x},t)\\) needs to be estimated in practice.
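A minimal Euler sketch of the sampling loop just described, using the probability-flow ODE (the SDE of Eq. (7) would additionally apply the (1/2) w_t s drift plus Gaussian noise at each step). The conventions assumed here follow the linear path of Eq. (13) below, so t = 1 is pure noise and t = 0 is data; `v_theta` is a placeholder for the trained ZigMa velocity network:

```python
# Probability-flow ODE sampling with a learned velocity field (sketch).
import torch

@torch.no_grad()
def sample_ode(v_theta, shape, steps=50, device="cpu"):
    x = torch.randn(shape, device=device)          # X_T = eps ~ N(0, I) at t = 1
    dt = 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        t_vec = torch.full((shape[0],), t, device=device)
        x = x - dt * v_theta(x, t_vec)             # one Euler step of dX_t = v dt, backwards in t
    return x                                       # approximate sample from p_0

# toy velocity field standing in for the trained model
samples = sample_ode(lambda x, t: -x, shape=(2, 4, 8, 8))
```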
This follows directly from the constraint" + }, + { + "type": "equation", + "bbox": [ + 0.37, + 0.235, + 0.786, + 0.271 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\mathbf {x} = \\mathbb {E} \\left[ \\mathbf {x} _ {t} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\tag {9} \\\\ = \\alpha_ {t} \\mathbb {E} [ \\mathbf {x} _ {*} | \\mathbf {x} _ {t} = \\mathbf {x} ] + \\sigma_ {t} \\mathbb {E} [ \\varepsilon | \\mathbf {x} _ {t} = \\mathbf {x} ], \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.282, + 0.786, + 0.311 + ], + "angle": 0, + "content": "which can be used to re-express the score \\(\\mathbf{s}(\\mathbf{x},t)\\) in terms of the velocity \\(\\mathbf{v}(\\mathbf{x},t)\\) as" + }, + { + "type": "equation", + "bbox": [ + 0.393, + 0.322, + 0.786, + 0.354 + ], + "angle": 0, + "content": "\\[\n\\mathbf {s} (\\mathbf {x}, t) = \\sigma_ {t} ^ {- 1} \\frac {\\alpha_ {t} \\mathbf {v} (\\mathbf {x} , t) - \\dot {\\alpha} _ {t} \\mathbf {x}}{\\dot {\\alpha} _ {t} \\sigma_ {t} - \\alpha_ {t} \\dot {\\sigma} _ {t}}. \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.364, + 0.786, + 0.395 + ], + "angle": 0, + "content": "Thus, \\(\\mathbf{v}(\\mathbf{x},t)\\) and \\(\\mathbf{s}(\\mathbf{x},t)\\) can be mutually conversed. We illustrate how to compute them in the following." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.395, + 0.787, + 0.439 + ], + "angle": 0, + "content": "Estimating the score \\( \\mathbf{s} \\) and the velocity \\( \\mathbf{v} \\). It has been shown in score-based diffusion models [91] that the score can be estimated parametrically as \\( \\mathbf{s}_{\\theta}(\\mathbf{x},t) \\) using the loss" + }, + { + "type": "equation", + "bbox": [ + 0.372, + 0.44, + 0.786, + 0.475 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {s}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\sigma_ {t} \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) + \\varepsilon \\| ^ {2} ] \\mathrm {d} t. \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.481, + 0.787, + 0.51 + ], + "angle": 0, + "content": "Similarly, the velocity \\(\\mathbf{v}(\\mathbf{x},t)\\) can be estimated parametrically as \\(\\mathbf{v}_{\\theta}(\\mathbf{x},t)\\) via the loss" + }, + { + "type": "equation", + "bbox": [ + 0.343, + 0.519, + 0.786, + 0.555 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} _ {\\mathrm {v}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\mathbf {v} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\dot {\\alpha} _ {t} \\mathbf {x} _ {*} - \\dot {\\sigma} _ {t} \\boldsymbol {\\varepsilon} \\| ^ {2} ] \\mathrm {d} t, \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.565, + 0.787, + 0.61 + ], + "angle": 0, + "content": "where \\(\\theta\\) represents the Zigzag Mamba network that we described in the previous section, we adopt the linear path for training, due to its simplicity and relatively straight trajectory:" + }, + { + "type": "equation", + "bbox": [ + 0.429, + 0.611, + 0.786, + 0.627 + ], + "angle": 0, + "content": "\\[\n\\alpha_ {t} = 1 - t, \\quad \\sigma_ {t} = t. \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.635, + 0.787, + 0.697 + ], + "angle": 0, + "content": "We note that any time-dependent weight can be included under the integrals in both (11) and (12). These weight factors play a crucial role in score-based models when \\( T \\) becomes large [54, 55]. Thus, they provide a general form that considers both the time-dependent weight and the stochasticity." 
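The velocity objective of Eq. (12) under the linear path of Eq. (13), together with the score-from-velocity conversion of Eq. (10), can be sketched as follows; `v_theta` is again a placeholder, and conditioning, time-dependent weighting, and the actual ZigMa backbone are omitted:

```python
# Monte-Carlo estimate of Eq. (12) with alpha_t = 1 - t, sigma_t = t (Eq. (13)),
# plus the score recovered from the velocity via Eq. (10).
import torch

def velocity_loss(v_theta, x_star):
    """x_star: batch of clean latents, shape (B, ...)."""
    B = x_star.shape[0]
    t = torch.rand(B, *([1] * (x_star.dim() - 1)))       # t ~ U[0, 1], broadcastable
    eps = torch.randn_like(x_star)
    x_t = (1 - t) * x_star + t * eps                     # x_t = alpha_t x_* + sigma_t eps
    target = -x_star + eps                               # alpha_dot x_* + sigma_dot eps
    return ((v_theta(x_t, t) - target) ** 2).mean()

def score_from_velocity(v, x, t):
    """Eq. (10) specialised to the linear path: alpha_dot = -1, sigma_dot = 1."""
    alpha, sigma, alpha_dot, sigma_dot = 1 - t, t, -1.0, 1.0
    return (alpha * v - alpha_dot * x) / (sigma * (alpha_dot * sigma - alpha * sigma_dot))

# toy usage with a zero velocity field standing in for the network
loss = velocity_loss(lambda x, t: torch.zeros_like(x), torch.randn(4, 3, 8, 8))
```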
+ }, + { + "type": "title", + "bbox": [ + 0.215, + 0.72, + 0.368, + 0.738 + ], + "angle": 0, + "content": "4 Experiment" + }, + { + "type": "title", + "bbox": [ + 0.215, + 0.753, + 0.496, + 0.77 + ], + "angle": 0, + "content": "4.1 Dataset and Training Detail" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.787, + 0.843 + ], + "angle": 0, + "content": "Image Dataset. To explore the scalability in high resolution, we conduct experiments on the FacesHQ \\(1024 \\times 1024\\). The general dataset that we use for training and ablations is FacesHQ, a compilation of CelebA-HQ [110] and FFHQ [53], as employed in previous work such as [26, 28]." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "11" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.145, + 0.788, + 0.187 + ], + "angle": 0, + "content": "Table 1: Ablation of Scanning Scheme Number. We evaluate various zigzag scanning schemes. Starting from a simple \"Sweep\" baseline, we consistently observe improvements as more schemes are implemented." + }, + { + "type": "table", + "bbox": [ + 0.232, + 0.199, + 0.769, + 0.303 + ], + "angle": 0, + "content": "
<table><thead><tr><th></th><th colspan='3'>MultiModal-CelebA-256</th><th colspan='3'>MultiModal-CelebA-512</th></tr>
<tr><th></th><th>FID5k ↓</th><th>FDD5k ↓</th><th>KID5k ↓</th><th>FID5k ↓</th><th>FDD5k ↓</th><th>KID5k ↓</th></tr></thead>
<tbody><tr><td>Sweep</td><td>158.1</td><td>75.9</td><td>0.169</td><td>162.3</td><td>103.2</td><td>0.203</td></tr>
<tr><td>Zigzag-1</td><td>65.7</td><td>47.8</td><td>0.051</td><td>121.0</td><td>78.0</td><td>0.113</td></tr>
<tr><td>Zigzag-2</td><td>54.7</td><td>45.5</td><td>0.041</td><td>96.0</td><td>59.5</td><td>0.079</td></tr>
<tr><td>Zigzag-8</td><td>45.5</td><td>26.4</td><td>0.011</td><td>34.9</td><td>29.5</td><td>0.023</td></tr></tbody></table>
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.329, + 0.784, + 0.404 + ], + "angle": 0, + "content": "Video Dataset. UCF101 dataset consists of 13,320 video clips, which are classified into 101 categories. The total length of these video clips is over 27 hours. All these videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of \\(320 \\times 240\\). We randomly sample continuous 16 frames and resize the frames to \\(256 \\times 256\\)." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.405, + 0.785, + 0.526 + ], + "angle": 0, + "content": "Training Details. We uniformly use AdamW [72] optimizer with \\(1e - 4\\) learning rate. For extracting latent features, we employ off-the-shelf VAE encoders. To mitigate computational costs, we adopted a mixed-precision training approach. Additionally, we applied gradient clipping with a threshold of 2.0 and a weight decay of 0.01 to prevent NaN occurrences during Mamba training. Most of our experiments were conducted on 4 A100 GPUs, with scalability exploration extended to 16 and 32 A100 GPUs. For sampling, we adopt the ODE sampling for speed consideration. For further details, please refer to the Appendix 8.8." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.548, + 0.388, + 0.563 + ], + "angle": 0, + "content": "4.2 Ablation Study" + }, + { + "type": "table_caption", + "bbox": [ + 0.214, + 0.599, + 0.785, + 0.64 + ], + "angle": 0, + "content": "Table 2: Ablation about Position Embedding (PE) on unconditional CelebA dataset \\((256^{2})\\). To better abate PE and eliminate the conditional signal's influence, we use an unconditional dataset." + }, + { + "type": "table", + "bbox": [ + 0.295, + 0.654, + 0.704, + 0.707 + ], + "angle": 0, + "content": "
<table><thead><tr><th>FID/FDD ↓</th><th>No PE</th><th>Cosine PE</th><th>Learnable PE</th></tr></thead>
<tbody><tr><td>VisionMamba [123]</td><td>21.33/21.00</td><td>18.47/19.90</td><td>16.38/18.20</td></tr>
<tr><td>ZigMa</td><td>14.27/18.00</td><td>14.04/17.91</td><td>13.32/17.40</td></tr></tbody></table>
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.734, + 0.788, + 0.841 + ], + "angle": 0, + "content": "Scan Scheme Ablation. We provide several important findings based on our ablation studies on MultiModal-CelebA dataset in various resolutions in Table 1. Firstly, switching the scanning scheme from sweep to zigzag led to some gains. Secondly, as we increased the zigzag scheme from 1 to 8, we saw consistent gains. This indicates that alternating the scanning scheme in various blocks can be beneficial. Finally, the relative gain between Zigzag-1 and Zigzag-8 is more prominent at higher resolutions (\\(512 \\times 512\\), or longer sequence token number)" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "12" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.148, + 0.462, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.272, + 0.233, + 0.438, + 0.245 + ], + "angle": 0, + "content": "(a) FPS v.s. Patch Number." + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.147, + 0.756, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.538, + 0.232, + 0.76, + 0.246 + ], + "angle": 0, + "content": "(b) GPU Memory v.s. Patch Number." + }, + { + "type": "image", + "bbox": [ + 0.229, + 0.268, + 0.468, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.225, + 0.355, + 0.486, + 0.367 + ], + "angle": 0, + "content": "(c) Order Receptive Field v.s. GPU Memory." + }, + { + "type": "image", + "bbox": [ + 0.525, + 0.268, + 0.761, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.544, + 0.355, + 0.753, + 0.367 + ], + "angle": 0, + "content": "(d) Order Receptive Field v.s. FPS." + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.378, + 0.788, + 0.435 + ], + "angle": 0, + "content": "Figure 6: (a, b).GPU Memory usage and FPS between our method and transformer-based methods(U-VIT [9] and DiT [80]). (c). Order Receptive Field and GPU memory (d). Order Receptive Field and FPS. Order Receptive Field denotes how many scan paths we consider in our network design." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.465, + 0.785, + 0.51 + ], + "angle": 0, + "content": "compared to lower resolutions (\\(256 \\times 256\\), or shorter sequence token number), this shows the great potential and more efficient inductive-bias incorporation in longer sequence number." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.512, + 0.789, + 0.661 + ], + "angle": 0, + "content": "Ablation about Position Embedding. As shown in Table 2, the learnable embedding performs better than the Sinusoidal embedding, which in turn performs better than no position embedding. In various cases, our zigzag method surpasses the baselines. Notably, our performance remains almost unchanged whether we use the Sinusoidal position embedding or no position embedding. This suggests that our method can better incorporate spatial inductive-bias compared to our baseline. Finally, using the learnable position embedding provides further, albeit marginal, gains suggesting that better position embedding exists even within our zigzag scan scheme. We find that [79] shares the same conclusion as us in video-related tasks." 
+ }, + { + "type": "text", + "bbox": [ + 0.214, + 0.663, + 0.789, + 0.815 + ], + "angle": 0, + "content": "Ablation study about the Network and FPS/GPU-Memory. In Figure 6 (a,b), we analyze the forward speed and GPU memory usage while varying the global patch dimensions from \\(32 \\times 32\\) to \\(196 \\times 196\\). For the speed analysis, we report Frame Per Second (FPS) instead of FLOPS, as FPS provides a more explicit and appropriate evaluation of speed2. For simplicity, we uniformly apply the zigzag-1 Mamba scan scheme and use batch size=1 and patch size=1 on an A100 GPU with 80GB memory. It's worth noting that all methods share nearly identical parameter numbers for fair comparison. We primarily compare our method with two popular transformer-based Diffusion backbones, U-ViT [9] and DiT [80]. It is evident that our method achieves the best FPS and GPU" + }, + { + "type": "page_footnote", + "bbox": [ + 0.218, + 0.825, + 0.765, + 0.84 + ], + "angle": 0, + "content": "2 https://github.com/state-spaces/mamba/issues/110#issuecomment-1916464012" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.785, + 0.127 + ], + "angle": 0, + "content": "13" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.147, + 0.788, + 0.208 + ], + "angle": 0, + "content": "utilization when gradually increasing the patching number. U-ViT demonstrates the worst performance, even exceeds the memory bounds when the patch number is 196. Surprisingly, DiT's GPU utilization is close to our method, which supports our backbone choice of DiT from a practical perspective." + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.257, + 0.493, + 0.34 + ], + "angle": 0, + "content": "Table 3: Main result on FacesHQ-1024 dataset with 4,094 tokens in latent space and \\( \\mathbf{bs} = \\mathbf{512} \\). Our method can outperform the baseline and can achieve even better results when the training scale is increased." + }, + { + "type": "table", + "bbox": [ + 0.223, + 0.354, + 0.487, + 0.422 + ], + "angle": 0, + "content": "
<table><thead><tr><th>Method</th><th>FID5k↓</th><th>FDD5k↓</th></tr></thead>
<tbody><tr><td>VisionMamba [123]</td><td>51.1</td><td>66.3</td></tr>
<tr><td>ZigMa</td><td>37.8</td><td>50.5</td></tr>
<tr><td>ZigMa bs × 2</td><td>26.6</td><td>31.2</td></tr></tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.216, + 0.428, + 0.493, + 0.469 + ], + "angle": 0, + "content": "Table 5: Transformer-based methods comparison on unconditional CelebA256." + }, + { + "type": "table", + "bbox": [ + 0.227, + 0.482, + 0.482, + 0.537 + ], + "angle": 0, + "content": "
<table><thead><tr><th>Method</th><th>FID↓</th><th>Memory(G) ↓</th><th>FLOPS(G) ↓</th></tr></thead>
<tbody><tr><td>U-ViT</td><td>14.50</td><td>35.10</td><td>12.5</td></tr>
<tr><td>DiT</td><td>14.64</td><td>29.20</td><td>5.5</td></tr>
<tr><td>ZigMa</td><td>14.27</td><td>17.80</td><td>5.2</td></tr></tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.526, + 0.251, + 0.788, + 0.333 + ], + "angle": 0, + "content": "Table 4: Main Results on MS-COCO dataset with \\( \\mathrm{bs} = {256} \\) . Our method consistently outperforms the baseline. ZigMa with 8 scans performs much better compared with the baseline." + }, + { + "type": "table", + "bbox": [ + 0.558, + 0.347, + 0.756, + 0.428 + ], + "angle": 0, + "content": "
<table><thead><tr><th>Method</th><th>FID5k↓</th></tr></thead>
<tbody><tr><td>Sweep</td><td>195.1</td></tr>
<tr><td>Zigzag-1</td><td>73.1</td></tr>
<tr><td>VisionMamba [123]</td><td>60.2</td></tr>
<tr><td>Zigzag-8</td><td>41.8</td></tr></tbody></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.539, + 0.429, + 0.786, + 0.455 + ], + "angle": 0, + "content": "Table 6: Video Scan Scheme on UCF101 dataset with \\( \\mathrm{bs} = {32} \\) ." + }, + { + "type": "table", + "bbox": [ + 0.547, + 0.469, + 0.779, + 0.537 + ], + "angle": 0, + "content": "
<table><thead><tr><th>Method</th><th>Frame-FID5k↓</th><th>FVD5k↓</th></tr></thead>
<tbody><tr><td>Bidirection [123]</td><td>256.1</td><td>320.2</td></tr>
<tr><td>3D Zigzag</td><td>238.1</td><td>282.3</td></tr>
<tr><td>Our</td><td>216.1</td><td>210.2</td></tr>
<tr><td>Bidirection [123] bs×4</td><td>146.2</td><td>201.1</td></tr>
<tr><td>ZigMa bs×4</td><td>121.2</td><td>140.1</td></tr></tbody></table>
" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.582, + 0.788, + 0.658 + ], + "angle": 0, + "content": "Order Receptive Field. We propose a new concept in Mamba-based structure for multidimensional data. Given that various spatially-continuous zigzag paths may exist in multidimensional data, we introduce the term Order Receptive Field which denotes the number of zigzag paths explicitly employed in the network design." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.666, + 0.807, + 0.773 + ], + "angle": 0, + "content": "Ablation study about the Order Receptive Field and FPS/GPU-Memory. As depicted in Fig. 6 (c,d), Zigzag Mamba consistently maintains its GPU memory consumption and FPS rate, even with a gradually increasing Order Receptive Field. In contrast, our primary baseline, Parallel Mamba, along with variants like Bidirectional Mamba and Vision Mamba [70, 123], experience a consistent decrease in FPS due to increased parameters. Notably, Zigzag Mamba, with an Order Receptive Field of 8, can perform faster without altering parameters." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.78, + 0.788, + 0.84 + ], + "angle": 0, + "content": "Comparison with transformer-based methods. We show the result in Table 5 on unconditional generation task. Our method achieves performance comparable to Transformer-based methods, with significantly less memory consumption and fewer FLOPS." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "14" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.147, + 0.365, + 0.16 + ], + "angle": 0, + "content": "4.3 Main Result" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.169, + 0.788, + 0.365 + ], + "angle": 0, + "content": "Main Result on \\(1024 \\times 1024\\) FacesHQ. To elaborate on the scalability of our method within the Mamba and Stochastic Interpolant framework, we provide comparisons on a high-resolution dataset (\\(1024 \\times 1024\\) FacesHQ) in Table 3. Our primary comparison is against Bidirectional Mamba, a commonly used solution for applying Mamba to 2D image data [70, 123]. With the aim of investigating Mamba's scalability in large resolutions up to 1,024, we employ the diffusion model on the latent space of \\(128 \\times 128\\) with a patch size of 2, resulting in 4,096 tokens. The network is trained on 16 A100 GPUs. Notably, our method demonstrates superior results compared to Bidirectional Mamba. Details regarding loss, FID curves, and visualization can be found in the Appendix. While constrained by GPU resource limitations, preventing longer training duration, we anticipate consistent outperformance of Bidirectional Mamba with extended training duration." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.366, + 0.788, + 0.502 + ], + "angle": 0, + "content": "COCO dataset. To further compare the performance of our method, we also evaluate it on the more complex and common dataset MS COCO. We compare with the Bidirection Mamba as the baseline in Table 4. It should be noted that all methods share nearly identical parameter numbers for fair comparison. We trained all methods using 16 A100 GPUs. please check Appendix 8.8 for details. As depicted in Table 4, our Zigzag-8 method outperforms Bidirectional Mamba as well as Zigzag-1. 
This suggests that amortizing various scanning schemes can yield significant improvements, attributed to better incorporation of the inductive bias for 2D images in Mamba." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.502, + 0.788, + 0.655 + ], + "angle": 0, + "content": "UCF101 dataset. In Table 6, we present our results on the UCF101 dataset, training all methods using 4 A100 GPUs, with further scalability exploration conducted using 16 A100 GPUs. We mainly compare our method consistently with Vision Mamba [123]. For the choice of the 3D Zigzag Mamba, please refer to Appendix 8.8. For Factorized 3D Zigzag Mamba in video processing, we deploy the sst scheme for factorizing spatial and temporal modeling. This scheme prioritizes spatial information complexity over temporal information, hypothesizing that redundancy exists in the temporal domain. Our results consistently demonstrate the superior performance of our method across various scenarios, underscoring the intricacy and effectiveness of our approach." + }, + { + "type": "title", + "bbox": [ + 0.216, + 0.675, + 0.36, + 0.691 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.704, + 0.788, + 0.841 + ], + "angle": 0, + "content": "In this paper, we present the Zigzag Mamba Diffusion Model, developed within the Stochastic Interpolant framework. Our initial focus is on addressing the critical issue of spatial continuity. We then devise a Zigzag Mamba block with heterogeneous layerwise scan to better utilize the inductive bias in 2D images. Further, we factorize the 3D Mamba into 2D and 1D Zigzag Mamba to facilitate optimization. We empirically design various ablation studies to examine different factors. This approach allows for a more in-depth exploration of the Stochastic Interpolant theory. We hope our endeavor can inspire further exploration in the Mamba network design." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.117, + 0.785, + 0.127 + ], + "angle": 0, + "content": "15" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.145, + 0.403, + 0.163 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.178, + 0.788, + 0.299 + ], + "angle": 0, + "content": "This project has been supported by the German Federal Ministry for Economic Affairs and Climate Action within the project \"NXT GEN AI METHODS - Generative Methoden für Perzeption, Prädiktion und Planung\", the bidt project KLIMA-MEMES, Bayer AG, and the German Research Foundation (DFG) project 421703927. The authors gratefully acknowledge the Gauss Center for Supercomputing for providing compute through the NIC on JUWELS at JSC and the HPC resources supplied by the Erlangen National High Performance Computing Center (NHR@FAU funded by DFG)." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.323, + 0.323, + 0.338 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.354, + 0.787, + 0.381 + ], + "angle": 0, + "content": "1. Agarwal, N., Suo, D., Chen, X., Hazan, E.: Spectral state space models. arXiv (2023) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.383, + 0.787, + 0.41 + ], + "angle": 0, + "content": "2. Ahamed, M.A., Cheng, Q.: Mambatab: A simple yet effective approach for handling tabular data. 
arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.41, + 0.787, + 0.437 + ], + "angle": 0, + "content": "3. Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E.: Stochastic interpolants: A unifying framework for flows and diffusions. arXiv (2023) 2, 4, 9, 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.438, + 0.787, + 0.465 + ], + "angle": 0, + "content": "4. Albergo, M.S., Vanden-Eijnden, E.: Building normalizing flows with stochastic interpolants. arXiv (2022) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.466, + 0.787, + 0.493 + ], + "angle": 0, + "content": "5. Ali, A., Zimerman, I., Wolf, L.: The hidden attention of mamba models. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.494, + 0.787, + 0.521 + ], + "angle": 0, + "content": "6. Anderson, B.D.: Reverse-time diffusion equation models. Stochastic Processes and their Applications (1982) 9" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.521, + 0.787, + 0.549 + ], + "angle": 0, + "content": "7. Anthony, Q., Tokpanov, Y., Glorioso, P., Millidge, B.: Blackmamba: Mixture of experts for state-space models. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.549, + 0.787, + 0.591 + ], + "angle": 0, + "content": "8. Ao, S., Zhao, W., Han, X., Yang, C., Liu, Z., Shi, C., Sun, M., Wang, S., Su, T.: Burstattention: An efficient distributed attention framework for extremely long sequences. arXiv (2024) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.231, + 0.591, + 0.787, + 0.618 + ], + "angle": 0, + "content": "9. Bao, F., Li, C., Cao, Y., Zhu, J.: All are worth words: a vit backbone for score-based diffusion models. CVPR (2023) 1, 3, 5, 12, 23" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.619, + 0.787, + 0.659 + ], + "angle": 0, + "content": "10. Bao, F., Nie, S., Xue, K., Li, C., Pu, S., Wang, Y., Yue, G., Cao, Y., Su, H., Zhu, J.: One transformer fits all distributions in multi-modal diffusion at scale. arXiv (2023) 1, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.66, + 0.787, + 0.702 + ], + "angle": 0, + "content": "11. Beck, M., Poppel, K., Spanring, M., Auer, A., Prudnikova, O., Kopp, M., Klambauer, G., Brandstetter, J., Hochreiter, S.: xlstm: Extended long short-term memory (2024) 22" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.702, + 0.787, + 0.729 + ], + "angle": 0, + "content": "12. Behrouz, A., Hashemi, F.: Graph mamba: Towards learning on graphs with state space models. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.73, + 0.787, + 0.756 + ], + "angle": 0, + "content": "13. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer. arXiv (2020) 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.758, + 0.787, + 0.799 + ], + "angle": 0, + "content": "14. Ben-Hamu, H., Cohen, S., Bose, J., Amos, B., Grover, A., Nickel, M., Chen, R.T., Lipman, Y.: Matching normalizing flows and probability paths on manifolds. In: ICML (2022) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.799, + 0.787, + 0.84 + ], + "angle": 0, + "content": "15. Brandon, W., Nrusimha, A., Qian, K., Ankner, Z., Jin, T., Song, Z., Ragan-Kelley, J.: Striped attention: Faster ring attention for causal transformers. 
arXiv preprint arXiv:2311.09431 (2023) 2" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.354, + 0.787, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "16" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.147, + 0.784, + 0.175 + ], + "angle": 0, + "content": "16. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: CVPR (2021) 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.177, + 0.785, + 0.204 + ], + "angle": 0, + "content": "17. Chen, R.T., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. NeurIPS (2018) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.205, + 0.785, + 0.245 + ], + "angle": 0, + "content": "18. Chen, S., Xu, M., Ren, J., Cong, Y., He, S., Xie, Y., Sinha, A., Luo, P., Xiang, T., Perez-Rua, J.M.: Gentron: Delving deep into diffusion transformers for image and video generation. arXiv (2023) 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.247, + 0.785, + 0.274 + ], + "angle": 0, + "content": "19. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv (2019) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.276, + 0.785, + 0.316 + ], + "angle": 0, + "content": "20. Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al.: Rethinking attention with performers. arXiv (2020) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.318, + 0.785, + 0.358 + ], + "angle": 0, + "content": "21. Crowson, K., Baumann, S.A., Birch, A., Abraham, T.M., Kaplan, D.Z., Shippole, E.: Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers. arXiv (2024) 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.36, + 0.785, + 0.387 + ], + "angle": 0, + "content": "22. Dao, Q., Phung, H., Nguyen, B., Tran, A.: Flow matching in latent space. arXiv (2023) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.389, + 0.785, + 0.415 + ], + "angle": 0, + "content": "23. Dao, T., Fu, D., Ermon, S., Rudra, A., Ré, C.: Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS (2022) 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.417, + 0.785, + 0.457 + ], + "angle": 0, + "content": "24. Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A.P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al.: Scaling vision transformers to 22 billion parameters. In: ICML (2023) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.459, + 0.785, + 0.5 + ], + "angle": 0, + "content": "25. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) 23, 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.502, + 0.785, + 0.528 + ], + "angle": 0, + "content": "26. Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021) 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.53, + 0.785, + 0.556 + ], + "angle": 0, + "content": "27. 
Fei, Z., Fan, M., Yu, C., Huang, J.: Scalable diffusion models with state space backbone. arXiv (2024) 3, 4, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.558, + 0.785, + 0.585 + ], + "angle": 0, + "content": "28. Fischer, J.S., Gui, M., Ma, P., Stracke, N., Baumann, S.A., Ommer, B.: Boosting latent diffusion with flow matching. ECCV (2024) 4, 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.587, + 0.785, + 0.613 + ], + "angle": 0, + "content": "29. Fu, D.Y., Dao, T., Saab, K.K., Thomas, A.W., Rudra, A., Ré, C.: Hungry hungry hippos: Towards language modeling with state space models. arXiv (2022) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.615, + 0.785, + 0.642 + ], + "angle": 0, + "content": "30. Fuest, M., Ma, P., Gui, M., Fischer, J.S., Hu, V.T., Ommer, B.: Diffusion models and representation learning: A survey. arXiv preprint arXiv:2407.00783 (2024) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.643, + 0.785, + 0.683 + ], + "angle": 0, + "content": "31. Gong, H., Kang, L., Wang, Y., Wan, X., Li, H.: nnmamba: 3d biomedical image segmentation, classification and landmark detection with state space model. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.686, + 0.785, + 0.712 + ], + "angle": 0, + "content": "32. Gong, J., Foo, L.G., Fan, Z., Ke, Q., Rahmani, H., Liu, J.: Diffpose: Toward more reliable 3d pose estimation. In: CVPR (2023) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.714, + 0.785, + 0.741 + ], + "angle": 0, + "content": "33. Gu, A., Dao, T.: Mamba: Linear-time sequence modeling with selective state spaces. CoLM (2024) 2, 3, 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.743, + 0.785, + 0.769 + ], + "angle": 0, + "content": "34. Gu, A., Goel, K., Gupta, A., Ré, C.: On the parameterization and initialization of diagonal state space models. NeurIPS (2022) 2, 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.771, + 0.785, + 0.797 + ], + "angle": 0, + "content": "35. Gu, A., Goel, K., Ré, C.: Efficiently modeling long sequences with structured state spaces (2021) 2, 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.799, + 0.785, + 0.84 + ], + "angle": 0, + "content": "36. Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., Ré, C.: Combining recurrent, convolutional, and continuous-time models with linear state space layers. NeurIPS (2021) 2, 5" + }, + { + "type": "list", + "bbox": [ + 0.225, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "17" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.147, + 0.785, + 0.189 + ], + "angle": 0, + "content": "37. Gui, M., Fischer, J.S., Prestel, U., Ma, P., Kotovenko, D., Grebenkova, O., Baumann, S.A., Hu, V.T., Ommer, B.: Depthfm: Fast monocular depth estimation with flow matching. arXiv preprint arXiv:2403.13788 (2024) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.19, + 0.785, + 0.217 + ], + "angle": 0, + "content": "38. Guo, H., Li, J., Dai, T., Ouyang, Z., Ren, X., Xia, S.T.: Mambair: A simple baseline for image restoration with state-space model. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.218, + 0.785, + 0.244 + ], + "angle": 0, + "content": "39. 
Gupta, A., Gu, A., Berant, J.: Diagonal state spaces are as effective as structured state spaces. NeurIPS (2022) 2, 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.245, + 0.785, + 0.285 + ], + "angle": 0, + "content": "40. He, W., Han, K., Tang, Y., Wang, C., Yang, Y., Guo, T., Wang, Y.: Densemamba: State space models with dense hidden connection for efficient large language models. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.287, + 0.785, + 0.314 + ], + "angle": 0, + "content": "41. He, X., Cao, K., Yan, K., Li, R., Xie, C., Zhang, J., Zhou, M.: Pan-mamba: Effective pan-sharpening with state space model. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.315, + 0.785, + 0.342 + ], + "angle": 0, + "content": "42. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv (2022) 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.343, + 0.785, + 0.369 + ], + "angle": 0, + "content": "43. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: NeurIPS (2020) 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.37, + 0.785, + 0.396 + ], + "angle": 0, + "content": "44. Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., Fleet, D.J.: Video diffusion models. In: ARXIV (2022) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.398, + 0.785, + 0.424 + ], + "angle": 0, + "content": "45. Hu, V.T., Chen, Y., Caron, M., Asano, Y.M., Snoek, C.G., Ommer, B.: Guided diffusion from self-supervised diffusion features. In: ARXIV (2023) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.425, + 0.785, + 0.466 + ], + "angle": 0, + "content": "46. Hu, V.T., Wu, D., Asano, Y., Mettes, P., Fernando, B., Ommer, B., Snoek, C.: Flow matching for conditional text generation in a few sampling steps pp. 380-392 (2024) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.467, + 0.785, + 0.508 + ], + "angle": 0, + "content": "47. Hu, V.T., Yin, W., Ma, P., Chen, Y., Fernando, B., Asano, Y.M., Gavves, E., Mettes, P., Ommer, B., Snoek, C.G.: Motion flow matching for human motion synthesis and editing. In: ARXIV (2023) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.509, + 0.785, + 0.536 + ], + "angle": 0, + "content": "48. Hu, V.T., Zhang, D.W., Asano, Y.M., Burghouts, G.J., Snoek, C.G.M.: Self-guided diffusion models. In: CVPR (2023) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.537, + 0.785, + 0.577 + ], + "angle": 0, + "content": "49. Hu, V.T., Zhang, D.W., Mettes, P., Tang, M., Zhao, D., Snoek, C.G.: Latent space editing in transformer-based flow matching. In: ICML 2023 Workshop, New Frontiers in Learning, Control, and Dynamical Systems (2023) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.578, + 0.785, + 0.605 + ], + "angle": 0, + "content": "50. Huang, Z., Zhou, P., Yan, S., Lin, L.: Scalelong: Towards more stable training of diffusion model via scaling network long skip connection. NeurIPS (2024) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.606, + 0.785, + 0.646 + ], + "angle": 0, + "content": "51. Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650 (2021) 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.647, + 0.785, + 0.674 + ], + "angle": 0, + "content": "52. 
Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. In: NeurIPS (2022) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.675, + 0.785, + 0.702 + ], + "angle": 0, + "content": "53. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR (2019) 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.702, + 0.785, + 0.73 + ], + "angle": 0, + "content": "54. Kingma, D., Salimans, T., Poole, B., Ho, J.: Variational diffusion models. In: NeurIPS (2021) 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.73, + 0.785, + 0.757 + ], + "angle": 0, + "content": "55. Kingma, D.P., Gao, R.: Understanding the diffusion objective as a weighted integral of ellb. arXiv (2023) 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.758, + 0.785, + 0.784 + ], + "angle": 0, + "content": "56. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer. arXiv (2020) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.785, + 0.785, + 0.812 + ], + "angle": 0, + "content": "57. Lee, S., Kim, B., Ye, J.C.: Minimizing trajectory curvature of ode-based generative models. ICML (2023) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "58. Li, K., Li, X., Wang, Y., He, Y., Wang, Y., Wang, L., Qiao, Y.: Videomamba: State space model for efficient video understanding. ECCV (2024) 3" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "18" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.147, + 0.784, + 0.175 + ], + "angle": 0, + "content": "59. Li, S., Singh, H., Grover, A.: Mamba-nd: Selective state space modeling for multidimensional data. arXiv (2024) 3, 28, 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.177, + 0.785, + 0.204 + ], + "angle": 0, + "content": "60. Li, Y., Bornschein, J., Chen, T.: Denoising autoregressive representation learning. arXiv preprint arXiv:2403.05196 (2024) 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.205, + 0.785, + 0.247 + ], + "angle": 0, + "content": "61. Liang, D., Zhou, X., Wang, X., Zhu, X., Xu, W., Zou, Z., Ye, X., Bai, X.: Pointmamba: A simple state space model for point cloud analysis. arXiv preprint arXiv:2402.10739 (2024) 3, 27, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.248, + 0.785, + 0.288 + ], + "angle": 0, + "content": "62. Lin, B., Jiang, W., Chen, P., Zhang, Y., Liu, S., Chen, Y.C.: Mtmamba: Enhancing multi-task dense scene understanding by mamba-based decoders. ECCV (2024) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.289, + 0.785, + 0.317 + ], + "angle": 0, + "content": "63. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.318, + 0.785, + 0.346 + ], + "angle": 0, + "content": "64. Lipman, Y., Chen, R.T., Ben-Hamu, H., Nickel, M., Le, M.: Flow matching for generative modeling. ICLR (2023) 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.346, + 0.785, + 0.373 + ], + "angle": 0, + "content": "65. 
Liu, G.H., Chen, T., So, O., Theodorou, E.: Deep generalized schrödinger bridge. NeurIPS (2022) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.374, + 0.785, + 0.402 + ], + "angle": 0, + "content": "66. Liu, H., Zaharia, M., Abbeel, P.: Ring attention with blockwise transformers for near-infinite context. arXiv (2023) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.402, + 0.785, + 0.444 + ], + "angle": 0, + "content": "67. Liu, J., Yang, H., Zhou, H.Y., Xi, Y., Yu, L., Yu, Y., Liang, Y., Shi, G., Zhang, S., Zheng, H., et al.: Swin-umamba: Mamba-based unet withImagenet-based pretraining. arXiv (2024) 2, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.445, + 0.785, + 0.472 + ], + "angle": 0, + "content": "68. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv (2022) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.473, + 0.785, + 0.501 + ], + "angle": 0, + "content": "69. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR (2023) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.502, + 0.785, + 0.529 + ], + "angle": 0, + "content": "70. Liu, Y., Tian, Y., Zhao, Y., Yu, H., Xie, L., Wang, Y., Ye, Q., Liu, Y.: Vmamba: Visual state space model. arXiv (2024) 2, 3, 5, 6, 7, 13, 14, 28, 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.53, + 0.785, + 0.57 + ], + "angle": 0, + "content": "71. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.571, + 0.785, + 0.598 + ], + "angle": 0, + "content": "72. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019) 11" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.6, + 0.785, + 0.628 + ], + "angle": 0, + "content": "73. Ma, J., Li, F., Wang, B.: U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv (2024) 2, 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.629, + 0.785, + 0.67 + ], + "angle": 0, + "content": "74. Ma, N., Goldstein, M., Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E., Xie, S.: Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv (2024) 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.671, + 0.785, + 0.699 + ], + "angle": 0, + "content": "75. McKenna, D.M.: Hilbert curves: Outside-in and inside-gone. Mathemaesthetics, Inc (2019) 7, 26" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.7, + 0.785, + 0.727 + ], + "angle": 0, + "content": "76. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: ECCV (2016) 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.728, + 0.785, + 0.769 + ], + "angle": 0, + "content": "77. Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., Ré, C.: S4nd: Modeling images and videos as multidimensional signals with state spaces. NeurIPS (2022) 3, 28, 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.77, + 0.744, + 0.784 + ], + "angle": 0, + "content": "78. OpenAI: Sora: Creating video from text (2024), https://openai.com/sora 1, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.785, + 0.785, + 0.812 + ], + "angle": 0, + "content": "79. 
Park, J., Kim, H.S., Ko, K., Kim, M., Kim, C.: Videomamba: Spatio-temporal selective state space model. ECCV (2024) 3, 12" + }, + { + "type": "ref_text", + "bbox": [ + 0.225, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "80. Peebles, W., Xie, S.: Scalable diffusion models with transformers. arXiv (2022) 1, 3, 5, 12, 23" + }, + { + "type": "list", + "bbox": [ + 0.225, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.128 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.769, + 0.116, + 0.786, + 0.127 + ], + "angle": 0, + "content": "19" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.147, + 0.786, + 0.203 + ], + "angle": 0, + "content": "81. Peng, B., Goldstein, D., Anthony, Q., Albalak, A., Alcaide, E., Biderman, S., Cheah, E., Ferdinan, T., Hou, H., Kazienko, P., et al.: Eagle and finch: Rwkv with matrix-valued states and dynamic recurrence. arXiv preprint arXiv:2404.05892 (2024) 22" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.204, + 0.786, + 0.231 + ], + "angle": 0, + "content": "82. Qin, Z., Yang, S., Sun, W., Shen, X., Li, D., Sun, W., Zhong, Y.: Hgrn2: Gated linear rnns with state expansion. arXiv preprint arXiv:2404.07904 (2024) 22" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.232, + 0.786, + 0.272 + ], + "angle": 0, + "content": "83. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.273, + 0.786, + 0.3 + ], + "angle": 0, + "content": "84. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) 1, 3, 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.3, + 0.786, + 0.328 + ], + "angle": 0, + "content": "85. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI (2015) 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.328, + 0.786, + 0.355 + ], + "angle": 0, + "content": "86. Ruan, J., Xiang, S.: Vm-unet: Vision mamba unet for medical image segmentation. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.356, + 0.786, + 0.383 + ], + "angle": 0, + "content": "87. Skorokhodov, I., Sotnikov, G., Elhoseiny, M.: Aligning latent and image spaces to connect the unconnectable. In: ICCV (2021) 34" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.384, + 0.786, + 0.411 + ], + "angle": 0, + "content": "88. Smith, J.T., Warrington, A., Linderman, S.W.: Simplified state space layers for sequence modeling. arXiv (2022) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.411, + 0.786, + 0.438 + ], + "angle": 0, + "content": "89. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: ICML (2015) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.439, + 0.786, + 0.466 + ], + "angle": 0, + "content": "90. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. arXiv (2019) 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.467, + 0.786, + 0.508 + ], + "angle": 0, + "content": "91. 
Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. In: ICLR (2021) 2, 4, 9, 10" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.508, + 0.786, + 0.563 + ], + "angle": 0, + "content": "92. Stein, G., Cresswell, J., Hosseinzadeh, R., Sui, Y., Ross, B., Villecloze, V., Liu, Z., Caterini, A.L., Taylor, E., Loaiza-Ganem, G.: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. NeurIPS (2023) 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.564, + 0.786, + 0.589 + ], + "angle": 0, + "content": "93. Sun, Z., Yang, Y., Yoo, S.: Sparse attention with learning to hash. In: ICLR (2021) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.591, + 0.786, + 0.632 + ], + "angle": 0, + "content": "94. Tang, R., Liu, L., Pandey, A., Jiang, Z., Yang, G., Kumar, K., Stenetorp, P., Lin, J., Ture, F.: What the daam: Interpreting stable diffusion using cross attention. arXiv (2022) 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.633, + 0.786, + 0.66 + ], + "angle": 0, + "content": "95. Tikochinski, R., Goldstein, A., Meiri, Y., Hasson, U., Reichart, R.: An incremental large language model for long text processing in the brain (2024) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.66, + 0.786, + 0.702 + ], + "angle": 0, + "content": "96. Tong, A., Malkin, N., Fatras, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., Bengio, Y.: Simulation-free schr\\'' odinger bridges via score and flow matching. arXiv (2023) 9" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.702, + 0.786, + 0.73 + ], + "angle": 0, + "content": "97. Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., Gelly, S.: Fvd: A new metric for video generation. ICLR Workshop (2019) 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.73, + 0.786, + 0.757 + ], + "angle": 0, + "content": "98. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 27" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.758, + 0.786, + 0.785 + ], + "angle": 0, + "content": "99. Wang, C., Tsepa, O., Ma, J., Wang, B.: Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.785, + 0.786, + 0.812 + ], + "angle": 0, + "content": "00. Wang, J., Gangavarapu, T., Yan, J.N., Rush, A.M.: Mambabyte: Token-free selective state space model. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.226, + 0.812, + 0.786, + 0.84 + ], + "angle": 0, + "content": "01. Wang, J., Yan, J.N., Gu, A., Rush, A.M.: Pretraining without attention. arXiv (2022) 6" + }, + { + "type": "list", + "bbox": [ + 0.226, + 0.147, + 0.786, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.218, + 0.116, + 0.236, + 0.127 + ], + "angle": 0, + "content": "20" + }, + { + "type": "header", + "bbox": [ + 0.272, + 0.115, + 0.333, + 0.127 + ], + "angle": 0, + "content": "Hu et al." + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.147, + 0.785, + 0.175 + ], + "angle": 0, + "content": "102. Wang, S., Li, Q.: Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization. 
arXiv (2023) 2, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.176, + 0.785, + 0.202 + ], + "angle": 0, + "content": "103. Wang, S., Xue, B.: State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. NeurIPS (2024) 2, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.203, + 0.785, + 0.242 + ], + "angle": 0, + "content": "104. Wang, W., Ma, S., Xu, H., Usuyama, N., Ding, J., Poon, H., Wei, F.: When an image is worth 1,024 x 1,024 words: A case study in computational pathology. arXiv (2023) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.243, + 0.785, + 0.297 + ], + "angle": 0, + "content": "105. Wang, X., Wang, S., Ding, Y., Li, Y., Wu, W., Rong, Y., Kong, W., Huang, J., Li, S., Yang, H., Wang, Z., Jiang, B., Li, C., Wang, Y., Tian, Y., Tang, J.: State space model for new-generation network alternative to transformers: A survey (2024) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.298, + 0.785, + 0.325 + ], + "angle": 0, + "content": "106. Wang, X., Kang, Z., Mu, Y.: Text-controlled motion mamba: Text-instructed temporal grounding of human motion. arXiv preprint arXiv:2404.11375 (2024) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.326, + 0.785, + 0.365 + ], + "angle": 0, + "content": "107. Wang, Z., Ma, C.: Semi-mamba-unet: Pixel-level contrastive cross-supervised visual mamba-based unet for semi-supervised medical image segmentation. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.366, + 0.785, + 0.393 + ], + "angle": 0, + "content": "108. Wang, Z., Zheng, J.Q., Zhang, Y., Cui, G., Li, L.: Mamba-unet: Unet-like pure visual mamba for medical image segmentation. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.394, + 0.785, + 0.433 + ], + "angle": 0, + "content": "109. Wu, L., Wang, D., Gong, C., Liu, X., Xiong, Y., Ranjan, R., Krishnamoorthi, R., Chandra, V., Liu, Q.: Fast point cloud generation with straight flows. In: CVPR (2023) 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.434, + 0.785, + 0.46 + ], + "angle": 0, + "content": "110. Xia, W., Yang, Y., Xue, J.H., Wu, B.: Tedigan: Text-guided diverse face image generation and manipulation. In: CVPR (2021) 10, 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.461, + 0.785, + 0.487 + ], + "angle": 0, + "content": "111. Xing, Z., Ye, T., Yang, Y., Liu, G., Zhu, L.: Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation. arXiv (2024) 3, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.488, + 0.785, + 0.513 + ], + "angle": 0, + "content": "112. Yan, J.N., Gu, J., Rush, A.M.: Diffusion models without attention. arXiv (2023) 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.515, + 0.785, + 0.541 + ], + "angle": 0, + "content": "113. Yang, S., Wang, B., Shen, Y., Panda, R., Kim, Y.: Gated linear attention transformers with hardware-efficient training. ICML (2024) 22" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.542, + 0.785, + 0.581 + ], + "angle": 0, + "content": "114. Yang, S., Zhang, Y.: Fla: A triton-based library for hardware-efficient implementations of linear attention mechanism (Jan 2024), https://github.com/sustcsonglin/flashlinear-attention_22" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.582, + 0.785, + 0.609 + ], + "angle": 0, + "content": "115. Yang, Y., Xing, Z., Zhu, L.: Vivim: a video vision mamba for medical video object segmentation. 
arXiv (2024) 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.61, + 0.785, + 0.65 + ], + "angle": 0, + "content": "116. Yu, A., Nigmatov, A., Morozov, D., Mahoney, M.W., Erichson, N.B.: Robustifying state-space models for long sequences via approximate diagonalization. arXiv (2023) 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.651, + 0.785, + 0.677 + ], + "angle": 0, + "content": "117. Yu, S., Sohn, K., Kim, S., Shin, J.: Video probabilistic diffusion models in projected latent space. In: CVPR (2023) 30" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.678, + 0.785, + 0.704 + ], + "angle": 0, + "content": "118. Zhang, T., Li, X., Yuan, H., Ji, S., Yan, S.: Point could mamba: Point cloud learning via state space model. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.705, + 0.785, + 0.731 + ], + "angle": 0, + "content": "119. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: CVPR (2018) 29" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.732, + 0.785, + 0.771 + ], + "angle": 0, + "content": "120. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. ECCV (2024) 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.772, + 0.785, + 0.812 + ], + "angle": 0, + "content": "121. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. arXiv (2024) 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.218, + 0.813, + 0.785, + 0.84 + ], + "angle": 0, + "content": "122. Zheng, Z., Wu, C.: U-shaped vision mamba for single image dehazing. arXiv (2024) 3, 28" + }, + { + "type": "list", + "bbox": [ + 0.218, + 0.147, + 0.785, + 0.84 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "header", + "bbox": [ + 0.686, + 0.115, + 0.732, + 0.129 + ], + "angle": 0, + "content": "ZigMa" + }, + { + "type": "page_number", + "bbox": [ + 0.768, + 0.116, + 0.784, + 0.127 + ], + "angle": 0, + "content": "21" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.147, + 0.788, + 0.189 + ], + "angle": 0, + "content": "123. Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., Wang, X.: Vision mamba: Efficient visual representation learning with bidirectional state space model. ICML (2024) 2, 3, 5, 7, 11, 13, 14, 28" + }, + { + "type": "ref_text", + "bbox": [ + 0.217, + 0.19, + 0.788, + 0.215 + ], + "angle": 0, + "content": "124. zhuzilin: Ring flash attention. 
https://github.com/zhuzilin/ring-flash-attention (2024) 2" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.147, + 0.788, + 0.215 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_origin.pdf b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..070c73e3aa8a5cfab5ed3b556bbeb0ac2b50981a --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/ecacef5c-68d0-49cd-8f29-c5c83b5aa09b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c812f06fb565c36e5ca36318c9948cf1e871b9c499edf4b4c1b71740d275321d +size 3847742 diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/full.md b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/full.md new file mode 100644 index 0000000000000000000000000000000000000000..175433acdf5c0b5274f2907d8022056fb465de09 --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/full.md @@ -0,0 +1,405 @@ +Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, and Björn Ommer + +CompVis @ LMU Munich, MCML https://compvis.github.io/zigma/ + +Abstract The diffusion model has long been plagued by scalability and quadratic complexity issues, especially within transformer-based structures. In this study, we aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation. Firstly, we identify a critical oversight in most current Mamba-based vision methods, namely the lack of consideration for spatial continuity in the scan scheme of Mamba. Secondly, building upon this insight, we introduce Zigzag Mamba, a simple, plug-and-play, minimal-parameter burden, DiT style solution, which outperforms Mamba-based baselines and demonstrates improved speed and memory utilization compared to transformer-based baselines, also this heterogeneous layerwise scan enables zero memory and speed burden when we consider more scan paths. Lastly, we integrate Zigzag Mamba with the Stochastic Interpolant framework to investigate the scalability of the model on large-resolution visual datasets, such as FacesHQ $1024 \times 1024$ and UCF101, MultiModal-CelebA-HQ, and MS COCO $256 \times 256$ . + +Keywords: Diffusion Model $\cdot$ State-Space Model $\cdot$ Stochastic Interpolants + +# 1 Introduction + +Diffusion models have demonstrated significant advancements across various applications, including image processing [45, 48, 84], video analysis [44], point cloud processing [109], representation learning [30] and human pose estimation [32]. Many of these models are built upon Latent Diffusion Models (LDM) [84], which are typically based on the UNet backbone. However, scalability remains a significant challenge in LDMs [50]. Recently, transformer-based structures have gained popularity due to their scalability [9, 80] and effectiveness in multi-modal training [10]. Notably, the transformer-based structure DiT [80] has even contributed to enhancing the high-fidelity video generation model SORA [78] by OpenAI. 
Despite efforts to alleviate the quadratic complexity of the attention mechanism through techniques such as windowing [71], sliding [13], sparsification [19, 56], + +- hashing [20, 93], Ring Attention [15, 66], Flash Attention [23] or a combination of them [8, 124], it remains a bottleneck for diffusion models. + +On the other hand, State-Space Models [34, 35, 39] have demonstrated significant potential for long sequence modeling, rivaling transformer-based methods. Their biological similarity [95] and efficient memory state also advocate for the use of the State-Space model over the transformer. Several methods [29, 33, 35, 88] have been proposed to enhance the robustness [116], scalability [33], and efficiency [35, 36] of State-Space Models. Among these, a method called Mamba [33] aims to alleviate these issues through work-efficient parallel scanning and other data-dependent innovations. However, the advantage of Mamba lies in 1D sequence modeling, and extending it to 2D images is a challenging question. Previous works [70, 123] have proposed flattening 2D tokens directly by computer hierarchy such as row-and-column-major order, but this approach neglects Spatial Continuity, as shown in Figure 1. Other works [67, 73] consider various directions in a single Mamba block, but this introduces additional parameters and GPU memory burden. In this paper, we aim to emphasize the importance of Spatial Continuity in Mamba and propose several intuitive and simple methods to enable the application of Mamba blocks to 2D images by incorporating continuity-based inductive biases in images. We also generalize these methods to 3D with spatial-temporal factorization on 3D sequence. + +In the end, Stochastic Interpolant [3] provides a more generalized framework that can uniform various generative models including, Normalizing Flow [17], diffusion model [43,89,91], Flow matching [4,64,69], and Schrödinger Bridge [65]. Previously, some works [74] explore the Stochastic Interpolant on relatively small resolutions, e.g., $256 \times 256$ , $512 \times 512$ . In this work, we aim to explore it in further more complex scenarios e.g., $1024 \times 1024$ resolution and even in videos. + +In summary, our contributions are as follows: Firstly, we identify the critical issue of Spatial Continuity in generalizing the Mamba block from 1D sequence modeling to 2D image and 3D video modeling. Building on this insight, we propose a simple, plug-and-play, zero-parameter heterogeneous layerwise scan paradigm named Zigzag Mamba (ZigMa) that leverages spatial continuity to maximally incorporate the inductive bias from visual data. Secondly, we extend the methodology from 2D to 3D by factorizing the spatial and temporal sequences to optimize performance. Secondly, we provide comprehensive analysis surrounding the Mamba block within the regime of diffusion models. Lastly, we demonstrate that our designed Zigzag Mamba outperforms related Mamba-based baselines, representing the first exploration of Stochastic Interpolants on large-scale image data $(1024\times 1024)$ and videos. + +# 2 Related Works + +Mamba. Several works [102, 103, 103] have demonstrated that the State-Space Model possesses universal approximation ability under certain conditions. Mamba, as a new State-Space Model, has superior potential for modeling long sequences efficiently, which has been explored in various fields such as medical imag- + +![](images/cac781c348d71fd43da8f1e4c58e7d32975e0218880efb91f784ab995d41237f.jpg) +Figure 1: Motivation. 
Our Zigzag Mamba method improves the network's position-awareness by arranging and rearranging the scan path of Mamba in a heuristic manner. + +ing [73, 86, 108, 111], video [58, 79], image restoration [38, 122], graphs [12], NLP word byte [100], tabular data [2], point clouds [61], human motion [106, 120], multi-task [62] and image generation [27]. Among them, the most related to us are VisionMamba [70, 123], S4ND [77] and Mamba-ND [59]. VisionMamba [70, 123] uses a bidirectional SSM in discriminative tasks which incurs a high computational cost. Our method applies a simple alternative mamba diffusion in generative models. S4ND [77] introduces local convolution into Mamba's reasoning process, moving beyond the use of only 1D data. Mamba-ND [59] takes multi-dimensionality into account in discriminative tasks, making use of various scans within a single block. In contrast, our focus is on distributing scan complexity across every layer of the network, thus maximizing the incorporation of inductive bias from visual data with zero parameter burden. Scan curve is an important direction in SSM, PointMamba [61] is a representative work that employs SSM with space curves (e.g., Hilbert) for point cloud analysis, achieving remarkable performance. In contrast with them, our preliminary results show that the Hilbert curve doesn't work well with our method (see Appendix), while our method can be regarded as the simplest Peano curve. For more information related to Mamba's work, please refer to the survey [105]. + +Backbones in Diffusion Models. Diffusion models primarily employ UNet-based [43, 84] and ViT-based [9, 80] backbones. While UNet is known for high memory demands [84], ViT benefits from scalability [18, 24] and multi-modal learning [10]. However, ViT's quadratic complexity limits visual token processing, prompting studies towards mitigating this issue [13, 23, 104]. Our work, inspired by Mamba [33], explores an SSM-based model as a generic diffusion backbone, retaining ViT's modality-agnostic and sequential modeling advantages. + +Concurrently, DiffSSM [112] concentrates on unconditional and class conditioning within the S4 model [35]. DIS [27] mainly explores the state-space model on a relatively small resolution, which is not the exact focus of our work. Our work significantly differs from theirs as it primarily focuses on the backbone design using the Mamba block and extends it to text conditioning. Furthermore, we apply our method to more complex visual data. + +SDE and ODE in Diffusion models. The realm of Score-based Generative Models encompasses significant contributions from foundational works such as Score Matching with Langevin Dynamics (SMLD) by Song et al. [90], and the advent of Diffusion Models with Denoising Score Matching (DDPMs) proposed by Ho et al. [43]. These methodologies operate within the framework of Stochastic Differential Equations (SDEs), a concept further refined in the research of Song et al. [91]. Recent research strides, as exemplified by Karras et al. [52] and Lee et al. [57], have showcased the efficacy of employing Ordinary Differential Equation (ODE) samplers for diffusion SDEs, offering significant reductions in sampling costs compared to traditional approaches that entail discretizing diffusion SDEs. 
Furthermore, within the domain of Flow Matching [64] and Rectified Flow [68], both SMLD and DDPMs emerge as specialized instances under distinct paths of the Probability Flow ODE framework [91], with broad applications in vision [22,28,49], depth [37], human motion [47], even language [46]. These models typically utilize velocity field parameterizations employing the linear interpolant, a concept that finds broader applications in the Stochastic Interpolant framework [3], with subsequent generalizations extending to manifold settings [14]. The SiT model [74] scrutinizes the interplay between interpolation methods in both sampling and training contexts, albeit in the context of smaller resolutions such as $512 \times 512$ . Our research endeavors to extend these insights to a larger scale, focusing on the generalization capabilities for 2D images of $1024 \times 1024$ and 3D video data. + +# 3 Method + +In this section, we begin by providing background information on State-Space Models [34,35,39], with a particular focus on a special case known as Mamba [33]. We then highlight the critical issue of Spatial Continuity within the Mamba framework, and based on this insight, we propose the Zigzag Mamba. This enhancement aims to improve the efficiency of 2D data modeling by incorporating the continuity inductive bias inherent in 2D data. Furthermore, we design a basic cross-attention block upon Mamba block to achieve text-conditioning. Subsequently, we suggest extending this approach to 3D video data by factorizing the model into spatial and temporal dimensions, thereby facilitating the modeling process. Finally, we introduce the theoretical aspects of stochastic interpolation for training and sampling, which underpin our network architecture. + +# 3.1 Background: State-Space Models + +State Space Models (SSMs) [34, 35, 39] have been proven to handle long-range dependencies theoretically and empirically [36] with linear scaling w.r.t sequence length. In their general form, a linear state space model can be written as follows: + +$$ +x ^ {\prime} (t) = \mathbf {A} (t) x (t) + \mathbf {B} (t) u (t) +$$ + +$$ +y (t) = \mathbf {C} (t) x (t) + \mathbf {D} (t) u (t), +$$ + +mapping a 1-D input sequence $u(t) \in \mathbb{R}$ to a 1-D output sequence $y(t) \in \mathbb{R}$ through an implicit N-D latent state sequence $x(t) \in \mathbb{R}^n$ . Concretely, deep SSMs seek to use stacks of this simple model in a neural sequence modeling architecture, where the parameters $\mathbf{A}, \mathbf{B}, \mathbf{C}$ and $\mathbf{D}$ for each layer can be learned via gradient descent. + +![](images/95129ba59dc1054d299e18bed2f1b04a78fcf32e35ea8a48eb4214547258b996.jpg) +Figure 2: ZigMa. Our backbone is structured in L layers, mirroring the style of DiT [80]. We use the single-scan Mamba block as the primary reasoning module across different patches. To ensure the network is positionally aware, we've designed an arrange-rearrange scheme based on the single-scan Mamba. Different layers follow pairs of unique rearrange operation $\Omega$ and reverse rearrange $\bar{\Omega}$ , optimizing the position-awareness of the method. + +Recently, Mamba [33] largely improved the flexibility of SSMs in Language Modelling by relaxing the time-invariance constraint on SSM parameters, while maintaining computational efficiency. Several studies [70, 123] have been conducted to adapt the use of Mamba from unidimensional language data to multidimensional visual data. 
While most of these studies try to duplicate the A to facilitate the new (reversed) direction, this approach can lead to additional parameters and an increased memory burden. In this paper, we focus on exploring the scanning scheme of Mamba in diffusion models to efficiently maximize the use of inductive-bias from multi-dimensional visual data with zero parameter and memory burden. + +# 3.2 Diffusion Backbone: Zigzag Mamba + +DiT-Style Network. We opt to use the framework of DiT by AdaLN [80] rather than the skip-layer focused U-ViT structure [9], as DiT has been validated as a + +scalable structure in literature [10, 18, 78]. Additionally, the Hourglass structure with downsampling [76, 85] requires selecting the depth and width based on the complexity of the dataset and task. This requirement limits the flexibility of the solution. Considering the aforementioned points, it informs our Mamba network design depicted in Figure 4. The core component of this design is the Zigzag Scanning, which will be explained in the following paragraph. + +Zigzag Scanning in Mamba. Previous studies [101, 112] have used bidirectional scanning within the SSM framework. This approach has been expanded to include additional scanning directions [67, 70, 115] to account for the characteristics of 2D image data. These approaches unfold image patches along four directions, resulting in four distinct sequences. Each of these sequences is subsequently processed together through every SSM. However, since each direction may have different SSM parameters (A, B, C, and D), scaling up the number of directions could potentially lead to memory issues. In this work, we investigate the potential for amortizing the complexity of the Mamba into each layer of the network. + +Our approach centers around the concept of token rearrangement before feeding them into the Forward Scan block. For a given input feature $\mathbf{z}_i$ from layer $i$ , the output feature $\mathbf{z}_{i + 1}$ of the Forward Scan block after the rearrangement can be expressed as: + +$$ +\mathbf {z} _ {\Omega_ {i}} = \operatorname {a r r a n g e} \left(\mathbf {z} _ {i}, \Omega_ {i}\right), \tag {1} +$$ + +$$ +\bar {\mathbf {z}} _ {\Omega_ {i}} = \operatorname {s c a n} \left(\mathbf {z} _ {\Omega_ {i}}\right), \tag {2} +$$ + +$$ +\mathbf {z} _ {i + 1} = \operatorname {a r r a n g e} \left(\bar {\mathbf {z}} _ {\Omega_ {i}}, \bar {\Omega} _ {i}\right), \tag {3} +$$ + +$\varOmega_{i}$ represents the 1D permutation of layer $i$ , which rearranges the order of the patch tokens by $\varOmega_{i}$ , and $\varOmega_{i}$ and $\overline{\varOmega}_{i}$ represent the reverse operation. This ensures that both $\mathbf{z}_i$ and $\mathbf{z}_{i + 1}$ maintain the sample order of the original image tokens. + +![](images/68ff9f5378c491864ca7ac38a50b0592af57a87b66ddb62ea438947a29b71cf3.jpg) +(a) sweep-scan + +![](images/a8358b08a142e0575512c0d7f81c41f7bbbbdbc25746dbc00321c40215e479d8.jpg) +(b) zigzag-scan + +![](images/b7bc70bf410c52cc5a982594131aa744e0549d0297aa8ce0f55f0e01de0f46b7.jpg) +Figure 3: The 2D Image Scan. Our mamba scan design is based on the sweep-scan scheme shown in subfigure (a). From this, we developed a zigzag-scan scheme displayed in subfigure (b) to enhance the continuity of the patches, thereby maximizing the potential of the Mamba block. Since there are several possible arrangements for these continuous scans, we have listed the eight most common zigzag-scans in subfigure (c). 
+ +![](images/7c2665c6dc21713e769f8702090a1137b101d214de89f93f081122e32e9df29e.jpg) +(c) zigzag-scan with 8 schemes + +![](images/3ed27162bc2b6fd64e0502efd46623912b5fe90858eac34dd766c386006161c0.jpg) + +![](images/40ca7ad38f7491b0c7ed4dbe70d27303b4c28ebad0119dceed97432837c25ae0.jpg) + +Now we explore the design of the $\Omega_{i}$ operation, considering additional inductive biases from 2D images. We propose one key properties: Spatial Con + +tinuity. Regarding Spatial Continuity, current innovations of Mamba in images [67, 70, 123] often squeeze 2D patch tokens directly following the computer hierarchy, such as row-and-column-major order. However, this approach may not be optimal for incorporating the inductive bias with neighboring tokens, as illustrated in Figure 3. To address this, we introduce a novel scanning scheme designed to maintain spatial continuity during the scan process. Additionally, we consider space-filling, which entails that for a patch of size $N \times N$ , the length of the 1D continuous scanning scheme should be $N^2$ . This helps to efficiently incorporate tokens to maximize the potential of long sequence modeling within the Mamba block. + +Heterogeneous Layerwise Scan. To achieve the aforementioned property, we heuristically design eight possible space-filling continuous schemes $^1$ , denoted as $\mathbf{S}_j$ (where $j \in [0,7]$ ), as illustrated in Figure 3. While there may be other conceivable schemes, for simplicity, we limit our usage to these eight. Consequently, the scheme for each layer can be represented as $\varOmega_{i} = \mathbf{S}_{\{i\% 8\}}$ , where $\%$ denotes the modulo operator. + +![](images/992b39739328f0a020ff69bdedff2e60e393a6bf1ae30d78bfdf9f6dbd2ecb16.jpg) +Figure 4: The Detail of our Zigzag Mamba block. The detail of Mamba Scan is shown in Figure 2. The condition can include a timestep and a text prompt. These are fed into an MLP, which separately modulates the Mamba scan for long sequence modeling and cross-attention for multi-modal reasoning. + +Deploying text-condition on Zigzag Mamba. While Mamba offers the advantage of efficient long sequence modeling, it does so at the expense of the attention mechanism. As a result, there has been limited exploration into incorporating text-conditioning in Mamba-based diffusion models. To address this + +gap, we propose a straightforward cross-attention block with skip layers built upon the Mamba block, as illustrated in Figure 4. This design not only enables long sequence modeling but also facilitates multi-token conditioning, such as text-conditioning. Furthermore, it has the potential to provide interpretability [16, 42, 94], as cross-attention has been utilized in diffusion models. + +Generalizing to 3D videos by factorizing spatial and temporal information. In previous sections, our focus has been on the spatial 2D Mamba, where we designed several spatially continuous, space-filling 2D scanning schemes. In this section, we aim to leverage this experience to aid in designing corresponding mechanisms for 3D video processing. We commence our design process by extrapolating from the conventional directional Mamba, as depicted in Figure 5. Given a video feature input $\mathbf{z} \in \mathbb{R}^{B \times T \times C \times W \times H}$ , we propose three variants of the Video Mamba Block for facilitating 3D video generation. + +(a) Sweep-scan: In this approach, we directly flatten the 3D feature $\mathbf{z}$ without considering spatial or temporal continuity. 
It's worth noting that the flattening process follows the computer hierarchy order, meaning that no continuity is preserved in the flattened representation. +(b) 3D Zigzag: Compared with the formulation of the 2D zigzag in previous subsections, we follow the similar design to generalize it to 3D Zigzag to keep the continuity in 2D and 3D simultaneously. Potentially, the scheme has much more complexity. We heuristically list 8 schemes as well. However, we empirically find that this scheme will lead to suboptimal optimization. +(c) Factorized 3D Zigzag = 2D Zigzag + 1D Sweep: To address the suboptimal optimization issue, we propose to factorize the spatial and temporal correlations as separate Mamba blocks. The order of their application can be adjusted as desired, for example, "sstt" or "ststst", where "s" represents the spatial-zigzag Mamba and "t" represents the temporal-zigzag Mamba. For a 1D temporal sweep, we simply opt for forward and backward scanning, since there is only one dimension on the time axis. + +Computation Analysis. For a visual sequence $\mathbf{T} \in \mathbb{R}^{1 \times M \times D}$ , the computation complexity of global self-attention and $k$ -direction mamba and our zigzag mamba are as follows: + +$$ +\zeta (\text {s e l f - a t t e n t i o n}) = 4 \mathrm {M D} ^ {2} + 2 \mathrm {M} ^ {2} \mathrm {D}, \tag {4} +$$ + +$$ +\zeta (\mathrm {k} - \text {m a m b a}) = k \times [ 3 \mathrm {M} (2 \mathrm {D}) \mathrm {N} + \mathrm {M} (2 \mathrm {D}) \mathrm {N} ^ {2} ], \tag {5} +$$ + +$$ +\zeta (\text {z i g z a g}) = 3 \mathrm {M} (2 \mathrm {D}) \mathrm {N} + \mathrm {M} (2 \mathrm {D}) \mathrm {N} ^ {2}, \tag {6} +$$ + +where self-attention exhibits quadratic complexity with respect to sequence length M, while Mamba exhibits linear complexity (N is a fixed parameter, set to 16 by default). Here, $k$ represents the number of scan directions in a single Mamba block. Therefore, $k$ -mamba and zigzag share linear complexity with respect to self-attention. Moreover, our zigzag method can eliminate the $k$ series, further reducing the overall complexity. + +![](images/345ced1a59da24d163dcef484064ca2ed5ecf182938801a4b72ab06f35b5075a.jpg) +Figure 5: The 3D Video Scan. (a) We illustrate the bidirectional Mamba with the sweep scan, where the spatial and temporal information is treated as a set of tokens with a computer-hierarchy order. (b) For the 3D zigzag-scan, we aim to maximize the potential of Mamba by employing a spatial continuous scan scheme and adopting the optimal zigzag scan solution, as depicted in Figure 3. (c) We further separate the reasoning between spatial and temporal information, resulting in a factorized combination of 2D spatial scan $(\varOmega)$ plus a 1D temporal scan $(\varOmega^{\prime})$ scheme. + +Upon completing the design of the Zigzag Mamba network for improved visual inductive-bias integration, we proceed to combine it with a new diffusion framework, as illustrated below. + +# 3.3 Diffusion Framework: Stochastic Interpolant + +Sampling based on vector $\mathbf{v}$ and score $\mathbf{s}$ . 
Following [3, 96], the time-dependent probability distribution $p_t(\mathbf{x})$ of $\mathbf{x}_t$ also coincides with the distribution of the reverse-time SDE [6]: + +$$ +d \mathbf {X} _ {t} = \mathbf {v} \left(\mathbf {X} _ {t}, t\right) d t + \frac {1}{2} w _ {t} \mathbf {s} \left(\mathbf {X} _ {t}, t\right) d t + \sqrt {w _ {t}} d \bar {\mathbf {W}} _ {t}, \tag {7} +$$ + +where $\bar{\mathbf{W}}_t$ is a reverse-time Wiener process, $w_{t} > 0$ is an arbitrary time-dependent diffusion coefficient, $\mathbf{s}(\mathbf{x},t) = \nabla \log p_t(\mathbf{x})$ is the score, and $\mathbf{v}(\mathbf{x},t)$ is given by the conditional expectation + +$$ +\begin{array}{l} \mathbf {v} (\mathbf {x}, t) = \mathbb {E} [ \dot {\mathbf {x}} _ {t} | \mathbf {x} _ {t} = \mathbf {x} ], \\ \begin{array}{l} \underline {{- [ - t ] = - t}} \\ = \dot {\alpha} _ {t} \mathbb {E} \left[ \mathbf {x} _ {*} \mid \mathbf {x} _ {t} = \mathbf {x} \right] + \dot {\sigma} _ {t} \mathbb {E} \left[ \boldsymbol {\varepsilon} \mid \mathbf {x} _ {t} = \mathbf {x} \right], \end{array} \tag {8} \\ \end{array} +$$ + +where $\alpha_{t}$ is a decreasing function of $t$ , and $\sigma_{t}$ is an increasing function of $t$ . Here, $\dot{\alpha}_{t}$ and $\dot{\sigma}_{t}$ denote the time derivatives of $\alpha_{t}$ and $\sigma_{t}$ , respectively. + +As long as we can estimate the velocity $\mathbf{v}(\mathbf{x},t)$ and/or score $\mathbf{s}(\mathbf{x},t)$ fields, we can utilize it for the sampling process either by probability flow ODE [91] or the reverse-time SDE (7). Solving the reverse SDE (7) backwards in time from $\mathbf{X}_T = \varepsilon \sim \mathcal{N}(0,\mathbf{I})$ enables generating samples from the approximated data distribution $p_0(\mathbf{x})\sim p(\mathbf{x})$ . During sampling, we can perform direct sampling + +from either ODE or SDEs to balance between sampling speed and fidelity. If we choose to conduct ODE sampling, we can achieve this simply by setting the noise term $\mathbf{s}$ to zero. + +In [3], it shows that one of the two quantities $\mathbf{s}_{\theta}(\mathbf{x},t)$ and $\mathbf{v}_{\theta}(\mathbf{x},t)$ needs to be estimated in practice. This follows directly from the constraint + +$$ +\begin{array}{l} \mathbf {x} = \mathbb {E} \left[ \mathbf {x} _ {t} \mid \mathbf {x} _ {t} = \mathbf {x} \right], \tag {9} \\ = \alpha_ {t} \mathbb {E} [ \mathbf {x} _ {*} | \mathbf {x} _ {t} = \mathbf {x} ] + \sigma_ {t} \mathbb {E} [ \varepsilon | \mathbf {x} _ {t} = \mathbf {x} ], \\ \end{array} +$$ + +which can be used to re-express the score $\mathbf{s}(\mathbf{x},t)$ in terms of the velocity $\mathbf{v}(\mathbf{x},t)$ as + +$$ +\mathbf {s} (\mathbf {x}, t) = \sigma_ {t} ^ {- 1} \frac {\alpha_ {t} \mathbf {v} (\mathbf {x} , t) - \dot {\alpha} _ {t} \mathbf {x}}{\dot {\alpha} _ {t} \sigma_ {t} - \alpha_ {t} \dot {\sigma} _ {t}}. \tag {10} +$$ + +Thus, $\mathbf{v}(\mathbf{x},t)$ and $\mathbf{s}(\mathbf{x},t)$ can be mutually conversed. We illustrate how to compute them in the following. + +Estimating the score $\mathbf{s}$ and the velocity $\mathbf{v}$ . It has been shown in score-based diffusion models [91] that the score can be estimated parametrically as $\mathbf{s}_{\theta}(\mathbf{x},t)$ using the loss + +$$ +\mathcal {L} _ {\mathrm {s}} (\theta) = \int_ {0} ^ {T} \mathbb {E} [ \| \sigma_ {t} \mathbf {s} _ {\theta} (\mathbf {x} _ {t}, t) + \varepsilon \| ^ {2} ] \mathrm {d} t. 
\tag {11} +$$ + +Similarly, the velocity $\mathbf{v}(\mathbf{x},t)$ can be estimated parametrically as $\mathbf{v}_{\theta}(\mathbf{x},t)$ via the loss + +$$ +\mathcal {L} _ {\mathrm {v}} (\theta) = \int_ {0} ^ {T} \mathbb {E} [ \| \mathbf {v} _ {\theta} (\mathbf {x} _ {t}, t) - \dot {\alpha} _ {t} \mathbf {x} _ {*} - \dot {\sigma} _ {t} \boldsymbol {\varepsilon} \| ^ {2} ] \mathrm {d} t, \tag {12} +$$ + +where $\theta$ represents the Zigzag Mamba network that we described in the previous section, we adopt the linear path for training, due to its simplicity and relatively straight trajectory: + +$$ +\alpha_ {t} = 1 - t, \quad \sigma_ {t} = t. \tag {13} +$$ + +We note that any time-dependent weight can be included under the integrals in both (11) and (12). These weight factors play a crucial role in score-based models when $T$ becomes large [54, 55]. Thus, they provide a general form that considers both the time-dependent weight and the stochasticity. + +# 4 Experiment + +# 4.1 Dataset and Training Detail + +Image Dataset. To explore the scalability in high resolution, we conduct experiments on the FacesHQ $1024 \times 1024$ . The general dataset that we use for training and ablations is FacesHQ, a compilation of CelebA-HQ [110] and FFHQ [53], as employed in previous work such as [26, 28]. + +Table 1: Ablation of Scanning Scheme Number. We evaluate various zigzag scanning schemes. Starting from a simple "Sweep" baseline, we consistently observe improvements as more schemes are implemented. + +
| Method | FID5k ↓ (MultiModal-CelebA-256) | FDD5k ↓ (MultiModal-CelebA-256) | KID5k ↓ (MultiModal-CelebA-256) | FID5k ↓ (MultiModal-CelebA-512) | FDD5k ↓ (MultiModal-CelebA-512) | KID5k ↓ (MultiModal-CelebA-512) |
|---|---|---|---|---|---|---|
| Sweep | 158.1 | 75.9 | 0.169 | 162.3 | 103.2 | 0.203 |
| Zigzag-1 | 65.7 | 47.8 | 0.051 | 121.0 | 78.0 | 0.113 |
| Zigzag-2 | 54.7 | 45.5 | 0.041 | 96.0 | 59.5 | 0.079 |
| Zigzag-8 | 45.5 | 26.4 | 0.011 | 34.9 | 29.5 | 0.023 |
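As a concrete illustration of the arrange–scan–rearrange scheme of Eqs. (1)–(3) and the Zigzag-k schemes ablated above, the following is a minimal PyTorch sketch of how a spatially continuous zigzag ordering and its inverse permutation can be constructed for an N×N patch grid. The eight `variant` symmetries are an illustrative stand-in for the paper's S_0–S_7, and the Mamba layers are replaced by identity placeholders.

```python
import torch

def zigzag_path(n: int, variant: int) -> torch.Tensor:
    """Length n*n permutation visiting an n x n patch grid along a spatially
    continuous zigzag (boustrophedon) curve. `variant` in [0, 8) picks one of
    eight grid symmetries of the base curve (flip rows / flip cols / transpose).
    Illustrative construction only, not necessarily the paper's exact S_0..S_7."""
    coords = []
    for r in range(n):
        cs = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        coords += [(r, c) for c in cs]                     # base curve
    def to_index(rc):
        r, c = rc
        if variant & 1: r = n - 1 - r                      # start from the bottom
        if variant & 2: c = n - 1 - c                      # start from the right
        if variant & 4: r, c = c, r                        # column-major sweep
        return r * n + c
    return torch.tensor([to_index(rc) for rc in coords])

def inverse_permutation(perm: torch.Tensor) -> torch.Tensor:
    inv = torch.empty_like(perm)
    inv[perm] = torch.arange(perm.numel())
    return inv

# Heterogeneous layerwise scan (Eqs. 1-3): layer i uses scheme S_{i % 8}.
B, N, D = 2, 8, 64
z = torch.randn(B, N * N, D)                               # patch tokens, raster order
blocks = [torch.nn.Identity() for _ in range(4)]           # stand-ins for Mamba layers
for i, block in enumerate(blocks):
    omega = zigzag_path(N, i % 8)
    z = block(z[:, omega])[:, inverse_permutation(omega)]  # arrange -> scan -> restore
```

Because each grid symmetry preserves adjacency, every variant remains spatially continuous and space-filling, which is the property the ablation above varies from Sweep to Zigzag-8.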
+ +Video Dataset. UCF101 dataset consists of 13,320 video clips, which are classified into 101 categories. The total length of these video clips is over 27 hours. All these videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of $320 \times 240$ . We randomly sample continuous 16 frames and resize the frames to $256 \times 256$ . + +Training Details. We uniformly use AdamW [72] optimizer with $1e - 4$ learning rate. For extracting latent features, we employ off-the-shelf VAE encoders. To mitigate computational costs, we adopted a mixed-precision training approach. Additionally, we applied gradient clipping with a threshold of 2.0 and a weight decay of 0.01 to prevent NaN occurrences during Mamba training. Most of our experiments were conducted on 4 A100 GPUs, with scalability exploration extended to 16 and 32 A100 GPUs. For sampling, we adopt the ODE sampling for speed consideration. For further details, please refer to the Appendix 8.8. + +# 4.2 Ablation Study + +Table 2: Ablation about Position Embedding (PE) on unconditional CelebA dataset $(256^{2})$ . To better abate PE and eliminate the conditional signal's influence, we use an unconditional dataset. + +
| FID/FDD ↓ | No PE | Cosine PE | Learnable PE |
|---|---|---|---|
| VisionMamba [123] | 21.33/21.00 | 18.47/19.90 | 16.38/18.20 |
| ZigMa | 14.27/18.00 | 14.04/17.91 | 13.32/17.40 |
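For concreteness, a minimal sketch of the three position-embedding settings compared in Table 2. The 1D sinusoidal table over the flattened token index and the zero-initialized learnable table are assumptions about the exact variants used; the `PatchEmbedding` wrapper below is hypothetical.

```python
import math
import torch
import torch.nn as nn

def cosine_pe(num_tokens: int, dim: int) -> torch.Tensor:
    """Fixed 1D sinusoidal ("cosine") table over the flattened token index
    (assumed variant; `dim` assumed even)."""
    pos = torch.arange(num_tokens).unsqueeze(1)                        # (L, 1)
    freq = torch.exp(-math.log(10000.0) * torch.arange(0, dim, 2) / dim)
    pe = torch.zeros(num_tokens, dim)
    pe[:, 0::2] = torch.sin(pos * freq)
    pe[:, 1::2] = torch.cos(pos * freq)
    return pe                                                          # (L, D)

class PatchEmbedding(nn.Module):
    """Hypothetical wrapper: adds no PE, a fixed cosine PE, or a learnable PE
    to the patch tokens before the Mamba layers."""
    def __init__(self, num_tokens, dim, mode="learnable"):
        super().__init__()
        self.mode = mode
        if mode == "cosine":
            self.register_buffer("pe", cosine_pe(num_tokens, dim))
        elif mode == "learnable":
            self.pe = nn.Parameter(torch.zeros(num_tokens, dim))
    def forward(self, tokens):                                         # (B, L, D)
        return tokens if self.mode == "none" else tokens + self.pe
```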
+ +Scan Scheme Ablation. We provide several important findings based on our ablation studies on MultiModal-CelebA dataset in various resolutions in Table 1. Firstly, switching the scanning scheme from sweep to zigzag led to some gains. Secondly, as we increased the zigzag scheme from 1 to 8, we saw consistent gains. This indicates that alternating the scanning scheme in various blocks can be beneficial. Finally, the relative gain between Zigzag-1 and Zigzag-8 is more prominent at higher resolutions ( $512 \times 512$ , or longer sequence token number) + +![](images/53bc43796d12c2f7e7e06e43eb902b0f63f951f7460839d0059c4e0db032d056.jpg) +(a) FPS v.s. Patch Number. + +![](images/6d89946986831453e9d1a17ba75193683e812b5ef23bc2269e03b91b2d2a4f77.jpg) +(b) GPU Memory v.s. Patch Number. + +![](images/c8c1fbb9e3be3d50e53a759e858d96d168a1a691796565422b0a1d96a507a810.jpg) +(c) Order Receptive Field v.s. GPU Memory. + +![](images/748520714e1a920464774140e40b899cb47a9374c53f83117e91600e5bb580e3.jpg) +(d) Order Receptive Field v.s. FPS. +Figure 6: (a, b).GPU Memory usage and FPS between our method and transformer-based methods(U-VIT [9] and DiT [80]). (c). Order Receptive Field and GPU memory (d). Order Receptive Field and FPS. Order Receptive Field denotes how many scan paths we consider in our network design. + +compared to lower resolutions ( $256 \times 256$ , or shorter sequence token number), this shows the great potential and more efficient inductive-bias incorporation in longer sequence number. + +Ablation about Position Embedding. As shown in Table 2, the learnable embedding performs better than the Sinusoidal embedding, which in turn performs better than no position embedding. In various cases, our zigzag method surpasses the baselines. Notably, our performance remains almost unchanged whether we use the Sinusoidal position embedding or no position embedding. This suggests that our method can better incorporate spatial inductive-bias compared to our baseline. Finally, using the learnable position embedding provides further, albeit marginal, gains suggesting that better position embedding exists even within our zigzag scan scheme. We find that [79] shares the same conclusion as us in video-related tasks. + +Ablation study about the Network and FPS/GPU-Memory. In Figure 6 (a,b), we analyze the forward speed and GPU memory usage while varying the global patch dimensions from $32 \times 32$ to $196 \times 196$ . For the speed analysis, we report Frame Per Second (FPS) instead of FLOPS, as FPS provides a more explicit and appropriate evaluation of speed2. For simplicity, we uniformly apply the zigzag-1 Mamba scan scheme and use batch size=1 and patch size=1 on an A100 GPU with 80GB memory. It's worth noting that all methods share nearly identical parameter numbers for fair comparison. We primarily compare our method with two popular transformer-based Diffusion backbones, U-ViT [9] and DiT [80]. It is evident that our method achieves the best FPS and GPU + +utilization when gradually increasing the patching number. U-ViT demonstrates the worst performance, even exceeds the memory bounds when the patch number is 196. Surprisingly, DiT's GPU utilization is close to our method, which supports our backbone choice of DiT from a practical perspective. + +Table 3: Main result on FacesHQ-1024 dataset with 4,094 tokens in latent space and $\mathbf{bs} = \mathbf{512}$ . Our method can outperform the baseline and can achieve even better results when the training scale is increased. + +
| Method | FID5k ↓ | FDD5k ↓ |
|---|---|---|
| VisionMamba [123] | 51.1 | 66.3 |
| ZigMa | 37.8 | 50.5 |
| ZigMa bs × 2 | 26.6 | 31.2 |
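All models here, including the FacesHQ-1024 run in Table 3 (≈4k latent tokens from a 128×128 latent with patch size 2), are trained with the velocity objective under the linear path of Eqs. (12)–(13) and sampled with the probability-flow ODE. A compact sketch of that training step and an Euler ODE sampler follows; the model call, shapes, and optimizer are placeholders, not the released training code.

```python
import torch

def velocity_training_step(model, x_star, optimizer):
    """One velocity-matching step under the linear path alpha_t = 1 - t,
    sigma_t = t (Eqs. 12-13); `model(x_t, t)` is a placeholder for the ZigMa
    backbone operating on latent tokens."""
    b = x_star.shape[0]
    t = torch.rand(b, device=x_star.device)
    t_ = t.view(b, *([1] * (x_star.dim() - 1)))
    eps = torch.randn_like(x_star)
    x_t = (1 - t_) * x_star + t_ * eps               # alpha_t * x_star + sigma_t * eps
    target = eps - x_star                            # alpha'_t * x_star + sigma'_t * eps
    loss = (model(x_t, t) - target).pow(2).mean()    # Eq. (12)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

@torch.no_grad()
def euler_ode_sample(model, shape, steps=50, device="cpu"):
    """Probability-flow ODE sampling (noise term of Eq. 7 set to zero),
    integrating x' = v_theta(x, t) from t = 1 (Gaussian noise) to t = 0."""
    x = torch.randn(shape, device=device)
    ts = torch.linspace(1.0, 0.0, steps + 1, device=device)
    for t_now, t_next in zip(ts[:-1], ts[1:]):
        t = torch.full((shape[0],), float(t_now), device=device)
        x = x + (t_next - t_now) * model(x, t)
    return x
```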
+ +Table 5: Transformer-based methods comparison on unconditional CelebA256. + +
| Method | FID ↓ | Memory (G) ↓ | FLOPS (G) ↓ |
|---|---|---|---|
| U-ViT | 14.50 | 35.10 | 12.5 |
| DiT | 14.64 | 29.20 | 5.5 |
| ZigMa | 14.27 | 17.80 | 5.2 |
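The FLOPS gap in Table 5 is consistent with the complexity expressions of Eqs. (4)–(6). The small sketch below evaluates the per-block token-mixing terms only, with illustrative M and D and N = 16 as in the paper, so the absolute numbers are not comparable to the whole-network FLOPS above; it only illustrates the quadratic-versus-linear scaling in the token count M.

```python
def attn_flops(M, D):
    """Global self-attention token mixing, Eq. (4): 4*M*D^2 + 2*M^2*D."""
    return 4 * M * D**2 + 2 * M**2 * D

def mamba_flops(M, D, N=16, k=1):
    """k-direction Mamba, Eq. (5); k = 1 recovers the zigzag cost of Eq. (6)."""
    return k * (3 * M * (2 * D) * N + M * (2 * D) * N**2)

D = 768                                   # illustrative hidden width (assumption)
for M in (1024, 4096):                    # e.g. 32x32 and 64x64 latent patch grids
    print(M,
          f"attn {attn_flops(M, D) / 1e9:.2f}G",
          f"2-dir mamba {mamba_flops(M, D, k=2) / 1e9:.2f}G",
          f"zigzag {mamba_flops(M, D) / 1e9:.2f}G")
```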
+ +Table 4: Main Results on MS-COCO dataset with $\mathrm{bs} = {256}$ . Our method consistently outperforms the baseline. ZigMa with 8 scans performs much better compared with the baseline. + +
| Method | FID5k ↓ |
|---|---|
| Sweep | 195.1 |
| Zigzag-1 | 73.1 |
| VisionMamba [123] | 60.2 |
| Zigzag-8 | 41.8 |
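The MS-COCO model is text-conditioned through the cross-attention block described in Sec. 3.2 (Fig. 4). The following is a schematic, hedged sketch of such a layer: the Mamba scan is replaced by a linear placeholder (a real implementation would substitute a Mamba block, e.g. from the `mamba_ssm` package), and the adaLN-style modulation layout is an assumption rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class TextConditionedBlock(nn.Module):
    """Schematic ZigMa-style layer: token mixing plus cross-attention to text
    tokens, both modulated by the condition (timestep + prompt) via an
    adaLN-style MLP. `mixer` stands in for the zigzag Mamba scan; the layout
    follows Fig. 4 only loosely and is not the released implementation."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.mixer = nn.Linear(dim, dim)                 # placeholder for the Mamba scan
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 4 * dim))

    def forward(self, z, cond, text):
        # z: (B, L, D) patch tokens; cond: (B, D) pooled condition; text: (B, T, D)
        shift1, scale1, shift2, scale2 = self.modulation(cond).chunk(4, dim=-1)
        h = self.norm1(z) * (1 + scale1.unsqueeze(1)) + shift1.unsqueeze(1)
        z = z + self.mixer(h)                            # Mamba scan would go here
        h = self.norm2(z) * (1 + scale2.unsqueeze(1)) + shift2.unsqueeze(1)
        z = z + self.cross_attn(h, text, text, need_weights=False)[0]
        return z
```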
+ +Table 6: Video Scan Scheme on UCF101 dataset with $\mathrm{bs} = {32}$ . + +
| Method | Frame-FID5k ↓ | FVD5k ↓ |
|---|---|---|
| Bidirection [123] | 256.1 | 320.2 |
| 3D Zigzag | 238.1 | 282.3 |
| Ours | 216.1 | 210.2 |
| Bidirection [123] bs × 4 | 146.2 | 201.1 |
| ZigMa bs × 4 | 121.2 | 140.1 |
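The factorized 3D scan used for UCF101 applies a 2D zigzag over each frame followed by a 1D temporal sweep (the "s" and "t" blocks of Sec. 3.2). A minimal sketch with placeholder mixers follows; the spatial permutation `omega` is assumed to come from a zigzag construction such as the one sketched after Table 1.

```python
import torch
import torch.nn as nn

class FactorizedSpatioTemporalScan(nn.Module):
    """Sketch of the factorized 3D scan: a spatially continuous 2D zigzag scan
    over the tokens of each frame, then a 1D sweep over the time axis for each
    spatial location. `spatial` / `temporal` stand in for Mamba blocks."""
    def __init__(self, dim: int, omega: torch.Tensor):
        super().__init__()
        self.spatial = nn.Linear(dim, dim)      # placeholder spatial Mamba
        self.temporal = nn.Linear(dim, dim)     # placeholder temporal Mamba
        inv = torch.empty_like(omega)
        inv[omega] = torch.arange(omega.numel())
        self.register_buffer("omega", omega)
        self.register_buffer("omega_inv", inv)

    def forward(self, z):                        # z: (B, T, L, D) video tokens
        B, T, L, D = z.shape
        # spatial zigzag scan, frames folded into the batch dimension
        s = z.reshape(B * T, L, D)[:, self.omega]
        s = self.spatial(s)[:, self.omega_inv].reshape(B, T, L, D)
        # temporal forward sweep, spatial locations folded into the batch
        t = s.permute(0, 2, 1, 3).reshape(B * L, T, D)
        t = self.temporal(t).reshape(B, L, T, D).permute(0, 2, 1, 3)
        return t
```

Stacking two such spatial blocks before one temporal block yields the "sst" ordering deployed in the video experiments.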
+ +Order Receptive Field. We propose a new concept in Mamba-based structure for multidimensional data. Given that various spatially-continuous zigzag paths may exist in multidimensional data, we introduce the term Order Receptive Field which denotes the number of zigzag paths explicitly employed in the network design. + +Ablation study about the Order Receptive Field and FPS/GPU-Memory. As depicted in Fig. 6 (c,d), Zigzag Mamba consistently maintains its GPU memory consumption and FPS rate, even with a gradually increasing Order Receptive Field. In contrast, our primary baseline, Parallel Mamba, along with variants like Bidirectional Mamba and Vision Mamba [70, 123], experience a consistent decrease in FPS due to increased parameters. Notably, Zigzag Mamba, with an Order Receptive Field of 8, can perform faster without altering parameters. + +Comparison with transformer-based methods. We show the result in Table 5 on unconditional generation task. Our method achieves performance comparable to Transformer-based methods, with significantly less memory consumption and fewer FLOPS. + +# 4.3 Main Result + +Main Result on $1024 \times 1024$ FacesHQ. To elaborate on the scalability of our method within the Mamba and Stochastic Interpolant framework, we provide comparisons on a high-resolution dataset ( $1024 \times 1024$ FacesHQ) in Table 3. Our primary comparison is against Bidirectional Mamba, a commonly used solution for applying Mamba to 2D image data [70, 123]. With the aim of investigating Mamba's scalability in large resolutions up to 1,024, we employ the diffusion model on the latent space of $128 \times 128$ with a patch size of 2, resulting in 4,096 tokens. The network is trained on 16 A100 GPUs. Notably, our method demonstrates superior results compared to Bidirectional Mamba. Details regarding loss, FID curves, and visualization can be found in the Appendix. While constrained by GPU resource limitations, preventing longer training duration, we anticipate consistent outperformance of Bidirectional Mamba with extended training duration. + +COCO dataset. To further compare the performance of our method, we also evaluate it on the more complex and common dataset MS COCO. We compare with the Bidirection Mamba as the baseline in Table 4. It should be noted that all methods share nearly identical parameter numbers for fair comparison. We trained all methods using 16 A100 GPUs. please check Appendix 8.8 for details. As depicted in Table 4, our Zigzag-8 method outperforms Bidirectional Mamba as well as Zigzag-1. This suggests that amortizing various scanning schemes can yield significant improvements, attributed to better incorporation of the inductive bias for 2D images in Mamba. + +UCF101 dataset. In Table 6, we present our results on the UCF101 dataset, training all methods using 4 A100 GPUs, with further scalability exploration conducted using 16 A100 GPUs. We mainly compare our method consistently with Vision Mamba [123]. For the choice of the 3D Zigzag Mamba, please refer to Appendix 8.8. For Factorized 3D Zigzag Mamba in video processing, we deploy the sst scheme for factorizing spatial and temporal modeling. This scheme prioritizes spatial information complexity over temporal information, hypothesizing that redundancy exists in the temporal domain. Our results consistently demonstrate the superior performance of our method across various scenarios, underscoring the intricacy and effectiveness of our approach. 
+ +# 5 Conclusion + +In this paper, we present the Zigzag Mamba Diffusion Model, developed within the Stochastic Interpolant framework. Our initial focus is on addressing the critical issue of spatial continuity. We then devise a Zigzag Mamba block with heterogeneous layerwise scan to better utilize the inductive bias in 2D images. Further, we factorize the 3D Mamba into 2D and 1D Zigzag Mamba to facilitate optimization. We empirically design various ablation studies to examine different factors. This approach allows for a more in-depth exploration of the Stochastic Interpolant theory. We hope our endeavor can inspire further exploration in the Mamba network design. + +# Acknowledgements + +This project has been supported by the German Federal Ministry for Economic Affairs and Climate Action within the project "NXT GEN AI METHODS - Generative Methoden für Perzeption, Prädiktion und Planung", the bidt project KLIMA-MEMES, Bayer AG, and the German Research Foundation (DFG) project 421703927. The authors gratefully acknowledge the Gauss Center for Supercomputing for providing compute through the NIC on JUWELS at JSC and the HPC resources supplied by the Erlangen National High Performance Computing Center (NHR@FAU funded by DFG). + +# References + +1. Agarwal, N., Suo, D., Chen, X., Hazan, E.: Spectral state space models. arXiv (2023) 28 +2. Ahamed, M.A., Cheng, Q.: Mambatab: A simple yet effective approach for handling tabular data. arXiv (2024) 3, 28 +3. Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E.: Stochastic interpolants: A unifying framework for flows and diffusions. arXiv (2023) 2, 4, 9, 10 +4. Albergo, M.S., Vanden-Eijnden, E.: Building normalizing flows with stochastic interpolants. arXiv (2022) 2 +5. Ali, A., Zimerman, I., Wolf, L.: The hidden attention of mamba models. arXiv (2024) 28 +6. Anderson, B.D.: Reverse-time diffusion equation models. Stochastic Processes and their Applications (1982) 9 +7. Anthony, Q., Tokpanov, Y., Glorioso, P., Millidge, B.: Blackmamba: Mixture of experts for state-space models. arXiv (2024) 28 +8. Ao, S., Zhao, W., Han, X., Yang, C., Liu, Z., Shi, C., Sun, M., Wang, S., Su, T.: Burstattention: An efficient distributed attention framework for extremely long sequences. arXiv (2024) 2 +9. Bao, F., Li, C., Cao, Y., Zhu, J.: All are worth words: a vit backbone for score-based diffusion models. CVPR (2023) 1, 3, 5, 12, 23 +10. Bao, F., Nie, S., Xue, K., Li, C., Pu, S., Wang, Y., Yue, G., Cao, Y., Su, H., Zhu, J.: One transformer fits all distributions in multi-modal diffusion at scale. arXiv (2023) 1, 3, 6 +11. Beck, M., Poppel, K., Spanring, M., Auer, A., Prudnikova, O., Kopp, M., Klambauer, G., Brandstetter, J., Hochreiter, S.: xlstm: Extended long short-term memory (2024) 22 +12. Behrouz, A., Hashemi, F.: Graph mamba: Towards learning on graphs with state space models. arXiv (2024) 3, 28 +13. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer. arXiv (2020) 1, 3 +14. Ben-Hamu, H., Cohen, S., Bose, J., Amos, B., Grover, A., Nickel, M., Chen, R.T., Lipman, Y.: Matching normalizing flows and probability paths on manifolds. In: ICML (2022) 4 +15. Brandon, W., Nrusimha, A., Qian, K., Ankner, Z., Jin, T., Song, Z., Ragan-Kelley, J.: Striped attention: Faster ring attention for causal transformers. arXiv preprint arXiv:2311.09431 (2023) 2 + +16. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: CVPR (2021) 8 +17. 
Chen, R.T., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. NeurIPS (2018) 2 +18. Chen, S., Xu, M., Ren, J., Cong, Y., He, S., Xie, Y., Sinha, A., Luo, P., Xiang, T., Perez-Rua, J.M.: Gentron: Delving deep into diffusion transformers for image and video generation. arXiv (2023) 3, 6 +19. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv (2019) 1 +20. Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al.: Rethinking attention with performers. arXiv (2020) 2 +21. Crowson, K., Baumann, S.A., Birch, A., Abraham, T.M., Kaplan, D.Z., Shippole, E.: Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers. arXiv (2024) 29 +22. Dao, Q., Phung, H., Nguyen, B., Tran, A.: Flow matching in latent space. arXiv (2023) 4 +23. Dao, T., Fu, D., Ermon, S., Rudra, A., Ré, C.: Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS (2022) 2, 3 +24. Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A.P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al.: Scaling vision transformers to 22 billion parameters. In: ICML (2023) 3 +25. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) 23, 27 +26. Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021) 10 +27. Fei, Z., Fan, M., Yu, C., Huang, J.: Scalable diffusion models with state space backbone. arXiv (2024) 3, 4, 28 +28. Fischer, J.S., Gui, M., Ma, P., Stracke, N., Baumann, S.A., Ommer, B.: Boosting latent diffusion with flow matching. ECCV (2024) 4, 10 +29. Fu, D.Y., Dao, T., Saab, K.K., Thomas, A.W., Rudra, A., Ré, C.: Hungry hungry hippos: Towards language modeling with state space models. arXiv (2022) 2 +30. Fuest, M., Ma, P., Gui, M., Fischer, J.S., Hu, V.T., Ommer, B.: Diffusion models and representation learning: A survey. arXiv preprint arXiv:2407.00783 (2024) 1 +31. Gong, H., Kang, L., Wang, Y., Wan, X., Li, H.: nnmamba: 3d biomedical image segmentation, classification and landmark detection with state space model. arXiv (2024) 28 +32. Gong, J., Foo, L.G., Fan, Z., Ke, Q., Rahmani, H., Liu, J.: Diffpose: Toward more reliable 3d pose estimation. In: CVPR (2023) 1 +33. Gu, A., Dao, T.: Mamba: Linear-time sequence modeling with selective state spaces. CoLM (2024) 2, 3, 4, 5 +34. Gu, A., Goel, K., Gupta, A., Ré, C.: On the parameterization and initialization of diagonal state space models. NeurIPS (2022) 2, 4, 5 +35. Gu, A., Goel, K., Ré, C.: Efficiently modeling long sequences with structured state spaces (2021) 2, 4, 5 +36. Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., Ré, C.: Combining recurrent, convolutional, and continuous-time models with linear state space layers. NeurIPS (2021) 2, 5 + +37. Gui, M., Fischer, J.S., Prestel, U., Ma, P., Kotovenko, D., Grebenkova, O., Baumann, S.A., Hu, V.T., Ommer, B.: Depthfm: Fast monocular depth estimation with flow matching. arXiv preprint arXiv:2403.13788 (2024) 4 +38. Guo, H., Li, J., Dai, T., Ouyang, Z., Ren, X., Xia, S.T.: Mambair: A simple baseline for image restoration with state-space model. arXiv (2024) 3, 28 +39. 
Gupta, A., Gu, A., Berant, J.: Diagonal state spaces are as effective as structured state spaces. NeurIPS (2022) 2, 4, 5 +40. He, W., Han, K., Tang, Y., Wang, C., Yang, Y., Guo, T., Wang, Y.: Densemamba: State space models with dense hidden connection for efficient large language models. arXiv (2024) 28 +41. He, X., Cao, K., Yan, K., Li, R., Xie, C., Zhang, J., Zhou, M.: Pan-mamba: Effective pan-sharpening with state space model. arXiv (2024) 28 +42. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. arXiv (2022) 8 +43. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: NeurIPS (2020) 2, 3, 4 +44. Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., Fleet, D.J.: Video diffusion models. In: ARXIV (2022) 1 +45. Hu, V.T., Chen, Y., Caron, M., Asano, Y.M., Snoek, C.G., Ommer, B.: Guided diffusion from self-supervised diffusion features. In: ARXIV (2023) 1 +46. Hu, V.T., Wu, D., Asano, Y., Mettes, P., Fernando, B., Ommer, B., Snoek, C.: Flow matching for conditional text generation in a few sampling steps pp. 380-392 (2024) 4 +47. Hu, V.T., Yin, W., Ma, P., Chen, Y., Fernando, B., Asano, Y.M., Gavves, E., Mettes, P., Ommer, B., Snoek, C.G.: Motion flow matching for human motion synthesis and editing. In: ARXIV (2023) 4 +48. Hu, V.T., Zhang, D.W., Asano, Y.M., Burghouts, G.J., Snoek, C.G.M.: Self-guided diffusion models. In: CVPR (2023) 1 +49. Hu, V.T., Zhang, D.W., Mettes, P., Tang, M., Zhao, D., Snoek, C.G.: Latent space editing in transformer-based flow matching. In: ICML 2023 Workshop, New Frontiers in Learning, Control, and Dynamical Systems (2023) 4 +50. Huang, Z., Zhou, P., Yan, S., Lin, L.: Scalelong: Towards more stable training of diffusion model via scaling network long skip connection. NeurIPS (2024) 1 +51. Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650 (2021) 29 +52. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. In: NeurIPS (2022) 4 +53. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR (2019) 10 +54. Kingma, D., Salimans, T., Poole, B., Ho, J.: Variational diffusion models. In: NeurIPS (2021) 10 +55. Kingma, D.P., Gao, R.: Understanding the diffusion objective as a weighted integral of ellb. arXiv (2023) 10 +56. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer. arXiv (2020) 1 +57. Lee, S., Kim, B., Ye, J.C.: Minimizing trajectory curvature of ode-based generative models. ICML (2023) 4 +58. Li, K., Li, X., Wang, Y., He, Y., Wang, Y., Wang, L., Qiao, Y.: Videomamba: State space model for efficient video understanding. ECCV (2024) 3 + +59. Li, S., Singh, H., Grover, A.: Mamba-nd: Selective state space modeling for multidimensional data. arXiv (2024) 3, 28, 29 +60. Li, Y., Bornschein, J., Chen, T.: Denoising autoregressive representation learning. arXiv preprint arXiv:2403.05196 (2024) 29 +61. Liang, D., Zhou, X., Wang, X., Zhu, X., Xu, W., Zou, Z., Ye, X., Bai, X.: Pointmamba: A simple state space model for point cloud analysis. arXiv preprint arXiv:2402.10739 (2024) 3, 27, 28 +62. Lin, B., Jiang, W., Chen, P., Zhang, Y., Liu, S., Chen, Y.C.: Mtmamba: Enhancing multi-task dense scene understanding by mamba-based decoders. ECCV (2024) 3 +63. 
Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 30 +64. Lipman, Y., Chen, R.T., Ben-Hamu, H., Nickel, M., Le, M.: Flow matching for generative modeling. ICLR (2023) 2, 4 +65. Liu, G.H., Chen, T., So, O., Theodorou, E.: Deep generalized schrödinger bridge. NeurIPS (2022) 2 +66. Liu, H., Zaharia, M., Abbeel, P.: Ring attention with blockwise transformers for near-infinite context. arXiv (2023) 2 +67. Liu, J., Yang, H., Zhou, H.Y., Xi, Y., Yu, L., Yu, Y., Liang, Y., Shi, G., Zhang, S., Zheng, H., et al.: Swin-umamba: Mamba-based unet withImagenet-based pretraining. arXiv (2024) 2, 6, 7 +68. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv (2022) 4 +69. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR (2023) 2 +70. Liu, Y., Tian, Y., Zhao, Y., Yu, H., Xie, L., Wang, Y., Ye, Q., Liu, Y.: Vmamba: Visual state space model. arXiv (2024) 2, 3, 5, 6, 7, 13, 14, 28, 29 +71. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021) 1 +72. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: ICLR (2019) 11 +73. Ma, J., Li, F., Wang, B.: U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv (2024) 2, 3, 28 +74. Ma, N., Goldstein, M., Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E., Xie, S.: Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv (2024) 2, 4 +75. McKenna, D.M.: Hilbert curves: Outside-in and inside-gone. Mathemaesthetics, Inc (2019) 7, 26 +76. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: ECCV (2016) 6 +77. Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., Ré, C.: S4nd: Modeling images and videos as multidimensional signals with state spaces. NeurIPS (2022) 3, 28, 29 +78. OpenAI: Sora: Creating video from text (2024), https://openai.com/sora 1, 6 +79. Park, J., Kim, H.S., Ko, K., Kim, M., Kim, C.: Videomamba: Spatio-temporal selective state space model. ECCV (2024) 3, 12 +80. Peebles, W., Xie, S.: Scalable diffusion models with transformers. arXiv (2022) 1, 3, 5, 12, 23 + +81. Peng, B., Goldstein, D., Anthony, Q., Albalak, A., Alcaide, E., Biderman, S., Cheah, E., Ferdinan, T., Hou, H., Kazienko, P., et al.: Eagle and finch: Rwkv with matrix-valued states and dynamic recurrence. arXiv preprint arXiv:2404.05892 (2024) 22 +82. Qin, Z., Yang, S., Sun, W., Shen, X., Li, D., Sun, W., Zhong, Y.: Hgrn2: Gated linear rnns with state expansion. arXiv preprint arXiv:2404.07904 (2024) 22 +83. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) 30 +84. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) 1, 3, 30 +85. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI (2015) 6 +86. Ruan, J., Xiang, S.: Vm-unet: Vision mamba unet for medical image segmentation. arXiv (2024) 3, 28 +87. Skorokhodov, I., Sotnikov, G., Elhoseiny, M.: Aligning latent and image spaces to connect the unconnectable. 
In: ICCV (2021) 34 +88. Smith, J.T., Warrington, A., Linderman, S.W.: Simplified state space layers for sequence modeling. arXiv (2022) 2 +89. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: ICML (2015) 2 +90. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. arXiv (2019) 4 +91. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. In: ICLR (2021) 2, 4, 9, 10 +92. Stein, G., Cresswell, J., Hosseinzadeh, R., Sui, Y., Ross, B., Villecloze, V., Liu, Z., Caterini, A.L., Taylor, E., Loaiza-Ganem, G.: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. NeurIPS (2023) 29 +93. Sun, Z., Yang, Y., Yoo, S.: Sparse attention with learning to hash. In: ICLR (2021) 2 +94. Tang, R., Liu, L., Pandey, A., Jiang, Z., Yang, G., Kumar, K., Stenetorp, P., Lin, J., Ture, F.: What the daam: Interpreting stable diffusion using cross attention. arXiv (2022) 8 +95. Tikochinski, R., Goldstein, A., Meiri, Y., Hasson, U., Reichart, R.: An incremental large language model for long text processing in the brain (2024) 2 +96. Tong, A., Malkin, N., Fatras, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., Bengio, Y.: Simulation-free schrödinger bridges via score and flow matching. arXiv (2023) 9 +97. Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., Gelly, S.: Fvd: A new metric for video generation. ICLR Workshop (2019) 30 +98. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 27 +99. Wang, C., Tsepa, O., Ma, J., Wang, B.: Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv (2024) 28 +100. Wang, J., Gangavarapu, T., Yan, J.N., Rush, A.M.: Mambabyte: Token-free selective state space model. arXiv (2024) 3, 28 +101. Wang, J., Yan, J.N., Gu, A., Rush, A.M.: Pretraining without attention. arXiv (2022) 6 + +102. Wang, S., Li, Q.: Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization. arXiv (2023) 2, 28 +103. Wang, S., Xue, B.: State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. NeurIPS (2024) 2, 28 +104. Wang, W., Ma, S., Xu, H., Usuyama, N., Ding, J., Poon, H., Wei, F.: When an image is worth 1,024 x 1,024 words: A case study in computational pathology. arXiv (2023) 3 +105. Wang, X., Wang, S., Ding, Y., Li, Y., Wu, W., Rong, Y., Kong, W., Huang, J., Li, S., Yang, H., Wang, Z., Jiang, B., Li, C., Wang, Y., Tian, Y., Tang, J.: State space model for new-generation network alternative to transformers: A survey (2024) 3 +106. Wang, X., Kang, Z., Mu, Y.: Text-controlled motion mamba: Text-instructed temporal grounding of human motion. arXiv preprint arXiv:2404.11375 (2024) 3 +107. Wang, Z., Ma, C.: Semi-mamba-unet: Pixel-level contrastive cross-supervised visual mamba-based unet for semi-supervised medical image segmentation. arXiv (2024) 28 +108. Wang, Z., Zheng, J.Q., Zhang, Y., Cui, G., Li, L.: Mamba-unet: Unet-like pure visual mamba for medical image segmentation. arXiv (2024) 3, 28 +109. Wu, L., Wang, D., Gong, C., Liu, X., Xiong, Y., Ranjan, R., Krishnamoorthi, R., Chandra, V., Liu, Q.: Fast point cloud generation with straight flows. In: CVPR (2023) 1 +110. 
Xia, W., Yang, Y., Xue, J.H., Wu, B.: Tedigan: Text-guided diverse face image generation and manipulation. In: CVPR (2021) 10, 30 +111. Xing, Z., Ye, T., Yang, Y., Liu, G., Zhu, L.: Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation. arXiv (2024) 3, 28 +112. Yan, J.N., Gu, J., Rush, A.M.: Diffusion models without attention. arXiv (2023) 4, 6 +113. Yang, S., Wang, B., Shen, Y., Panda, R., Kim, Y.: Gated linear attention transformers with hardware-efficient training. ICML (2024) 22 +114. Yang, S., Zhang, Y.: Fla: A triton-based library for hardware-efficient implementations of linear attention mechanism (Jan 2024), https://github.com/sustcsonglin/flash-linear-attention 22 +115. Yang, Y., Xing, Z., Zhu, L.: Vivim: a video vision mamba for medical video object segmentation. arXiv (2024) 6 +116. Yu, A., Nigmatov, A., Morozov, D., Mahoney, M.W., Erichson, N.B.: Robustifying state-space models for long sequences via approximate diagonalization. arXiv (2023) 2 +117. Yu, S., Sohn, K., Kim, S., Shin, J.: Video probabilistic diffusion models in projected latent space. In: CVPR (2023) 30 +118. Zhang, T., Li, X., Yuan, H., Ji, S., Yan, S.: Point cloud mamba: Point cloud learning via state space model. arXiv (2024) 28 +119. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: CVPR (2018) 29 +120. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. ECCV (2024) 3 +121. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. arXiv (2024) 28 +122. Zheng, Z., Wu, C.: U-shaped vision mamba for single image dehazing. arXiv (2024) 3, 28 + +123. Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., Wang, X.: Vision mamba: Efficient visual representation learning with bidirectional state space model. ICML (2024) 2, 3, 5, 7, 11, 13, 14, 28 +124. zhuzilin: Ring flash attention. 
https://github.com/zhuzilin/ring-flash-attention (2024) 2 \ No newline at end of file diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/images.zip b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5cd2c54808e268ceeb055da42765b55f086407f9 --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb74b5ff6d7cb88896baf2d16b87aac134a912b9e03a5729aa9134c89ea4fbbe +size 424071 diff --git a/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/layout.json b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..980d805717b4e85a9e12658a9ae7da3cd5f8545e --- /dev/null +++ b/2024/ZigMa_ A DiT-style Zigzag Mamba Diffusion Model/layout.json @@ -0,0 +1,12294 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 149, + 183, + 463, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 183, + 463, + 208 + ], + "spans": [ + { + "bbox": [ + 149, + 183, + 463, + 208 + ], + "type": "text", + "content": "Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, and Björn Ommer" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 235, + 217, + 378, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 235, + 217, + 378, + 239 + ], + "spans": [ + { + "bbox": [ + 235, + 217, + 378, + 239 + ], + "type": "text", + "content": "CompVis @ LMU Munich, MCML https://compvis.github.io/zigma/" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "spans": [ + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "text", + "content": "Abstract The diffusion model has long been plagued by scalability and quadratic complexity issues, especially within transformer-based structures. In this study, we aim to leverage the long sequence modeling capability of a State-Space Model called Mamba to extend its applicability to visual data generation. Firstly, we identify a critical oversight in most current Mamba-based vision methods, namely the lack of consideration for spatial continuity in the scan scheme of Mamba. Secondly, building upon this insight, we introduce Zigzag Mamba, a simple, plug-and-play, minimal-parameter burden, DiT style solution, which outperforms Mamba-based baselines and demonstrates improved speed and memory utilization compared to transformer-based baselines, also this heterogeneous layerwise scan enables zero memory and speed burden when we consider more scan paths. Lastly, we integrate Zigzag Mamba with the Stochastic Interpolant framework to investigate the scalability of the model on large-resolution visual datasets, such as FacesHQ " + }, + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "text", + "content": " and UCF101, MultiModal-CelebA-HQ, and MS COCO " + }, + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 160, + 274, + 455, + 449 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "spans": [ + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "text", + "content": "Keywords: Diffusion Model " + }, + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "text", + "content": " State-Space Model " + }, + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 159, + 460, + 453, + 483 + ], + "type": "text", + "content": " Stochastic Interpolants" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 506, + 230, + 517 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 506, + 230, + 517 + ], + "spans": [ + { + "bbox": [ + 132, + 506, + 230, + 517 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 533, + 482, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 482, + 668 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 482, + 668 + ], + "type": "text", + "content": "Diffusion models have demonstrated significant advancements across various applications, including image processing [45, 48, 84], video analysis [44], point cloud processing [109], representation learning [30] and human pose estimation [32]. Many of these models are built upon Latent Diffusion Models (LDM) [84], which are typically based on the UNet backbone. However, scalability remains a significant challenge in LDMs [50]. Recently, transformer-based structures have gained popularity due to their scalability [9, 80] and effectiveness in multi-modal training [10]. Notably, the transformer-based structure DiT [80] has even contributed to enhancing the high-fidelity video generation model SORA [78] by OpenAI. Despite efforts to alleviate the quadratic complexity of the attention mechanism through techniques such as windowing [71], sliding [13], sparsification [19, 56]," + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 135, + 114, + 167, + 144 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 135, + 114, + 167, + 144 + ], + "spans": [ + { + "bbox": [ + 135, + 114, + 167, + 144 + ], + "type": "text", + "content": "#" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 167, + 126, + 479, + 159 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 126, + 479, + 159 + ], + "spans": [ + { + "bbox": [ + 167, + 126, + 479, + 159 + ], + "type": "text", + "content": "ZigMa: A DiT-style Zigzag Mamba Diffusion Model" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "spans": [ + { + "bbox": [ + 130, + 115, + 479, + 139 + ], + "type": "text", + "content": "- hashing [20, 93], Ring Attention [15, 66], Flash Attention [23] or a combination of them [8, 124], it remains a bottleneck for diffusion models." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 355 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 355 + ], + "type": "text", + "content": "On the other hand, State-Space Models [34, 35, 39] have demonstrated significant potential for long sequence modeling, rivaling transformer-based methods. Their biological similarity [95] and efficient memory state also advocate for the use of the State-Space model over the transformer. Several methods [29, 33, 35, 88] have been proposed to enhance the robustness [116], scalability [33], and efficiency [35, 36] of State-Space Models. Among these, a method called Mamba [33] aims to alleviate these issues through work-efficient parallel scanning and other data-dependent innovations. However, the advantage of Mamba lies in 1D sequence modeling, and extending it to 2D images is a challenging question. Previous works [70, 123] have proposed flattening 2D tokens directly by computer hierarchy such as row-and-column-major order, but this approach neglects Spatial Continuity, as shown in Figure 1. Other works [67, 73] consider various directions in a single Mamba block, but this introduces additional parameters and GPU memory burden. In this paper, we aim to emphasize the importance of Spatial Continuity in Mamba and propose several intuitive and simple methods to enable the application of Mamba blocks to 2D images by incorporating continuity-based inductive biases in images. We also generalize these methods to 3D with spatial-temporal factorization on 3D sequence." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "spans": [ + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "text", + "content": "In the end, Stochastic Interpolant [3] provides a more generalized framework that can uniform various generative models including, Normalizing Flow [17], diffusion model [43,89,91], Flow matching [4,64,69], and Schrödinger Bridge [65]. Previously, some works [74] explore the Stochastic Interpolant on relatively small resolutions, e.g., " + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "text", + "content": ". In this work, we aim to explore it in further more complex scenarios e.g., " + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 356, + 482, + 427 + ], + "type": "text", + "content": " resolution and even in videos." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 427, + 482, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 427, + 482, + 571 + ], + "spans": [ + { + "bbox": [ + 130, + 427, + 482, + 571 + ], + "type": "text", + "content": "In summary, our contributions are as follows: Firstly, we identify the critical issue of Spatial Continuity in generalizing the Mamba block from 1D sequence modeling to 2D image and 3D video modeling. 
Building on this insight, we propose a simple, plug-and-play, zero-parameter heterogeneous layerwise scan paradigm named Zigzag Mamba (ZigMa) that leverages spatial continuity to maximally incorporate the inductive bias from visual data. Secondly, we extend the methodology from 2D to 3D by factorizing the spatial and temporal sequences to optimize performance. Secondly, we provide comprehensive analysis surrounding the Mamba block within the regime of diffusion models. Lastly, we demonstrate that our designed Zigzag Mamba outperforms related Mamba-based baselines, representing the first exploration of Stochastic Interpolants on large-scale image data " + }, + { + "bbox": [ + 130, + 427, + 482, + 571 + ], + "type": "inline_equation", + "content": "(1024\\times 1024)" + }, + { + "bbox": [ + 130, + 427, + 482, + 571 + ], + "type": "text", + "content": " and videos." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 590, + 243, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 590, + 243, + 604 + ], + "spans": [ + { + "bbox": [ + 132, + 590, + 243, + 604 + ], + "type": "text", + "content": "2 Related Works" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 617, + 485, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 485, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 485, + 666 + ], + "type": "text", + "content": "Mamba. Several works [102, 103, 103] have demonstrated that the State-Space Model possesses universal approximation ability under certain conditions. Mamba, as a new State-Space Model, has superior potential for modeling long sequences efficiently, which has been explored in various fields such as medical imag-" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 189, + 115, + 428, + 304 + ], + "blocks": [ + { + "bbox": [ + 189, + 115, + 428, + 304 + ], + "lines": [ + { + "bbox": [ + 189, + 115, + 428, + 304 + ], + "spans": [ + { + "bbox": [ + 189, + 115, + 428, + 304 + ], + "type": "image", + "image_path": "cac781c348d71fd43da8f1e4c58e7d32975e0218880efb91f784ab995d41237f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 132, + 314, + 482, + 336 + ], + "lines": [ + { + "bbox": [ + 132, + 314, + 482, + 336 + ], + "spans": [ + { + "bbox": [ + 132, + 314, + 482, + 336 + ], + "type": "text", + "content": "Figure 1: Motivation. Our Zigzag Mamba method improves the network's position-awareness by arranging and rearranging the scan path of Mamba in a heuristic manner." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 364, + 482, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 364, + 482, + 579 + ], + "spans": [ + { + "bbox": [ + 130, + 364, + 482, + 579 + ], + "type": "text", + "content": "ing [73, 86, 108, 111], video [58, 79], image restoration [38, 122], graphs [12], NLP word byte [100], tabular data [2], point clouds [61], human motion [106, 120], multi-task [62] and image generation [27]. Among them, the most related to us are VisionMamba [70, 123], S4ND [77] and Mamba-ND [59]. VisionMamba [70, 123] uses a bidirectional SSM in discriminative tasks which incurs a high computational cost. Our method applies a simple alternative mamba diffusion in generative models. S4ND [77] introduces local convolution into Mamba's reasoning process, moving beyond the use of only 1D data. Mamba-ND [59] takes multi-dimensionality into account in discriminative tasks, making use of various scans within a single block. In contrast, our focus is on distributing scan complexity across every layer of the network, thus maximizing the incorporation of inductive bias from visual data with zero parameter burden. Scan curve is an important direction in SSM, PointMamba [61] is a representative work that employs SSM with space curves (e.g., Hilbert) for point cloud analysis, achieving remarkable performance. In contrast with them, our preliminary results show that the Hilbert curve doesn't work well with our method (see Appendix), while our method can be regarded as the simplest Peano curve. For more information related to Mamba's work, please refer to the survey [105]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 482, + 666 + ], + "type": "text", + "content": "Backbones in Diffusion Models. Diffusion models primarily employ UNet-based [43, 84] and ViT-based [9, 80] backbones. While UNet is known for high memory demands [84], ViT benefits from scalability [18, 24] and multi-modal learning [10]. However, ViT's quadratic complexity limits visual token processing, prompting studies towards mitigating this issue [13, 23, 104]. Our work, inspired by Mamba [33], explores an SSM-based model as a generic diffusion backbone, retaining ViT's modality-agnostic and sequential modeling advantages." 
+ } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 481, + 101 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 481, + 101 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 481, + 101 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 115, + 482, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 115, + 482, + 189 + ], + "spans": [ + { + "bbox": [ + 130, + 115, + 482, + 189 + ], + "type": "text", + "content": "Concurrently, DiffSSM [112] concentrates on unconditional and class conditioning within the S4 model [35]. DIS [27] mainly explores the state-space model on a relatively small resolution, which is not the exact focus of our work. Our work significantly differs from theirs as it primarily focuses on the backbone design using the Mamba block and extends it to text conditioning. Furthermore, we apply our method to more complex visual data." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "spans": [ + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "text", + "content": "SDE and ODE in Diffusion models. The realm of Score-based Generative Models encompasses significant contributions from foundational works such as Score Matching with Langevin Dynamics (SMLD) by Song et al. [90], and the advent of Diffusion Models with Denoising Score Matching (DDPMs) proposed by Ho et al. [43]. These methodologies operate within the framework of Stochastic Differential Equations (SDEs), a concept further refined in the research of Song et al. [91]. Recent research strides, as exemplified by Karras et al. [52] and Lee et al. [57], have showcased the efficacy of employing Ordinary Differential Equation (ODE) samplers for diffusion SDEs, offering significant reductions in sampling costs compared to traditional approaches that entail discretizing diffusion SDEs. Furthermore, within the domain of Flow Matching [64] and Rectified Flow [68], both SMLD and DDPMs emerge as specialized instances under distinct paths of the Probability Flow ODE framework [91], with broad applications in vision [22,28,49], depth [37], human motion [47], even language [46]. These models typically utilize velocity field parameterizations employing the linear interpolant, a concept that finds broader applications in the Stochastic Interpolant framework [3], with subsequent generalizations extending to manifold settings [14]. The SiT model [74] scrutinizes the interplay between interpolation methods in both sampling and training contexts, albeit in the context of smaller resolutions such as " + }, + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "text", + "content": ". 
Our research endeavors to extend these insights to a larger scale, focusing on the generalization capabilities for 2D images of " + }, + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 191, + 483, + 455 + ], + "type": "text", + "content": " and 3D video data." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 491, + 202, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 491, + 202, + 503 + ], + "spans": [ + { + "bbox": [ + 132, + 491, + 202, + 503 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 533, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 533, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 533, + 482, + 666 + ], + "type": "text", + "content": "In this section, we begin by providing background information on State-Space Models [34,35,39], with a particular focus on a special case known as Mamba [33]. We then highlight the critical issue of Spatial Continuity within the Mamba framework, and based on this insight, we propose the Zigzag Mamba. This enhancement aims to improve the efficiency of 2D data modeling by incorporating the continuity inductive bias inherent in 2D data. Furthermore, we design a basic cross-attention block upon Mamba block to achieve text-conditioning. Subsequently, we suggest extending this approach to 3D video data by factorizing the model into spatial and temporal dimensions, thereby facilitating the modeling process. Finally, we introduce the theoretical aspects of stochastic interpolation for training and sampling, which underpin our network architecture." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 326, + 128 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 326, + 128 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 326, + 128 + ], + "type": "text", + "content": "3.1 Background: State-Space Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 136, + 479, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 136, + 479, + 171 + ], + "spans": [ + { + "bbox": [ + 130, + 136, + 479, + 171 + ], + "type": "text", + "content": "State Space Models (SSMs) [34, 35, 39] have been proven to handle long-range dependencies theoretically and empirically [36] with linear scaling w.r.t sequence length. 
In their general form, a linear state space model can be written as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 242, + 179, + 365, + 193 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 179, + 365, + 193 + ], + "spans": [ + { + "bbox": [ + 242, + 179, + 365, + 193 + ], + "type": "interline_equation", + "content": "x ^ {\\prime} (t) = \\mathbf {A} (t) x (t) + \\mathbf {B} (t) u (t)", + "image_path": "94b7acab45a9da1376e48fd44e7cf758f407e157662a23d8111e76431db29e6c.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 247, + 194, + 369, + 209 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 194, + 369, + 209 + ], + "spans": [ + { + "bbox": [ + 247, + 194, + 369, + 209 + ], + "type": "interline_equation", + "content": "y (t) = \\mathbf {C} (t) x (t) + \\mathbf {D} (t) u (t),", + "image_path": "ea72bb2a4beeda4038731e613e555a52501c35ea8d841d19e6fcc541cf23036c.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "spans": [ + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": "mapping a 1-D input sequence " + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "inline_equation", + "content": "u(t) \\in \\mathbb{R}" + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": " to a 1-D output sequence " + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "inline_equation", + "content": "y(t) \\in \\mathbb{R}" + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": " through an implicit N-D latent state sequence " + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "inline_equation", + "content": "x(t) \\in \\mathbb{R}^n" + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": ". Concretely, deep SSMs seek to use stacks of this simple model in a neural sequence modeling architecture, where the parameters " + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "inline_equation", + "content": "\\mathbf{A}, \\mathbf{B}, \\mathbf{C}" + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "inline_equation", + "content": "\\mathbf{D}" + }, + { + "bbox": [ + 130, + 217, + 480, + 277 + ], + "type": "text", + "content": " for each layer can be learned via gradient descent." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 167, + 306, + 444, + 380 + ], + "blocks": [ + { + "bbox": [ + 167, + 306, + 444, + 380 + ], + "lines": [ + { + "bbox": [ + 167, + 306, + 444, + 380 + ], + "spans": [ + { + "bbox": [ + 167, + 306, + 444, + 380 + ], + "type": "image", + "image_path": "95129ba59dc1054d299e18bed2f1b04a78fcf32e35ea8a48eb4214547258b996.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "lines": [ + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "spans": [ + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "type": "text", + "content": "Figure 2: ZigMa. Our backbone is structured in L layers, mirroring the style of DiT [80]. We use the single-scan Mamba block as the primary reasoning module across different patches. To ensure the network is positionally aware, we've designed an arrange-rearrange scheme based on the single-scan Mamba. 
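[Editor's note: a minimal, self-contained sketch of the discretized recurrence behind the two state-space equations above. The Euler-style discretization, the toy dimensions, and the name ssm_scan are assumptions for illustration only, not the paper's or Mamba's actual kernels.]

import numpy as np

def ssm_scan(u, A, B, C, D, dt=1.0):
    # Sequential scan of a discretized linear SSM (illustration only):
    #   x_k = (I + dt*A) x_{k-1} + dt*B u_k,   y_k = C x_k + D u_k
    N = A.shape[0]
    x = np.zeros(N)
    A_d = np.eye(N) + dt * A          # crude discretization of A(t)
    B_d = dt * B                      # crude discretization of B(t)
    ys = []
    for u_k in u:                     # linear in sequence length
        x = A_d @ x + B_d * u_k       # latent state update
        ys.append(C @ x + D * u_k)    # readout
    return np.array(ys)

# toy usage: a 4-dimensional latent state over a length-16 scalar sequence
rng = np.random.default_rng(0)
y = ssm_scan(rng.standard_normal(16), A=-np.eye(4), B=np.ones(4),
             C=np.ones(4) / 4, D=0.0)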
Different layers follow pairs of unique rearrange operation " + }, + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "type": "inline_equation", + "content": "\\Omega" + }, + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "type": "text", + "content": " and reverse rearrange " + }, + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "type": "inline_equation", + "content": "\\bar{\\Omega}" + }, + { + "bbox": [ + 130, + 394, + 479, + 460 + ], + "type": "text", + "content": ", optimizing the position-awareness of the method." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 484, + 480, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 484, + 480, + 604 + ], + "spans": [ + { + "bbox": [ + 130, + 484, + 480, + 604 + ], + "type": "text", + "content": "Recently, Mamba [33] largely improved the flexibility of SSMs in Language Modelling by relaxing the time-invariance constraint on SSM parameters, while maintaining computational efficiency. Several studies [70, 123] have been conducted to adapt the use of Mamba from unidimensional language data to multidimensional visual data. While most of these studies try to duplicate the A to facilitate the new (reversed) direction, this approach can lead to additional parameters and an increased memory burden. In this paper, we focus on exploring the scanning scheme of Mamba in diffusion models to efficiently maximize the use of inductive-bias from multi-dimensional visual data with zero parameter and memory burden." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 131, + 621, + 338, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 621, + 338, + 634 + ], + "spans": [ + { + "bbox": [ + 131, + 621, + 338, + 634 + ], + "type": "text", + "content": "3.2 Diffusion Backbone: Zigzag Mamba" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 641, + 479, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 479, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 479, + 665 + ], + "type": "text", + "content": "DiT-Style Network. We opt to use the framework of DiT by AdaLN [80] rather than the skip-layer focused U-ViT structure [9], as DiT has been validated as a" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 480, + 100 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 187 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 187 + ], + "type": "text", + "content": "scalable structure in literature [10, 18, 78]. Additionally, the Hourglass structure with downsampling [76, 85] requires selecting the depth and width based on the complexity of the dataset and task. This requirement limits the flexibility of the solution. 
Considering the aforementioned points, it informs our Mamba network design depicted in Figure 4. The core component of this design is the Zigzag Scanning, which will be explained in the following paragraph." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 188, + 482, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 188, + 482, + 307 + ], + "spans": [ + { + "bbox": [ + 130, + 188, + 482, + 307 + ], + "type": "text", + "content": "Zigzag Scanning in Mamba. Previous studies [101, 112] have used bidirectional scanning within the SSM framework. This approach has been expanded to include additional scanning directions [67, 70, 115] to account for the characteristics of 2D image data. These approaches unfold image patches along four directions, resulting in four distinct sequences. Each of these sequences is subsequently processed together through every SSM. However, since each direction may have different SSM parameters (A, B, C, and D), scaling up the number of directions could potentially lead to memory issues. In this work, we investigate the potential for amortizing the complexity of the Mamba into each layer of the network." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "spans": [ + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "text", + "content": "Our approach centers around the concept of token rearrangement before feeding them into the Forward Scan block. For a given input feature " + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_i" + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "text", + "content": " from layer " + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "text", + "content": ", the output feature " + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_{i + 1}" + }, + { + "bbox": [ + 130, + 308, + 481, + 355 + ], + "type": "text", + "content": " of the Forward Scan block after the rearrangement can be expressed as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 257, + 361, + 481, + 374 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 361, + 481, + 374 + ], + "spans": [ + { + "bbox": [ + 257, + 361, + 481, + 374 + ], + "type": "interline_equation", + "content": "\\mathbf {z} _ {\\Omega_ {i}} = \\operatorname {a r r a n g e} \\left(\\mathbf {z} _ {i}, \\Omega_ {i}\\right), \\tag {1}", + "image_path": "b2ea6b996db24314d92a3771d0ea5b84869a67f19a5ff386ceb68483fe5f4b49.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 257, + 376, + 481, + 389 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 376, + 481, + 389 + ], + "spans": [ + { + "bbox": [ + 257, + 376, + 481, + 389 + ], + "type": "interline_equation", + "content": "\\bar {\\mathbf {z}} _ {\\Omega_ {i}} = \\operatorname {s c a n} \\left(\\mathbf {z} _ {\\Omega_ {i}}\\right), \\tag {2}", + "image_path": "4d5a1bffdf23aaf6a2dc9725dcb57585f5e77f1d8c28f955774c77c256d47f6f.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 253, + 391, + 481, + 404 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 253, + 391, + 481, + 404 + ], + "spans": [ + { + "bbox": [ + 253, + 391, + 481, + 404 + ], + 
"type": "interline_equation", + "content": "\\mathbf {z} _ {i + 1} = \\operatorname {a r r a n g e} \\left(\\bar {\\mathbf {z}} _ {\\Omega_ {i}}, \\bar {\\Omega} _ {i}\\right), \\tag {3}", + "image_path": "3ace9b3aefc4fac09d2ed711fc4c335ccc725244188e38e39eed759853968c96.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "spans": [ + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\varOmega_{i}" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": " represents the 1D permutation of layer " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": ", which rearranges the order of the patch tokens by " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\varOmega_{i}" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\varOmega_{i}" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\overline{\\varOmega}_{i}" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": " represent the reverse operation. This ensures that both " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_i" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_{i + 1}" + }, + { + "bbox": [ + 131, + 409, + 481, + 445 + ], + "type": "text", + "content": " maintain the sample order of the original image tokens." 
+ } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 154, + 475, + 212, + 533 + ], + "blocks": [ + { + "bbox": [ + 154, + 475, + 212, + 533 + ], + "lines": [ + { + "bbox": [ + 154, + 475, + 212, + 533 + ], + "spans": [ + { + "bbox": [ + 154, + 475, + 212, + 533 + ], + "type": "image", + "image_path": "68ff9f5378c491864ca7ac38a50b0592af57a87b66ddb62ea438947a29b71cf3.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 156, + 534, + 209, + 544 + ], + "lines": [ + { + "bbox": [ + 156, + 534, + 209, + 544 + ], + "spans": [ + { + "bbox": [ + 156, + 534, + 209, + 544 + ], + "type": "text", + "content": "(a) sweep-scan" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 225, + 475, + 284, + 533 + ], + "blocks": [ + { + "bbox": [ + 225, + 475, + 284, + 533 + ], + "lines": [ + { + "bbox": [ + 225, + 475, + 284, + 533 + ], + "spans": [ + { + "bbox": [ + 225, + 475, + 284, + 533 + ], + "type": "image", + "image_path": "a8358b08a142e0575512c0d7f81c41f7bbbbdbc25746dbc00321c40215e479d8.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 227, + 534, + 282, + 544 + ], + "lines": [ + { + "bbox": [ + 227, + 534, + 282, + 544 + ], + "spans": [ + { + "bbox": [ + 227, + 534, + 282, + 544 + ], + "type": "text", + "content": "(b) zigzag-scan" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 291, + 463, + 332, + 544 + ], + "blocks": [ + { + "bbox": [ + 291, + 463, + 332, + 544 + ], + "lines": [ + { + "bbox": [ + 291, + 463, + 332, + 544 + ], + "spans": [ + { + "bbox": [ + 291, + 463, + 332, + 544 + ], + "type": "image", + "image_path": "b7bc70bf410c52cc5a982594131aa744e0549d0297aa8ce0f55f0e01de0f46b7.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 564, + 482, + 620 + ], + "lines": [ + { + "bbox": [ + 130, + 564, + 482, + 620 + ], + "spans": [ + { + "bbox": [ + 130, + 564, + 482, + 620 + ], + "type": "text", + "content": "Figure 3: The 2D Image Scan. Our mamba scan design is based on the sweep-scan scheme shown in subfigure (a). From this, we developed a zigzag-scan scheme displayed in subfigure (b) to enhance the continuity of the patches, thereby maximizing the potential of the Mamba block. Since there are several possible arrangements for these continuous scans, we have listed the eight most common zigzag-scans in subfigure (c)." 
+ } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 336, + 464, + 376, + 544 + ], + "blocks": [ + { + "bbox": [ + 336, + 464, + 376, + 544 + ], + "lines": [ + { + "bbox": [ + 336, + 464, + 376, + 544 + ], + "spans": [ + { + "bbox": [ + 336, + 464, + 376, + 544 + ], + "type": "image", + "image_path": "7c2665c6dc21713e769f8702090a1137b101d214de89f93f081122e32e9df29e.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 324, + 545, + 434, + 555 + ], + "lines": [ + { + "bbox": [ + 324, + 545, + 434, + 555 + ], + "spans": [ + { + "bbox": [ + 324, + 545, + 434, + 555 + ], + "type": "text", + "content": "(c) zigzag-scan with 8 schemes" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 381, + 464, + 422, + 544 + ], + "blocks": [ + { + "bbox": [ + 381, + 464, + 422, + 544 + ], + "lines": [ + { + "bbox": [ + 381, + 464, + 422, + 544 + ], + "spans": [ + { + "bbox": [ + 381, + 464, + 422, + 544 + ], + "type": "image", + "image_path": "3ed27162bc2b6fd64e0502efd46623912b5fe90858eac34dd766c386006161c0.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 427, + 464, + 467, + 544 + ], + "blocks": [ + { + "bbox": [ + 427, + 464, + 467, + 544 + ], + "lines": [ + { + "bbox": [ + 427, + 464, + 467, + 544 + ], + "spans": [ + { + "bbox": [ + 427, + 464, + 467, + 544 + ], + "type": "image", + "image_path": "40ca7ad38f7491b0c7ed4dbe70d27303b4c28ebad0119dceed97432837c25ae0.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": "Now we explore the design of the " + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "inline_equation", + "content": "\\Omega_{i}" + }, + { + "bbox": [ + 130, + 641, + 481, + 666 + ], + "type": "text", + "content": " operation, considering additional inductive biases from 2D images. We propose one key properties: Spatial Con" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "text", + "content": "tinuity. Regarding Spatial Continuity, current innovations of Mamba in images [67, 70, 123] often squeeze 2D patch tokens directly following the computer hierarchy, such as row-and-column-major order. 
However, this approach may not be optimal for incorporating the inductive bias with neighboring tokens, as illustrated in Figure 3. To address this, we introduce a novel scanning scheme designed to maintain spatial continuity during the scan process. Additionally, we consider space-filling, which entails that for a patch of size " + }, + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "inline_equation", + "content": "N \\times N" + }, + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "text", + "content": ", the length of the 1D continuous scanning scheme should be " + }, + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "inline_equation", + "content": "N^2" + }, + { + "bbox": [ + 130, + 116, + 479, + 234 + ], + "type": "text", + "content": ". This helps to efficiently incorporate tokens to maximize the potential of long sequence modeling within the Mamba block." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "spans": [ + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": "Heterogeneous Layerwise Scan. To achieve the aforementioned property, we heuristically design eight possible space-filling continuous schemes" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": ", denoted as " + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "inline_equation", + "content": "\\mathbf{S}_j" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": " (where " + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "inline_equation", + "content": "j \\in [0,7]" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": "), as illustrated in Figure 3. While there may be other conceivable schemes, for simplicity, we limit our usage to these eight. Consequently, the scheme for each layer can be represented as " + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "inline_equation", + "content": "\\varOmega_{i} = \\mathbf{S}_{\\{i\\% 8\\}}" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "inline_equation", + "content": "\\%" + }, + { + "bbox": [ + 130, + 236, + 480, + 307 + ], + "type": "text", + "content": " denotes the modulo operator." + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 192, + 327, + 421, + 490 + ], + "blocks": [ + { + "bbox": [ + 192, + 327, + 421, + 490 + ], + "lines": [ + { + "bbox": [ + 192, + 327, + 421, + 490 + ], + "spans": [ + { + "bbox": [ + 192, + 327, + 421, + 490 + ], + "type": "image", + "image_path": "992b39739328f0a020ff69bdedff2e60e393a6bf1ae30d78bfdf9f6dbd2ecb16.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 498, + 479, + 542 + ], + "lines": [ + { + "bbox": [ + 130, + 498, + 479, + 542 + ], + "spans": [ + { + "bbox": [ + 130, + 498, + 479, + 542 + ], + "type": "text", + "content": "Figure 4: The Detail of our Zigzag Mamba block. The detail of Mamba Scan is shown in Figure 2. The condition can include a timestep and a text prompt. These are fed into an MLP, which separately modulates the Mamba scan for long sequence modeling and cross-attention for multi-modal reasoning." 
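[Editor's note: a sketch of how Eqs. (1)-(3) and the layerwise assignment Omega_i = S_{i % 8} could be wired. The eight variants below (flips/transpose of a boustrophedon "snake" path) only approximate the eight schemes of Figure 3, and zigzag_path, zigzag_layer and the identity scan are illustrative names, not the repository's code.]

import numpy as np

def zigzag_path(N, variant):
    # One of eight spatially continuous, space-filling orders over an NxN patch
    # grid; consecutive tokens in the path stay grid-adjacent.
    grid = np.arange(N * N).reshape(N, N)
    if variant & 1:
        grid = grid[::-1, :]          # start from the bottom edge
    if variant & 2:
        grid = grid[:, ::-1]          # start from the right edge
    if variant & 4:
        grid = grid.T                 # walk columns instead of rows
    rows = [row[::-1] if i % 2 else row for i, row in enumerate(grid)]
    return np.concatenate(rows)       # boustrophedon flatten of length N^2

def zigzag_layer(tokens, layer_idx, scan_fn):
    # Eqs. (1)-(3): arrange with Omega_i = S_{i % 8}, run the 1D scan,
    # then restore the original patch order with the inverse permutation.
    B, L, D = tokens.shape
    omega = zigzag_path(int(L ** 0.5), layer_idx % 8)
    inverse = np.argsort(omega)       # \bar{Omega}_i
    return scan_fn(tokens[:, omega, :])[:, inverse, :]

# toy usage with an identity "scan" standing in for the Mamba block
tokens = np.random.randn(2, 16 * 16, 64)
out = zigzag_layer(tokens, layer_idx=3, scan_fn=lambda z: z)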
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 566, + 479, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 566, + 479, + 613 + ], + "spans": [ + { + "bbox": [ + 130, + 566, + 479, + 613 + ], + "type": "text", + "content": "Deploying text-condition on Zigzag Mamba. While Mamba offers the advantage of efficient long sequence modeling, it does so at the expense of the attention mechanism. As a result, there has been limited exploration into incorporating text-conditioning in Mamba-based diffusion models. To address this" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "spans": [ + { + "bbox": [ + 474, + 91, + 480, + 99 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 620, + 479, + 664 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 620, + 479, + 664 + ], + "spans": [ + { + "bbox": [ + 133, + 620, + 479, + 664 + ], + "type": "text", + "content": "1 We also experimented with more complex continuous space-filling paths, such as the Hilbert space-filling curve [75]. However, empirical findings indicate that this approach may lead to deteriorated results. For further detailed comparisons, please refer to the Appendix." + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 479, + 176 + ], + "type": "text", + "content": "gap, we propose a straightforward cross-attention block with skip layers built upon the Mamba block, as illustrated in Figure 4. This design not only enables long sequence modeling but also facilitates multi-token conditioning, such as text-conditioning. Furthermore, it has the potential to provide interpretability [16, 42, 94], as cross-attention has been utilized in diffusion models." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 177, + 480, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 177, + 480, + 273 + ], + "spans": [ + { + "bbox": [ + 130, + 177, + 480, + 273 + ], + "type": "text", + "content": "Generalizing to 3D videos by factorizing spatial and temporal information. In previous sections, our focus has been on the spatial 2D Mamba, where we designed several spatially continuous, space-filling 2D scanning schemes. In this section, we aim to leverage this experience to aid in designing corresponding mechanisms for 3D video processing. We commence our design process by extrapolating from the conventional directional Mamba, as depicted in Figure 5. 
Given a video feature input " + }, + { + "bbox": [ + 130, + 177, + 480, + 273 + ], + "type": "inline_equation", + "content": "\\mathbf{z} \\in \\mathbb{R}^{B \\times T \\times C \\times W \\times H}" + }, + { + "bbox": [ + 130, + 177, + 480, + 273 + ], + "type": "text", + "content": ", we propose three variants of the Video Mamba Block for facilitating 3D video generation." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 274, + 481, + 469 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 130, + 274, + 481, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 274, + 481, + 323 + ], + "spans": [ + { + "bbox": [ + 130, + 274, + 481, + 323 + ], + "type": "text", + "content": "(a) Sweep-scan: In this approach, we directly flatten the 3D feature " + }, + { + "bbox": [ + 130, + 274, + 481, + 323 + ], + "type": "inline_equation", + "content": "\\mathbf{z}" + }, + { + "bbox": [ + 130, + 274, + 481, + 323 + ], + "type": "text", + "content": " without considering spatial or temporal continuity. It's worth noting that the flattening process follows the computer hierarchy order, meaning that no continuity is preserved in the flattened representation." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 324, + 481, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 324, + 481, + 384 + ], + "spans": [ + { + "bbox": [ + 130, + 324, + 481, + 384 + ], + "type": "text", + "content": "(b) 3D Zigzag: Compared with the formulation of the 2D zigzag in previous subsections, we follow the similar design to generalize it to 3D Zigzag to keep the continuity in 2D and 3D simultaneously. Potentially, the scheme has much more complexity. We heuristically list 8 schemes as well. However, we empirically find that this scheme will lead to suboptimal optimization." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 385, + 481, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 385, + 481, + 469 + ], + "spans": [ + { + "bbox": [ + 130, + 385, + 481, + 469 + ], + "type": "text", + "content": "(c) Factorized 3D Zigzag = 2D Zigzag + 1D Sweep: To address the suboptimal optimization issue, we propose to factorize the spatial and temporal correlations as separate Mamba blocks. The order of their application can be adjusted as desired, for example, \"sstt\" or \"ststst\", where \"s\" represents the spatial-zigzag Mamba and \"t\" represents the temporal-zigzag Mamba. For a 1D temporal sweep, we simply opt for forward and backward scanning, since there is only one dimension on the time axis." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "spans": [ + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "text", + "content": "Computation Analysis. 
For a visual sequence " + }, + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "inline_equation", + "content": "\\mathbf{T} \\in \\mathbb{R}^{1 \\times M \\times D}" + }, + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "text", + "content": ", the computation complexity of global self-attention and " + }, + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 470, + 481, + 507 + ], + "type": "text", + "content": "-direction mamba and our zigzag mamba are as follows:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 212, + 533, + 480, + 546 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 212, + 533, + 480, + 546 + ], + "spans": [ + { + "bbox": [ + 212, + 533, + 480, + 546 + ], + "type": "interline_equation", + "content": "\\zeta (\\text {s e l f - a t t e n t i o n}) = 4 \\mathrm {M D} ^ {2} + 2 \\mathrm {M} ^ {2} \\mathrm {D}, \\tag {4}", + "image_path": "fe736500eb06a4c24748003c78c737aa5a0179c2deb2c636ca151c583b66baa8.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 212, + 549, + 480, + 563 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 212, + 549, + 480, + 563 + ], + "spans": [ + { + "bbox": [ + 212, + 549, + 480, + 563 + ], + "type": "interline_equation", + "content": "\\zeta (\\mathrm {k} - \\text {m a m b a}) = k \\times [ 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2} ], \\tag {5}", + "image_path": "54a97341ad9fcd7ea66f17844c84ffea92d89511ace0c252d7d0b6216f511d58.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 212, + 566, + 480, + 579 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 212, + 566, + 480, + 579 + ], + "spans": [ + { + "bbox": [ + 212, + 566, + 480, + 579 + ], + "type": "interline_equation", + "content": "\\zeta (\\text {z i g z a g}) = 3 \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} + \\mathrm {M} (2 \\mathrm {D}) \\mathrm {N} ^ {2}, \\tag {6}", + "image_path": "4363738eb97d1ff515995e05cf3a693eb7d5d79d4aa11a1504561235005d40b4.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "text", + "content": "where self-attention exhibits quadratic complexity with respect to sequence length M, while Mamba exhibits linear complexity (N is a fixed parameter, set to 16 by default). Here, " + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "text", + "content": " represents the number of scan directions in a single Mamba block. Therefore, " + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "text", + "content": "-mamba and zigzag share linear complexity with respect to self-attention. Moreover, our zigzag method can eliminate the " + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 130, + 594, + 484, + 666 + ], + "type": "text", + "content": " series, further reducing the overall complexity." 
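[Editor's note: a small back-of-the-envelope helper that evaluates Eqs. (4)-(6) as written, using the symbols above (M tokens, hidden size D, state size N = 16, k scan directions). The example sizes are assumptions; this is illustrative arithmetic, not a profiler.]

def cost_self_attention(M, D):
    return 4 * M * D**2 + 2 * M**2 * D                        # Eq. (4)

def cost_k_mamba(M, D, N=16, k=2):
    return k * (3 * M * (2 * D) * N + M * (2 * D) * N**2)     # Eq. (5)

def cost_zigzag(M, D, N=16):
    return 3 * M * (2 * D) * N + M * (2 * D) * N**2           # Eq. (6): k folded into the layers

# e.g. a 64x64 token grid (M = 4096) with hidden size D = 768
M, D = 64 * 64, 768
print(cost_self_attention(M, D) / cost_zigzag(M, D))   # attention term grows quadratically in M
print(cost_k_mamba(M, D, k=4) / cost_zigzag(M, D))     # k-directional Mamba costs k times more per block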
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 140, + 100 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 181, + 116, + 428, + 239 + ], + "blocks": [ + { + "bbox": [ + 181, + 116, + 428, + 239 + ], + "lines": [ + { + "bbox": [ + 181, + 116, + 428, + 239 + ], + "spans": [ + { + "bbox": [ + 181, + 116, + 428, + 239 + ], + "type": "image", + "image_path": "345ced1a59da24d163dcef484064ca2ed5ecf182938801a4b72ab06f35b5075a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "lines": [ + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "spans": [ + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "type": "text", + "content": "Figure 5: The 3D Video Scan. (a) We illustrate the bidirectional Mamba with the sweep scan, where the spatial and temporal information is treated as a set of tokens with a computer-hierarchy order. (b) For the 3D zigzag-scan, we aim to maximize the potential of Mamba by employing a spatial continuous scan scheme and adopting the optimal zigzag scan solution, as depicted in Figure 3. (c) We further separate the reasoning between spatial and temporal information, resulting in a factorized combination of 2D spatial scan " + }, + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "type": "inline_equation", + "content": "(\\varOmega)" + }, + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "type": "text", + "content": " plus a 1D temporal scan " + }, + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "type": "inline_equation", + "content": "(\\varOmega^{\\prime})" + }, + { + "bbox": [ + 130, + 247, + 482, + 324 + ], + "type": "text", + "content": " scheme." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 348, + 480, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 348, + 480, + 384 + ], + "spans": [ + { + "bbox": [ + 130, + 348, + 480, + 384 + ], + "type": "text", + "content": "Upon completing the design of the Zigzag Mamba network for improved visual inductive-bias integration, we proceed to combine it with a new diffusion framework, as illustrated below." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 131, + 403, + 383, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 403, + 383, + 415 + ], + "spans": [ + { + "bbox": [ + 131, + 403, + 383, + 415 + ], + "type": "text", + "content": "3.3 Diffusion Framework: Stochastic Interpolant" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "spans": [ + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "content": "Sampling based on vector " + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "inline_equation", + "content": "\\mathbf{v}" + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "content": " and score " + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "inline_equation", + "content": "\\mathbf{s}" + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "content": ". Following [3, 96], the time-dependent probability distribution " + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "inline_equation", + "content": "p_t(\\mathbf{x})" + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_t" + }, + { + "bbox": [ + 130, + 422, + 480, + 459 + ], + "type": "text", + "content": " also coincides with the distribution of the reverse-time SDE [6]:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 204, + 467, + 481, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 204, + 467, + 481, + 491 + ], + "spans": [ + { + "bbox": [ + 204, + 467, + 481, + 491 + ], + "type": "interline_equation", + "content": "d \\mathbf {X} _ {t} = \\mathbf {v} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\frac {1}{2} w _ {t} \\mathbf {s} \\left(\\mathbf {X} _ {t}, t\\right) d t + \\sqrt {w _ {t}} d \\bar {\\mathbf {W}} _ {t}, \\tag {7}", + "image_path": "9b2ba9c8458314b5628f49d19cf78cc2164762d08b7975a4c8e31641b76d90a4.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "spans": [ + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{W}}_t" + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "content": " is a reverse-time Wiener process, " + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "inline_equation", + "content": "w_{t} > 0" + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "content": " is an arbitrary time-dependent diffusion coefficient, " + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "inline_equation", + "content": "\\mathbf{s}(\\mathbf{x},t) = \\nabla \\log p_t(\\mathbf{x})" + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "content": " is the score, and " + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 499, + 482, + 536 + ], + "type": "text", + "content": " is given by the conditional expectation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 214, + 544, + 481, + 573 + ], + "type": "interline_equation", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 214, + 544, + 481, + 573 + ], + "spans": [ + { + "bbox": [ + 214, + 544, + 481, + 573 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathbf {v} (\\mathbf {x}, t) = \\mathbb {E} [ \\dot {\\mathbf {x}} _ {t} | \\mathbf {x} _ {t} = \\mathbf {x} ], \\\\ \\begin{array}{l} \\underline {{- [ - t ] = - t}} \\\\ = \\dot {\\alpha} _ {t} \\mathbb {E} \\left[ \\mathbf {x} _ {*} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right] + \\dot {\\sigma} _ {t} \\mathbb {E} \\left[ \\boldsymbol {\\varepsilon} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\end{array} \\tag {8} \\\\ \\end{array}", + "image_path": "6b0b34f70638eb2038c02533a292edd13e66d9c42697b46f040cfe994a50e01a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "spans": [ + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\alpha_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": " is a decreasing function of " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\sigma_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": " is an increasing function of " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": ". Here, " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\dot{\\alpha}_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\dot{\\sigma}_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": " denote the time derivatives of " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\alpha_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "inline_equation", + "content": "\\sigma_{t}" + }, + { + "bbox": [ + 130, + 582, + 480, + 605 + ], + "type": "text", + "content": ", respectively." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "content": "As long as we can estimate the velocity " + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "content": " and/or score " + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{s}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "content": " fields, we can utilize it for the sampling process either by probability flow ODE [91] or the reverse-time SDE (7). Solving the reverse SDE (7) backwards in time from " + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{X}_T = \\varepsilon \\sim \\mathcal{N}(0,\\mathbf{I})" + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "content": " enables generating samples from the approximated data distribution " + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "inline_equation", + "content": "p_0(\\mathbf{x})\\sim p(\\mathbf{x})" + }, + { + "bbox": [ + 130, + 606, + 482, + 666 + ], + "type": "text", + "content": ". During sampling, we can perform direct sampling" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 474, + 92, + 481, + 100 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 151 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 151 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 151 + ], + "type": "text", + "content": "from either ODE or SDEs to balance between sampling speed and fidelity. If we choose to conduct ODE sampling, we can achieve this simply by setting the noise term " + }, + { + "bbox": [ + 130, + 116, + 482, + 151 + ], + "type": "inline_equation", + "content": "\\mathbf{s}" + }, + { + "bbox": [ + 130, + 116, + 482, + 151 + ], + "type": "text", + "content": " to zero." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "spans": [ + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "text", + "content": "In [3], it shows that one of the two quantities " + }, + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_{\\theta}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "inline_equation", + "content": "\\mathbf{v}_{\\theta}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 152, + 482, + 177 + ], + "type": "text", + "content": " needs to be estimated in practice. This follows directly from the constraint" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 226, + 186, + 481, + 214 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 226, + 186, + 481, + 214 + ], + "spans": [ + { + "bbox": [ + 226, + 186, + 481, + 214 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathbf {x} = \\mathbb {E} \\left[ \\mathbf {x} _ {t} \\mid \\mathbf {x} _ {t} = \\mathbf {x} \\right], \\tag {9} \\\\ = \\alpha_ {t} \\mathbb {E} [ \\mathbf {x} _ {*} | \\mathbf {x} _ {t} = \\mathbf {x} ] + \\sigma_ {t} \\mathbb {E} [ \\varepsilon | \\mathbf {x} _ {t} = \\mathbf {x} ], \\\\ \\end{array}", + "image_path": "d386344d30d59ccdbe0d6618f124b6f99167b6c55bd7ff58a04000f23992217a.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "spans": [ + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "text", + "content": "which can be used to re-express the score " + }, + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "inline_equation", + "content": "\\mathbf{s}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "text", + "content": " in terms of the velocity " + }, + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 223, + 481, + 246 + ], + "type": "text", + "content": " as" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 240, + 255, + 481, + 280 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 240, + 255, + 481, + 280 + ], + "spans": [ + { + "bbox": [ + 240, + 255, + 481, + 280 + ], + "type": "interline_equation", + "content": "\\mathbf {s} (\\mathbf {x}, t) = \\sigma_ {t} ^ {- 1} \\frac {\\alpha_ {t} \\mathbf {v} (\\mathbf {x} , t) - \\dot {\\alpha} _ {t} \\mathbf {x}}{\\dot {\\alpha} _ {t} \\sigma_ {t} - \\alpha_ {t} \\dot {\\sigma} _ {t}}. 
\\tag {10}", + "image_path": "d63b7eac392b82b351c2dff57cbfca3cd1adf30d1d1c8d4acbf31654cf5b479c.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "spans": [ + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "text", + "content": "Thus, " + }, + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "inline_equation", + "content": "\\mathbf{s}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 288, + 481, + 312 + ], + "type": "text", + "content": " can be converted into one another. We illustrate how to compute them in the following." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "spans": [ + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "text", + "content": "Estimating the score " + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "inline_equation", + "content": "\\mathbf{s}" + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "text", + "content": " and the velocity " + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "inline_equation", + "content": "\\mathbf{v}" + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "text", + "content": ". It has been shown in score-based diffusion models [91] that the score can be estimated parametrically as " + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_{\\theta}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 312, + 481, + 347 + ], + "type": "text", + "content": " using the loss" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 227, + 348, + 481, + 376 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 348, + 481, + 376 + ], + "spans": [ + { + "bbox": [ + 227, + 348, + 481, + 376 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {s}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\sigma_ {t} \\mathbf {s} _ {\\theta} (\\mathbf {x} _ {t}, t) + \\varepsilon \\| ^ {2} ] \\mathrm {d} t. 
\\tag {11}", + "image_path": "f3f4219010a174dbcbcb6d6dfca12a9126f5dee17d4b7b23e64219317ebf7e36.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "spans": [ + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "text", + "content": "Similarly, the velocity " + }, + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "text", + "content": " can be estimated parametrically as " + }, + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "inline_equation", + "content": "\\mathbf{v}_{\\theta}(\\mathbf{x},t)" + }, + { + "bbox": [ + 130, + 380, + 481, + 403 + ], + "type": "text", + "content": " via the loss" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 209, + 411, + 481, + 439 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 209, + 411, + 481, + 439 + ], + "spans": [ + { + "bbox": [ + 209, + 411, + 481, + 439 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {\\mathrm {v}} (\\theta) = \\int_ {0} ^ {T} \\mathbb {E} [ \\| \\mathbf {v} _ {\\theta} (\\mathbf {x} _ {t}, t) - \\dot {\\alpha} _ {t} \\mathbf {x} _ {*} - \\dot {\\sigma} _ {t} \\boldsymbol {\\varepsilon} \\| ^ {2} ] \\mathrm {d} t, \\tag {12}", + "image_path": "e5367b05355301a728c2133945a112aae4267839115afe8c8f318b4ec1295da5.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 447, + 481, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 447, + 481, + 483 + ], + "spans": [ + { + "bbox": [ + 130, + 447, + 481, + 483 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 130, + 447, + 481, + 483 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 130, + 447, + 481, + 483 + ], + "type": "text", + "content": " represents the Zigzag Mamba network that we described in the previous section, we adopt the linear path for training, due to its simplicity and relatively straight trajectory:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 262, + 483, + 481, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 262, + 483, + 481, + 496 + ], + "spans": [ + { + "bbox": [ + 262, + 483, + 481, + 496 + ], + "type": "interline_equation", + "content": "\\alpha_ {t} = 1 - t, \\quad \\sigma_ {t} = t. \\tag {13}", + "image_path": "8144cde691f5a6795bfb00923438377eab4aaf3831bcc1e758226b6a07d5204c.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 130, + 502, + 481, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 502, + 481, + 552 + ], + "spans": [ + { + "bbox": [ + 130, + 502, + 481, + 552 + ], + "type": "text", + "content": "We note that any time-dependent weight can be included under the integrals in both (11) and (12). These weight factors play a crucial role in score-based models when " + }, + { + "bbox": [ + 130, + 502, + 481, + 552 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 130, + 502, + 481, + 552 + ], + "type": "text", + "content": " becomes large [54, 55]. Thus, they provide a general form that considers both the time-dependent weight and the stochasticity." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 131, + 570, + 225, + 584 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 570, + 225, + 584 + ], + "spans": [ + { + "bbox": [ + 131, + 570, + 225, + 584 + ], + "type": "text", + "content": "4 Experiment" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 131, + 596, + 303, + 609 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 131, + 596, + 303, + 609 + ], + "spans": [ + { + "bbox": [ + 131, + 596, + 303, + 609 + ], + "type": "text", + "content": "4.1 Dataset and Training Detail" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 130, + 617, + 481, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 481, + 667 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 481, + 667 + ], + "type": "text", + "content": "Image Dataset. To explore the scalability in high resolution, we conduct experiments on the FacesHQ " + }, + { + "bbox": [ + 130, + 617, + 481, + 667 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 617, + 481, + 667 + ], + "type": "text", + "content": ". The general dataset that we use for training and ablations is FacesHQ, a compilation of CelebA-HQ [110] and FFHQ [53], as employed in previous work such as [26, 28]." + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 101 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 141, + 157, + 470, + 239 + ], + "blocks": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "lines": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "spans": [ + { + "bbox": [ + 130, + 114, + 482, + 148 + ], + "type": "text", + "content": "Table 1: Ablation of Scanning Scheme Number. We evaluate various zigzag scanning schemes. Starting from a simple \"Sweep\" baseline, we consistently observe improvements as more schemes are implemented." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 141, + 157, + 470, + 239 + ], + "lines": [ + { + "bbox": [ + 141, + 157, + 470, + 239 + ], + "spans": [ + { + "bbox": [ + 141, + 157, + 470, + 239 + ], + "type": "table", + "html": "
<table><thead><tr><td></td><td colspan=\"3\">MultiModal-CelebA-256</td><td colspan=\"3\">MultiModal-CelebA-512</td></tr>
<tr><td>Method</td><td>FID5k ↓</td><td>FDD5k ↓</td><td>KID5k ↓</td><td>FID5k ↓</td><td>FDD5k ↓</td><td>KID5k ↓</td></tr></thead>
<tbody><tr><td>Sweep</td><td>158.1</td><td>75.9</td><td>0.169</td><td>162.3</td><td>103.2</td><td>0.203</td></tr>
<tr><td>Zigzag-1</td><td>65.7</td><td>47.8</td><td>0.051</td><td>121.0</td><td>78.0</td><td>0.113</td></tr>
<tr><td>Zigzag-2</td><td>54.7</td><td>45.5</td><td>0.041</td><td>96.0</td><td>59.5</td><td>0.079</td></tr>
<tr><td>Zigzag-8</td><td>45.5</td><td>26.4</td><td>0.011</td><td>34.9</td><td>29.5</td><td>0.023</td></tr></tbody></table>
", + "image_path": "039b2851b49194a91ceadcafad76319c755eb2833a5a395c8dcb64819c471487.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "spans": [ + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "text", + "content": "Video Dataset. UCF101 dataset consists of 13,320 video clips, which are classified into 101 categories. The total length of these video clips is over 27 hours. All these videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of " + }, + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "inline_equation", + "content": "320 \\times 240" + }, + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "text", + "content": ". We randomly sample continuous 16 frames and resize the frames to " + }, + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 130, + 260, + 479, + 319 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 320, + 480, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 320, + 480, + 416 + ], + "spans": [ + { + "bbox": [ + 130, + 320, + 480, + 416 + ], + "type": "text", + "content": "Training Details. We uniformly use AdamW [72] optimizer with " + }, + { + "bbox": [ + 130, + 320, + 480, + 416 + ], + "type": "inline_equation", + "content": "1e - 4" + }, + { + "bbox": [ + 130, + 320, + 480, + 416 + ], + "type": "text", + "content": " learning rate. For extracting latent features, we employ off-the-shelf VAE encoders. To mitigate computational costs, we adopted a mixed-precision training approach. Additionally, we applied gradient clipping with a threshold of 2.0 and a weight decay of 0.01 to prevent NaN occurrences during Mamba training. Most of our experiments were conducted on 4 A100 GPUs, with scalability exploration extended to 16 and 32 A100 GPUs. For sampling, we adopt the ODE sampling for speed consideration. For further details, please refer to the Appendix 8.8." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 434, + 237, + 445 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 434, + 237, + 445 + ], + "spans": [ + { + "bbox": [ + 132, + 434, + 237, + 445 + ], + "type": "text", + "content": "4.2 Ablation Study" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 180, + 517, + 430, + 559 + ], + "blocks": [ + { + "bbox": [ + 130, + 474, + 480, + 506 + ], + "lines": [ + { + "bbox": [ + 130, + 474, + 480, + 506 + ], + "spans": [ + { + "bbox": [ + 130, + 474, + 480, + 506 + ], + "type": "text", + "content": "Table 2: Ablation about Position Embedding (PE) on unconditional CelebA dataset " + }, + { + "bbox": [ + 130, + 474, + 480, + 506 + ], + "type": "inline_equation", + "content": "(256^{2})" + }, + { + "bbox": [ + 130, + 474, + 480, + 506 + ], + "type": "text", + "content": ". To better abate PE and eliminate the conditional signal's influence, we use an unconditional dataset." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 180, + 517, + 430, + 559 + ], + "lines": [ + { + "bbox": [ + 180, + 517, + 430, + 559 + ], + "spans": [ + { + "bbox": [ + 180, + 517, + 430, + 559 + ], + "type": "table", + "html": "
<table><thead><tr><td>FID/FDD ↓</td><td>No PE</td><td>Cosine PE</td><td>Learnable PE</td></tr></thead>
<tbody><tr><td>VisionMamba [123]</td><td>21.33/21.00</td><td>18.47/19.90</td><td>16.38/18.20</td></tr>
<tr><td>ZigMa</td><td>14.27/18.00</td><td>14.04/17.91</td><td>13.32/17.40</td></tr></tbody></table>
", + "image_path": "cb1be07444cafab03559328172dc24760403757ed13e7c7c7daadab899e761e7.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "text", + "content": "Scan Scheme Ablation. We provide several important findings based on our ablation studies on MultiModal-CelebA dataset in various resolutions in Table 1. Firstly, switching the scanning scheme from sweep to zigzag led to some gains. Secondly, as we increased the zigzag scheme from 1 to 8, we saw consistent gains. This indicates that alternating the scanning scheme in various blocks can be beneficial. Finally, the relative gain between Zigzag-1 and Zigzag-8 is more prominent at higher resolutions (" + }, + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 130, + 581, + 482, + 666 + ], + "type": "text", + "content": ", or longer sequence token number)" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 136, + 117, + 282, + 178 + ], + "blocks": [ + { + "bbox": [ + 136, + 117, + 282, + 178 + ], + "lines": [ + { + "bbox": [ + 136, + 117, + 282, + 178 + ], + "spans": [ + { + "bbox": [ + 136, + 117, + 282, + 178 + ], + "type": "image", + "image_path": "53bc43796d12c2f7e7e06e43eb902b0f63f951f7460839d0059c4e0db032d056.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 166, + 184, + 268, + 194 + ], + "lines": [ + { + "bbox": [ + 166, + 184, + 268, + 194 + ], + "spans": [ + { + "bbox": [ + 166, + 184, + 268, + 194 + ], + "type": "text", + "content": "(a) FPS v.s. Patch Number." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 315, + 116, + 462, + 178 + ], + "blocks": [ + { + "bbox": [ + 315, + 116, + 462, + 178 + ], + "lines": [ + { + "bbox": [ + 315, + 116, + 462, + 178 + ], + "spans": [ + { + "bbox": [ + 315, + 116, + 462, + 178 + ], + "type": "image", + "image_path": "6d89946986831453e9d1a17ba75193683e812b5ef23bc2269e03b91b2d2a4f77.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 329, + 183, + 465, + 194 + ], + "lines": [ + { + "bbox": [ + 329, + 183, + 465, + 194 + ], + "spans": [ + { + "bbox": [ + 329, + 183, + 465, + 194 + ], + "type": "text", + "content": "(b) GPU Memory v.s. Patch Number." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 140, + 212, + 286, + 277 + ], + "blocks": [ + { + "bbox": [ + 140, + 212, + 286, + 277 + ], + "lines": [ + { + "bbox": [ + 140, + 212, + 286, + 277 + ], + "spans": [ + { + "bbox": [ + 140, + 212, + 286, + 277 + ], + "type": "image", + "image_path": "c8c1fbb9e3be3d50e53a759e858d96d168a1a691796565422b0a1d96a507a810.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 137, + 281, + 297, + 290 + ], + "lines": [ + { + "bbox": [ + 137, + 281, + 297, + 290 + ], + "spans": [ + { + "bbox": [ + 137, + 281, + 297, + 290 + ], + "type": "text", + "content": "(c) Order Receptive Field v.s. GPU Memory." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 321, + 212, + 465, + 277 + ], + "blocks": [ + { + "bbox": [ + 321, + 212, + 465, + 277 + ], + "lines": [ + { + "bbox": [ + 321, + 212, + 465, + 277 + ], + "spans": [ + { + "bbox": [ + 321, + 212, + 465, + 277 + ], + "type": "image", + "image_path": "748520714e1a920464774140e40b899cb47a9374c53f83117e91600e5bb580e3.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 332, + 281, + 460, + 290 + ], + "lines": [ + { + "bbox": [ + 332, + 281, + 460, + 290 + ], + "spans": [ + { + "bbox": [ + 332, + 281, + 460, + 290 + ], + "type": "text", + "content": "(d) Order Receptive Field v.s. FPS." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 131, + 299, + 482, + 344 + ], + "lines": [ + { + "bbox": [ + 131, + 299, + 482, + 344 + ], + "spans": [ + { + "bbox": [ + 131, + 299, + 482, + 344 + ], + "type": "text", + "content": "Figure 6: (a, b).GPU Memory usage and FPS between our method and transformer-based methods(U-VIT [9] and DiT [80]). (c). Order Receptive Field and GPU memory (d). Order Receptive Field and FPS. Order Receptive Field denotes how many scan paths we consider in our network design." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 368, + 480, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 368, + 480, + 403 + ], + "spans": [ + { + "bbox": [ + 130, + 368, + 480, + 403 + ], + "type": "text", + "content": "compared to lower resolutions (" + }, + { + "bbox": [ + 130, + 368, + 480, + 403 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 130, + 368, + 480, + 403 + ], + "type": "text", + "content": ", or shorter sequence token number), this shows the great potential and more efficient inductive-bias incorporation in longer sequence number." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 405, + 482, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 405, + 482, + 523 + ], + "spans": [ + { + "bbox": [ + 130, + 405, + 482, + 523 + ], + "type": "text", + "content": "Ablation about Position Embedding. As shown in Table 2, the learnable embedding performs better than the Sinusoidal embedding, which in turn performs better than no position embedding. In various cases, our zigzag method surpasses the baselines. Notably, our performance remains almost unchanged whether we use the Sinusoidal position embedding or no position embedding. This suggests that our method can better incorporate spatial inductive-bias compared to our baseline. 
Finally, using the learnable position embedding provides further, albeit marginal, gains suggesting that better position embedding exists even within our zigzag scan scheme. We find that [79] shares the same conclusion as us in video-related tasks." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "spans": [ + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "text", + "content": "Ablation study about the Network and FPS/GPU-Memory. In Figure 6 (a,b), we analyze the forward speed and GPU memory usage while varying the global patch dimensions from " + }, + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "inline_equation", + "content": "32 \\times 32" + }, + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "inline_equation", + "content": "196 \\times 196" + }, + { + "bbox": [ + 130, + 525, + 482, + 645 + ], + "type": "text", + "content": ". For the speed analysis, we report Frame Per Second (FPS) instead of FLOPS, as FPS provides a more explicit and appropriate evaluation of speed2. For simplicity, we uniformly apply the zigzag-1 Mamba scan scheme and use batch size=1 and patch size=1 on an A100 GPU with 80GB memory. It's worth noting that all methods share nearly identical parameter numbers for fair comparison. We primarily compare our method with two popular transformer-based Diffusion backbones, U-ViT [9] and DiT [80]. It is evident that our method achieves the best FPS and GPU" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 133, + 653, + 468, + 665 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 653, + 468, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 653, + 468, + 665 + ], + "type": "text", + "content": "2 https://github.com/state-spaces/mamba/issues/110#issuecomment-1916464012" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "spans": [ + { + "bbox": [ + 130, + 116, + 482, + 164 + ], + "type": "text", + "content": "utilization when gradually increasing the patching number. U-ViT demonstrates the worst performance, even exceeds the memory bounds when the patch number is 196. Surprisingly, DiT's GPU utilization is close to our method, which supports our backbone choice of DiT from a practical perspective." 
+ } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 136, + 280, + 298, + 334 + ], + "blocks": [ + { + "bbox": [ + 132, + 203, + 301, + 269 + ], + "lines": [ + { + "bbox": [ + 132, + 203, + 301, + 269 + ], + "spans": [ + { + "bbox": [ + 132, + 203, + 301, + 269 + ], + "type": "text", + "content": "Table 3: Main result on the FacesHQ-1024 dataset with 4,096 tokens in latent space and " + }, + { + "bbox": [ + 132, + 203, + 301, + 269 + ], + "type": "inline_equation", + "content": "\\mathbf{bs} = \\mathbf{512}" + }, + { + "bbox": [ + 132, + 203, + 301, + 269 + ], + "type": "text", + "content": ". Our method outperforms the baseline and achieves even better results when the training scale is increased." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 136, + 280, + 298, + 334 + ], + "lines": [ + { + "bbox": [ + 136, + 280, + 298, + 334 + ], + "spans": [ + { + "bbox": [ + 136, + 280, + 298, + 334 + ], + "type": "table", + "html": "
<table><thead><tr><td>Method</td><td>FID5k↓</td><td>FDD5k↓</td></tr></thead>
<tbody><tr><td>VisionMamba [123]</td><td>51.1</td><td>66.3</td></tr>
<tr><td>ZigMa</td><td>37.8</td><td>50.5</td></tr>
<tr><td>ZigMa bs × 2</td><td>26.6</td><td>31.2</td></tr></tbody></table>
", + "image_path": "4680e9454e521872956e603986c45c474974f695366fbab446c6d986d7f782d0.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 138, + 381, + 294, + 425 + ], + "blocks": [ + { + "bbox": [ + 132, + 338, + 301, + 371 + ], + "lines": [ + { + "bbox": [ + 132, + 338, + 301, + 371 + ], + "spans": [ + { + "bbox": [ + 132, + 338, + 301, + 371 + ], + "type": "text", + "content": "Table 5: Transformer-based methods comparison on unconditional CelebA256." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 138, + 381, + 294, + 425 + ], + "lines": [ + { + "bbox": [ + 138, + 381, + 294, + 425 + ], + "spans": [ + { + "bbox": [ + 138, + 381, + 294, + 425 + ], + "type": "table", + "html": "
<table><thead><tr><td>Method</td><td>FID↓</td><td>Memory(G) ↓</td><td>FLOPS(G) ↓</td></tr></thead>
<tbody><tr><td>U-ViT</td><td>14.50</td><td>35.10</td><td>12.5</td></tr>
<tr><td>DiT</td><td>14.64</td><td>29.20</td><td>5.5</td></tr>
<tr><td>ZigMa</td><td>14.27</td><td>17.80</td><td>5.2</td></tr></tbody></table>
", + "image_path": "eb6331090f8407c3cff999601c273569dfcedc56f997b1b76806fe12e95073a6.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 341, + 274, + 462, + 338 + ], + "blocks": [ + { + "bbox": [ + 321, + 198, + 482, + 263 + ], + "lines": [ + { + "bbox": [ + 321, + 198, + 482, + 263 + ], + "spans": [ + { + "bbox": [ + 321, + 198, + 482, + 263 + ], + "type": "text", + "content": "Table 4: Main Results on MS-COCO dataset with " + }, + { + "bbox": [ + 321, + 198, + 482, + 263 + ], + "type": "inline_equation", + "content": "\\mathrm{bs} = {256}" + }, + { + "bbox": [ + 321, + 198, + 482, + 263 + ], + "type": "text", + "content": " . Our method consistently outperforms the baseline. ZigMa with 8 scans performs much better compared with the baseline." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 341, + 274, + 462, + 338 + ], + "lines": [ + { + "bbox": [ + 341, + 274, + 462, + 338 + ], + "spans": [ + { + "bbox": [ + 341, + 274, + 462, + 338 + ], + "type": "table", + "html": "
<table><thead><tr><td>Method</td><td>FID5k↓</td></tr></thead>
<tbody><tr><td>Sweep</td><td>195.1</td></tr>
<tr><td>Zigzag-1</td><td>73.1</td></tr>
<tr><td>VisionMamba [123]</td><td>60.2</td></tr>
<tr><td>Zigzag-8</td><td>41.8</td></tr></tbody></table>
", + "image_path": "8b2b757f5510637d9e945f15261b0a6600a3876f45f3173c99b4c4189453698b.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 334, + 371, + 476, + 425 + ], + "blocks": [ + { + "bbox": [ + 329, + 339, + 481, + 360 + ], + "lines": [ + { + "bbox": [ + 329, + 339, + 481, + 360 + ], + "spans": [ + { + "bbox": [ + 329, + 339, + 481, + 360 + ], + "type": "text", + "content": "Table 6: Video Scan Scheme on UCF101 dataset with " + }, + { + "bbox": [ + 329, + 339, + 481, + 360 + ], + "type": "inline_equation", + "content": "\\mathrm{bs} = {32}" + }, + { + "bbox": [ + 329, + 339, + 481, + 360 + ], + "type": "text", + "content": " ." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 334, + 371, + 476, + 425 + ], + "lines": [ + { + "bbox": [ + 334, + 371, + 476, + 425 + ], + "spans": [ + { + "bbox": [ + 334, + 371, + 476, + 425 + ], + "type": "table", + "html": "
<table><thead><tr><td>Method</td><td>Frame-FID5k↓</td><td>FVD5k↓</td></tr></thead>
<tbody><tr><td>Bidirection [123]</td><td>256.1</td><td>320.2</td></tr>
<tr><td>3D Zigzag</td><td>238.1</td><td>282.3</td></tr>
<tr><td>Our</td><td>216.1</td><td>210.2</td></tr>
<tr><td>Bidirection [123] bs×4</td><td>146.2</td><td>201.1</td></tr>
<tr><td>ZigMa bs×4</td><td>121.2</td><td>140.1</td></tr></tbody></table>
", + "image_path": "58538241742d093dabfc56d6b2729acd18265eceac8e3add2b0a1e3f9368f047.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 460, + 482, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 460, + 482, + 521 + ], + "spans": [ + { + "bbox": [ + 130, + 460, + 482, + 521 + ], + "type": "text", + "content": "Order Receptive Field. We propose a new concept in Mamba-based structure for multidimensional data. Given that various spatially-continuous zigzag paths may exist in multidimensional data, we introduce the term Order Receptive Field which denotes the number of zigzag paths explicitly employed in the network design." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 527, + 493, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 527, + 493, + 612 + ], + "spans": [ + { + "bbox": [ + 130, + 527, + 493, + 612 + ], + "type": "text", + "content": "Ablation study about the Order Receptive Field and FPS/GPU-Memory. As depicted in Fig. 6 (c,d), Zigzag Mamba consistently maintains its GPU memory consumption and FPS rate, even with a gradually increasing Order Receptive Field. In contrast, our primary baseline, Parallel Mamba, along with variants like Bidirectional Mamba and Vision Mamba [70, 123], experience a consistent decrease in FPS due to increased parameters. Notably, Zigzag Mamba, with an Order Receptive Field of 8, can perform faster without altering parameters." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 617, + 482, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 617, + 482, + 665 + ], + "spans": [ + { + "bbox": [ + 130, + 617, + 482, + 665 + ], + "type": "text", + "content": "Comparison with transformer-based methods. We show the result in Table 5 on unconditional generation task. Our method achieves performance comparable to Transformer-based methods, with significantly less memory consumption and fewer FLOPS." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 480, + 100 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 223, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 223, + 126 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 223, + 126 + ], + "type": "text", + "content": "4.3 Main Result" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "spans": [ + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "text", + "content": "Main Result on " + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "text", + "content": " FacesHQ. 
To elaborate on the scalability of our method within the Mamba and Stochastic Interpolant framework, we provide comparisons on a high-resolution dataset (" + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "inline_equation", + "content": "1024 \\times 1024" + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "text", + "content": " FacesHQ) in Table 3. Our primary comparison is against Bidirectional Mamba, a commonly used solution for applying Mamba to 2D image data [70, 123]. With the aim of investigating Mamba's scalability in large resolutions up to 1,024, we employ the diffusion model on the latent space of " + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "inline_equation", + "content": "128 \\times 128" + }, + { + "bbox": [ + 130, + 133, + 482, + 289 + ], + "type": "text", + "content": " with a patch size of 2, resulting in 4,096 tokens. The network is trained on 16 A100 GPUs. Notably, our method demonstrates superior results compared to Bidirectional Mamba. Details regarding loss, FID curves, and visualization can be found in the Appendix. While constrained by GPU resource limitations, preventing longer training duration, we anticipate consistent outperformance of Bidirectional Mamba with extended training duration." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 289, + 482, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 289, + 482, + 397 + ], + "spans": [ + { + "bbox": [ + 130, + 289, + 482, + 397 + ], + "type": "text", + "content": "COCO dataset. To further compare the performance of our method, we also evaluate it on the more complex and common dataset MS COCO. We compare with the Bidirection Mamba as the baseline in Table 4. It should be noted that all methods share nearly identical parameter numbers for fair comparison. We trained all methods using 16 A100 GPUs. please check Appendix 8.8 for details. As depicted in Table 4, our Zigzag-8 method outperforms Bidirectional Mamba as well as Zigzag-1. This suggests that amortizing various scanning schemes can yield significant improvements, attributed to better incorporation of the inductive bias for 2D images in Mamba." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 397, + 482, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 397, + 482, + 518 + ], + "spans": [ + { + "bbox": [ + 130, + 397, + 482, + 518 + ], + "type": "text", + "content": "UCF101 dataset. In Table 6, we present our results on the UCF101 dataset, training all methods using 4 A100 GPUs, with further scalability exploration conducted using 16 A100 GPUs. We mainly compare our method consistently with Vision Mamba [123]. For the choice of the 3D Zigzag Mamba, please refer to Appendix 8.8. For Factorized 3D Zigzag Mamba in video processing, we deploy the sst scheme for factorizing spatial and temporal modeling. This scheme prioritizes spatial information complexity over temporal information, hypothesizing that redundancy exists in the temporal domain. Our results consistently demonstrate the superior performance of our method across various scenarios, underscoring the intricacy and effectiveness of our approach." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 534, + 220, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 534, + 220, + 547 + ], + "spans": [ + { + "bbox": [ + 132, + 534, + 220, + 547 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "spans": [ + { + "bbox": [ + 130, + 557, + 482, + 666 + ], + "type": "text", + "content": "In this paper, we present the Zigzag Mamba Diffusion Model, developed within the Stochastic Interpolant framework. Our initial focus is on addressing the critical issue of spatial continuity. We then devise a Zigzag Mamba block with heterogeneous layerwise scan to better utilize the inductive bias in 2D images. Further, we factorize the 3D Mamba into 2D and 1D Zigzag Mamba to facilitate optimization. We empirically design various ablation studies to examine different factors. This approach allows for a more in-depth exploration of the Stochastic Interpolant theory. We hope our endeavor can inspire further exploration in the Mamba network design." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "spans": [ + { + "bbox": [ + 133, + 114, + 246, + 129 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 130, + 140, + 482, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 140, + 482, + 236 + ], + "spans": [ + { + "bbox": [ + 130, + 140, + 482, + 236 + ], + "type": "text", + "content": "This project has been supported by the German Federal Ministry for Economic Affairs and Climate Action within the project \"NXT GEN AI METHODS - Generative Methoden für Perzeption, Prädiktion und Planung\", the bidt project KLIMA-MEMES, Bayer AG, and the German Research Foundation (DFG) project 421703927. The authors gratefully acknowledge the Gauss Center for Supercomputing for providing compute through the NIC on JUWELS at JSC and the HPC resources supplied by the Erlangen National High Performance Computing Center (NHR@FAU funded by DFG)." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 255, + 197, + 267 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 255, + 197, + 267 + ], + "spans": [ + { + "bbox": [ + 133, + 255, + 197, + 267 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 280, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 141, + 280, + 481, + 301 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 280, + 481, + 301 + ], + "spans": [ + { + "bbox": [ + 141, + 280, + 481, + 301 + ], + "type": "text", + "content": "1. Agarwal, N., Suo, D., Chen, X., Hazan, E.: Spectral state space models. arXiv (2023) 28" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 141, + 303, + 481, + 324 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 303, + 481, + 324 + ], + "spans": [ + { + "bbox": [ + 141, + 303, + 481, + 324 + ], + "type": "text", + "content": "2. Ahamed, M.A., Cheng, Q.: Mambatab: A simple yet effective approach for handling tabular data. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 141, + 324, + 481, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 324, + 481, + 346 + ], + "spans": [ + { + "bbox": [ + 141, + 324, + 481, + 346 + ], + "type": "text", + "content": "3. Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E.: Stochastic interpolants: A unifying framework for flows and diffusions. arXiv (2023) 2, 4, 9, 10" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 141, + 346, + 481, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 346, + 481, + 368 + ], + "spans": [ + { + "bbox": [ + 141, + 346, + 481, + 368 + ], + "type": "text", + "content": "4. Albergo, M.S., Vanden-Eijnden, E.: Building normalizing flows with stochastic interpolants. arXiv (2022) 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 141, + 369, + 481, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 369, + 481, + 390 + ], + "spans": [ + { + "bbox": [ + 141, + 369, + 481, + 390 + ], + "type": "text", + "content": "5. Ali, A., Zimerman, I., Wolf, L.: The hidden attention of mamba models. arXiv (2024) 28" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 141, + 391, + 481, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 391, + 481, + 412 + ], + "spans": [ + { + "bbox": [ + 141, + 391, + 481, + 412 + ], + "type": "text", + "content": "6. Anderson, B.D.: Reverse-time diffusion equation models. Stochastic Processes and their Applications (1982) 9" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 141, + 412, + 481, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 412, + 481, + 434 + ], + "spans": [ + { + "bbox": [ + 141, + 412, + 481, + 434 + ], + "type": "text", + "content": "7. Anthony, Q., Tokpanov, Y., Glorioso, P., Millidge, B.: Blackmamba: Mixture of experts for state-space models. arXiv (2024) 28" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 141, + 434, + 481, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 434, + 481, + 468 + ], + "spans": [ + { + "bbox": [ + 141, + 434, + 481, + 468 + ], + "type": "text", + "content": "8. 
Ao, S., Zhao, W., Han, X., Yang, C., Liu, Z., Shi, C., Sun, M., Wang, S., Su, T.: Burstattention: An efficient distributed attention framework for extremely long sequences. arXiv (2024) 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 141, + 468, + 481, + 489 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 468, + 481, + 489 + ], + "spans": [ + { + "bbox": [ + 141, + 468, + 481, + 489 + ], + "type": "text", + "content": "9. Bao, F., Li, C., Cao, Y., Zhu, J.: All are worth words: a vit backbone for score-based diffusion models. CVPR (2023) 1, 3, 5, 12, 23" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "spans": [ + { + "bbox": [ + 138, + 490, + 481, + 521 + ], + "type": "text", + "content": "10. Bao, F., Nie, S., Xue, K., Li, C., Pu, S., Wang, Y., Yue, G., Cao, Y., Su, H., Zhu, J.: One transformer fits all distributions in multi-modal diffusion at scale. arXiv (2023) 1, 3, 6" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "spans": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "text", + "content": "11. Beck, M., Poppel, K., Spanring, M., Auer, A., Prudnikova, O., Kopp, M., Klambauer, G., Brandstetter, J., Hochreiter, S.: xlstm: Extended long short-term memory (2024) 22" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 555, + 481, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 555, + 481, + 577 + ], + "spans": [ + { + "bbox": [ + 138, + 555, + 481, + 577 + ], + "type": "text", + "content": "12. Behrouz, A., Hashemi, F.: Graph mamba: Towards learning on graphs with state space models. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 578, + 481, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 578, + 481, + 598 + ], + "spans": [ + { + "bbox": [ + 138, + 578, + 481, + 598 + ], + "type": "text", + "content": "13. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer. arXiv (2020) 1, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 600, + 481, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 600, + 481, + 632 + ], + "spans": [ + { + "bbox": [ + 138, + 600, + 481, + 632 + ], + "type": "text", + "content": "14. Ben-Hamu, H., Cohen, S., Bose, J., Amos, B., Grover, A., Nickel, M., Chen, R.T., Lipman, Y.: Matching normalizing flows and probability paths on manifolds. In: ICML (2022) 4" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 632, + 481, + 665 + ], + "type": "text", + "content": "15. Brandon, W., Nrusimha, A., Qian, K., Ankner, Z., Jin, T., Song, Z., Ragan-Kelley, J.: Striped attention: Faster ring attention for causal transformers. 
arXiv preprint arXiv:2311.09431 (2023) 2" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 92, + 480, + 100 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 137, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "spans": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "type": "text", + "content": "16. Chefer, H., Gur, S., Wolf, L.: Transformer interpretability beyond attention visualization. In: CVPR (2021) 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 140, + 480, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 140, + 480, + 161 + ], + "spans": [ + { + "bbox": [ + 138, + 140, + 480, + 161 + ], + "type": "text", + "content": "17. Chen, R.T., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. NeurIPS (2018) 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 162, + 480, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 162, + 480, + 194 + ], + "spans": [ + { + "bbox": [ + 138, + 162, + 480, + 194 + ], + "type": "text", + "content": "18. Chen, S., Xu, M., Ren, J., Cong, Y., He, S., Xie, Y., Sinha, A., Luo, P., Xiang, T., Perez-Rua, J.M.: Gentron: Delving deep into diffusion transformers for image and video generation. arXiv (2023) 3, 6" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 195, + 480, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 195, + 480, + 217 + ], + "spans": [ + { + "bbox": [ + 138, + 195, + 480, + 217 + ], + "type": "text", + "content": "19. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers. arXiv (2019) 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 218, + 480, + 250 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 218, + 480, + 250 + ], + "spans": [ + { + "bbox": [ + 138, + 218, + 480, + 250 + ], + "type": "text", + "content": "20. Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al.: Rethinking attention with performers. arXiv (2020) 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 251, + 480, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 251, + 480, + 283 + ], + "spans": [ + { + "bbox": [ + 138, + 251, + 480, + 283 + ], + "type": "text", + "content": "21. Crowson, K., Baumann, S.A., Birch, A., Abraham, T.M., Kaplan, D.Z., Shippole, E.: Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers. 
arXiv (2024) 29" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 285, + 480, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 285, + 480, + 306 + ], + "spans": [ + { + "bbox": [ + 138, + 285, + 480, + 306 + ], + "type": "text", + "content": "22. Dao, Q., Phung, H., Nguyen, B., Tran, A.: Flow matching in latent space. arXiv (2023) 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 308, + 480, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 308, + 480, + 328 + ], + "spans": [ + { + "bbox": [ + 138, + 308, + 480, + 328 + ], + "type": "text", + "content": "23. Dao, T., Fu, D., Ermon, S., Rudra, A., Ré, C.: Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS (2022) 2, 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 330, + 480, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 330, + 480, + 361 + ], + "spans": [ + { + "bbox": [ + 138, + 330, + 480, + 361 + ], + "type": "text", + "content": "24. Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A.P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al.: Scaling vision transformers to 22 billion parameters. In: ICML (2023) 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 363, + 480, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 363, + 480, + 396 + ], + "spans": [ + { + "bbox": [ + 138, + 363, + 480, + 396 + ], + "type": "text", + "content": "25. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021) 23, 27" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 397, + 480, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 397, + 480, + 418 + ], + "spans": [ + { + "bbox": [ + 138, + 397, + 480, + 418 + ], + "type": "text", + "content": "26. Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021) 10" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 419, + 480, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 419, + 480, + 440 + ], + "spans": [ + { + "bbox": [ + 138, + 419, + 480, + 440 + ], + "type": "text", + "content": "27. Fei, Z., Fan, M., Yu, C., Huang, J.: Scalable diffusion models with state space backbone. arXiv (2024) 3, 4, 28" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 441, + 480, + 463 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 441, + 480, + 463 + ], + "spans": [ + { + "bbox": [ + 138, + 441, + 480, + 463 + ], + "type": "text", + "content": "28. Fischer, J.S., Gui, M., Ma, P., Stracke, N., Baumann, S.A., Ommer, B.: Boosting latent diffusion with flow matching. ECCV (2024) 4, 10" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 464, + 480, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 464, + 480, + 485 + ], + "spans": [ + { + "bbox": [ + 138, + 464, + 480, + 485 + ], + "type": "text", + "content": "29. Fu, D.Y., Dao, T., Saab, K.K., Thomas, A.W., Rudra, A., Ré, C.: Hungry hungry hippos: Towards language modeling with state space models. 
arXiv (2022) 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 487, + 480, + 508 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 487, + 480, + 508 + ], + "spans": [ + { + "bbox": [ + 138, + 487, + 480, + 508 + ], + "type": "text", + "content": "30. Fuest, M., Ma, P., Gui, M., Fischer, J.S., Hu, V.T., Ommer, B.: Diffusion models and representation learning: A survey. arXiv preprint arXiv:2407.00783 (2024) 1" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 509, + 480, + 540 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 509, + 480, + 540 + ], + "spans": [ + { + "bbox": [ + 138, + 509, + 480, + 540 + ], + "type": "text", + "content": "31. Gong, H., Kang, L., Wang, Y., Wan, X., Li, H.: nnmamba: 3d biomedical image segmentation, classification and landmark detection with state space model. arXiv (2024) 28" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 543, + 480, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 543, + 480, + 563 + ], + "spans": [ + { + "bbox": [ + 138, + 543, + 480, + 563 + ], + "type": "text", + "content": "32. Gong, J., Foo, L.G., Fan, Z., Ke, Q., Rahmani, H., Liu, J.: Diffpose: Toward more reliable 3d pose estimation. In: CVPR (2023) 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 565, + 480, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 565, + 480, + 586 + ], + "spans": [ + { + "bbox": [ + 138, + 565, + 480, + 586 + ], + "type": "text", + "content": "33. Gu, A., Dao, T.: Mamba: Linear-time sequence modeling with selective state spaces. CoLM (2024) 2, 3, 4, 5" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 588, + 480, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 588, + 480, + 609 + ], + "spans": [ + { + "bbox": [ + 138, + 588, + 480, + 609 + ], + "type": "text", + "content": "34. Gu, A., Goel, K., Gupta, A., Ré, C.: On the parameterization and initialization of diagonal state space models. NeurIPS (2022) 2, 4, 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 610, + 480, + 631 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 610, + 480, + 631 + ], + "spans": [ + { + "bbox": [ + 138, + 610, + 480, + 631 + ], + "type": "text", + "content": "35. Gu, A., Goel, K., Ré, C.: Efficiently modeling long sequences with structured state spaces (2021) 2, 4, 5" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 632, + 480, + 665 + ], + "type": "text", + "content": "36. Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., Ré, C.: Combining recurrent, convolutional, and continuous-time models with linear state space layers. 
NeurIPS (2021) 2, 5" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 138, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 138, + 116, + 480, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 116, + 480, + 149 + ], + "spans": [ + { + "bbox": [ + 138, + 116, + 480, + 149 + ], + "type": "text", + "content": "37. Gui, M., Fischer, J.S., Prestel, U., Ma, P., Kotovenko, D., Grebenkova, O., Baumann, S.A., Hu, V.T., Ommer, B.: Depthfm: Fast monocular depth estimation with flow matching. arXiv preprint arXiv:2403.13788 (2024) 4" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 150, + 480, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 150, + 480, + 171 + ], + "spans": [ + { + "bbox": [ + 138, + 150, + 480, + 171 + ], + "type": "text", + "content": "38. Guo, H., Li, J., Dai, T., Ouyang, Z., Ren, X., Xia, S.T.: Mambair: A simple baseline for image restoration with state-space model. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 172, + 480, + 193 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 172, + 480, + 193 + ], + "spans": [ + { + "bbox": [ + 138, + 172, + 480, + 193 + ], + "type": "text", + "content": "39. Gupta, A., Gu, A., Berant, J.: Diagonal state spaces are as effective as structured state spaces. NeurIPS (2022) 2, 4, 5" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 194, + 480, + 225 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 194, + 480, + 225 + ], + "spans": [ + { + "bbox": [ + 138, + 194, + 480, + 225 + ], + "type": "text", + "content": "40. He, W., Han, K., Tang, Y., Wang, C., Yang, Y., Guo, T., Wang, Y.: Densemamba: State space models with dense hidden connection for efficient large language models. arXiv (2024) 28" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 227, + 480, + 248 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 227, + 480, + 248 + ], + "spans": [ + { + "bbox": [ + 138, + 227, + 480, + 248 + ], + "type": "text", + "content": "41. He, X., Cao, K., Yan, K., Li, R., Xie, C., Zhang, J., Zhou, M.: Pan-mamba: Effective pan-sharpening with state space model. arXiv (2024) 28" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 249, + 480, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 249, + 480, + 270 + ], + "spans": [ + { + "bbox": [ + 138, + 249, + 480, + 270 + ], + "type": "text", + "content": "42. Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y., Cohen-Or, D.: Prompt-to-prompt image editing with cross attention control. 
arXiv (2022) 8" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 271, + 480, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 271, + 480, + 292 + ], + "spans": [ + { + "bbox": [ + 138, + 271, + 480, + 292 + ], + "type": "text", + "content": "43. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: NeurIPS (2020) 2, 3, 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 293, + 480, + 313 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 293, + 480, + 313 + ], + "spans": [ + { + "bbox": [ + 138, + 293, + 480, + 313 + ], + "type": "text", + "content": "44. Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., Fleet, D.J.: Video diffusion models. In: ARXIV (2022) 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 315, + 480, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 315, + 480, + 335 + ], + "spans": [ + { + "bbox": [ + 138, + 315, + 480, + 335 + ], + "type": "text", + "content": "45. Hu, V.T., Chen, Y., Caron, M., Asano, Y.M., Snoek, C.G., Ommer, B.: Guided diffusion from self-supervised diffusion features. In: ARXIV (2023) 1" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 336, + 480, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 336, + 480, + 369 + ], + "spans": [ + { + "bbox": [ + 138, + 336, + 480, + 369 + ], + "type": "text", + "content": "46. Hu, V.T., Wu, D., Asano, Y., Mettes, P., Fernando, B., Ommer, B., Snoek, C.: Flow matching for conditional text generation in a few sampling steps pp. 380-392 (2024) 4" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 369, + 480, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 369, + 480, + 402 + ], + "spans": [ + { + "bbox": [ + 138, + 369, + 480, + 402 + ], + "type": "text", + "content": "47. Hu, V.T., Yin, W., Ma, P., Chen, Y., Fernando, B., Asano, Y.M., Gavves, E., Mettes, P., Ommer, B., Snoek, C.G.: Motion flow matching for human motion synthesis and editing. In: ARXIV (2023) 4" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 403, + 480, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 403, + 480, + 424 + ], + "spans": [ + { + "bbox": [ + 138, + 403, + 480, + 424 + ], + "type": "text", + "content": "48. Hu, V.T., Zhang, D.W., Asano, Y.M., Burghouts, G.J., Snoek, C.G.M.: Self-guided diffusion models. In: CVPR (2023) 1" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 425, + 480, + 456 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 425, + 480, + 456 + ], + "spans": [ + { + "bbox": [ + 138, + 425, + 480, + 456 + ], + "type": "text", + "content": "49. Hu, V.T., Zhang, D.W., Mettes, P., Tang, M., Zhao, D., Snoek, C.G.: Latent space editing in transformer-based flow matching. In: ICML 2023 Workshop, New Frontiers in Learning, Control, and Dynamical Systems (2023) 4" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 457, + 480, + 479 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 457, + 480, + 479 + ], + "spans": [ + { + "bbox": [ + 138, + 457, + 480, + 479 + ], + "type": "text", + "content": "50. Huang, Z., Zhou, P., Yan, S., Lin, L.: Scalelong: Towards more stable training of diffusion model via scaling network long skip connection. 
NeurIPS (2024) 1" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 479, + 480, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 479, + 480, + 511 + ], + "spans": [ + { + "bbox": [ + 138, + 479, + 480, + 511 + ], + "type": "text", + "content": "51. Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650 (2021) 29" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 512, + 480, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 512, + 480, + 533 + ], + "spans": [ + { + "bbox": [ + 138, + 512, + 480, + 533 + ], + "type": "text", + "content": "52. Karras, T., Aittala, M., Aila, T., Laine, S.: Elucidating the design space of diffusion-based generative models. In: NeurIPS (2022) 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 534, + 480, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 534, + 480, + 555 + ], + "spans": [ + { + "bbox": [ + 138, + 534, + 480, + 555 + ], + "type": "text", + "content": "53. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR (2019) 10" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 555, + 480, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 555, + 480, + 578 + ], + "spans": [ + { + "bbox": [ + 138, + 555, + 480, + 578 + ], + "type": "text", + "content": "54. Kingma, D., Salimans, T., Poole, B., Ho, J.: Variational diffusion models. In: NeurIPS (2021) 10" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 578, + 480, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 578, + 480, + 599 + ], + "spans": [ + { + "bbox": [ + 138, + 578, + 480, + 599 + ], + "type": "text", + "content": "55. Kingma, D.P., Gao, R.: Understanding the diffusion objective as a weighted integral of ellb. arXiv (2023) 10" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 600, + 480, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 600, + 480, + 620 + ], + "spans": [ + { + "bbox": [ + 138, + 600, + 480, + 620 + ], + "type": "text", + "content": "56. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: The efficient transformer. arXiv (2020) 1" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 138, + 621, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 621, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 138, + 621, + 480, + 643 + ], + "type": "text", + "content": "57. Lee, S., Kim, B., Ye, J.C.: Minimizing trajectory curvature of ode-based generative models. ICML (2023) 4" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 643, + 480, + 665 + ], + "type": "text", + "content": "58. Li, K., Li, X., Wang, Y., He, Y., Wang, Y., Wang, L., Qiao, Y.: Videomamba: State space model for efficient video understanding. 
ECCV (2024) 3" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 137, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "spans": [ + { + "bbox": [ + 137, + 116, + 479, + 138 + ], + "type": "text", + "content": "59. Li, S., Singh, H., Grover, A.: Mamba-nd: Selective state space modeling for multidimensional data. arXiv (2024) 3, 28, 29" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 137, + 140, + 480, + 161 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 140, + 480, + 161 + ], + "spans": [ + { + "bbox": [ + 137, + 140, + 480, + 161 + ], + "type": "text", + "content": "60. Li, Y., Bornschein, J., Chen, T.: Denoising autoregressive representation learning. arXiv preprint arXiv:2403.05196 (2024) 29" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 137, + 162, + 480, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 162, + 480, + 195 + ], + "spans": [ + { + "bbox": [ + 137, + 162, + 480, + 195 + ], + "type": "text", + "content": "61. Liang, D., Zhou, X., Wang, X., Zhu, X., Xu, W., Zou, Z., Ye, X., Bai, X.: Pointmamba: A simple state space model for point cloud analysis. arXiv preprint arXiv:2402.10739 (2024) 3, 27, 28" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 137, + 196, + 480, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 196, + 480, + 228 + ], + "spans": [ + { + "bbox": [ + 137, + 196, + 480, + 228 + ], + "type": "text", + "content": "62. Lin, B., Jiang, W., Chen, P., Zhang, Y., Liu, S., Chen, Y.C.: Mtmamba: Enhancing multi-task dense scene understanding by mamba-based decoders. ECCV (2024) 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 137, + 228, + 480, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 228, + 480, + 251 + ], + "spans": [ + { + "bbox": [ + 137, + 228, + 480, + 251 + ], + "type": "text", + "content": "63. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 30" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 137, + 251, + 480, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 251, + 480, + 274 + ], + "spans": [ + { + "bbox": [ + 137, + 251, + 480, + 274 + ], + "type": "text", + "content": "64. Lipman, Y., Chen, R.T., Ben-Hamu, H., Nickel, M., Le, M.: Flow matching for generative modeling. 
ICLR (2023) 2, 4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 137, + 274, + 480, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 274, + 480, + 295 + ], + "spans": [ + { + "bbox": [ + 137, + 274, + 480, + 295 + ], + "type": "text", + "content": "65. Liu, G.H., Chen, T., So, O., Theodorou, E.: Deep generalized schrödinger bridge. NeurIPS (2022) 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 137, + 296, + 480, + 318 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 296, + 480, + 318 + ], + "spans": [ + { + "bbox": [ + 137, + 296, + 480, + 318 + ], + "type": "text", + "content": "66. Liu, H., Zaharia, M., Abbeel, P.: Ring attention with blockwise transformers for near-infinite context. arXiv (2023) 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 137, + 318, + 480, + 351 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 318, + 480, + 351 + ], + "spans": [ + { + "bbox": [ + 137, + 318, + 480, + 351 + ], + "type": "text", + "content": "67. Liu, J., Yang, H., Zhou, H.Y., Xi, Y., Yu, L., Yu, Y., Liang, Y., Shi, G., Zhang, S., Zheng, H., et al.: Swin-umamba: Mamba-based unet withImagenet-based pretraining. arXiv (2024) 2, 6, 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 137, + 352, + 480, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 352, + 480, + 373 + ], + "spans": [ + { + "bbox": [ + 137, + 352, + 480, + 373 + ], + "type": "text", + "content": "68. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv (2022) 4" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 137, + 374, + 480, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 374, + 480, + 396 + ], + "spans": [ + { + "bbox": [ + 137, + 374, + 480, + 396 + ], + "type": "text", + "content": "69. Liu, X., Gong, C., Liu, Q.: Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR (2023) 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 137, + 397, + 480, + 418 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 397, + 480, + 418 + ], + "spans": [ + { + "bbox": [ + 137, + 397, + 480, + 418 + ], + "type": "text", + "content": "70. Liu, Y., Tian, Y., Zhao, Y., Yu, H., Xie, L., Wang, Y., Ye, Q., Liu, Y.: Vmamba: Visual state space model. arXiv (2024) 2, 3, 5, 6, 7, 13, 14, 28, 29" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 137, + 419, + 480, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 419, + 480, + 451 + ], + "spans": [ + { + "bbox": [ + 137, + 419, + 480, + 451 + ], + "type": "text", + "content": "71. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021) 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 137, + 452, + 480, + 473 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 452, + 480, + 473 + ], + "spans": [ + { + "bbox": [ + 137, + 452, + 480, + 473 + ], + "type": "text", + "content": "72. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. 
In: ICLR (2019) 11" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 137, + 475, + 480, + 497 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 475, + 480, + 497 + ], + "spans": [ + { + "bbox": [ + 137, + 475, + 480, + 497 + ], + "type": "text", + "content": "73. Ma, J., Li, F., Wang, B.: U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv (2024) 2, 3, 28" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 137, + 498, + 480, + 530 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 498, + 480, + 530 + ], + "spans": [ + { + "bbox": [ + 137, + 498, + 480, + 530 + ], + "type": "text", + "content": "74. Ma, N., Goldstein, M., Albergo, M.S., Boffi, N.M., Vanden-Eijnden, E., Xie, S.: Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv (2024) 2, 4" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 137, + 531, + 480, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 531, + 480, + 553 + ], + "spans": [ + { + "bbox": [ + 137, + 531, + 480, + 553 + ], + "type": "text", + "content": "75. McKenna, D.M.: Hilbert curves: Outside-in and inside-gone. Mathemaesthetics, Inc (2019) 7, 26" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 137, + 554, + 480, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 554, + 480, + 575 + ], + "spans": [ + { + "bbox": [ + 137, + 554, + 480, + 575 + ], + "type": "text", + "content": "76. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: ECCV (2016) 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 137, + 576, + 480, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 576, + 480, + 609 + ], + "spans": [ + { + "bbox": [ + 137, + 576, + 480, + 609 + ], + "type": "text", + "content": "77. Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., Ré, C.: S4nd: Modeling images and videos as multidimensional signals with state spaces. NeurIPS (2022) 3, 28, 29" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 137, + 609, + 455, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 609, + 455, + 620 + ], + "spans": [ + { + "bbox": [ + 137, + 609, + 455, + 620 + ], + "type": "text", + "content": "78. OpenAI: Sora: Creating video from text (2024), https://openai.com/sora 1, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 137, + 621, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 621, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 137, + 621, + 480, + 643 + ], + "type": "text", + "content": "79. Park, J., Kim, H.S., Ko, K., Kim, M., Kim, C.: Videomamba: Spatio-temporal selective state space model. ECCV (2024) 3, 12" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 137, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 137, + 643, + 480, + 665 + ], + "type": "text", + "content": "80. Peebles, W., Xie, S.: Scalable diffusion models with transformers. 
arXiv (2022) 1, 3, 5, 12, 23" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 138, + 116, + 481, + 665 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 138, + 116, + 481, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 116, + 481, + 160 + ], + "spans": [ + { + "bbox": [ + 138, + 116, + 481, + 160 + ], + "type": "text", + "content": "81. Peng, B., Goldstein, D., Anthony, Q., Albalak, A., Alcaide, E., Biderman, S., Cheah, E., Ferdinan, T., Hou, H., Kazienko, P., et al.: Eagle and finch: Rwkv with matrix-valued states and dynamic recurrence. arXiv preprint arXiv:2404.05892 (2024) 22" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 161, + 481, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 161, + 481, + 182 + ], + "spans": [ + { + "bbox": [ + 138, + 161, + 481, + 182 + ], + "type": "text", + "content": "82. Qin, Z., Yang, S., Sun, W., Shen, X., Li, D., Sun, W., Zhong, Y.: Hgrn2: Gated linear rnns with state expansion. arXiv preprint arXiv:2404.07904 (2024) 22" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 138, + 183, + 481, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 183, + 481, + 215 + ], + "spans": [ + { + "bbox": [ + 138, + 183, + 481, + 215 + ], + "type": "text", + "content": "83. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: ICML (2021) 30" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 138, + 216, + 481, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 216, + 481, + 237 + ], + "spans": [ + { + "bbox": [ + 138, + 216, + 481, + 237 + ], + "type": "text", + "content": "84. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR (2022) 1, 3, 30" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 138, + 237, + 481, + 259 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 237, + 481, + 259 + ], + "spans": [ + { + "bbox": [ + 138, + 237, + 481, + 259 + ], + "type": "text", + "content": "85. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: MICCAI (2015) 6" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 138, + 259, + 481, + 281 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 259, + 481, + 281 + ], + "spans": [ + { + "bbox": [ + 138, + 259, + 481, + 281 + ], + "type": "text", + "content": "86. Ruan, J., Xiang, S.: Vm-unet: Vision mamba unet for medical image segmentation. 
arXiv (2024) 3, 28" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 138, + 281, + 481, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 281, + 481, + 303 + ], + "spans": [ + { + "bbox": [ + 138, + 281, + 481, + 303 + ], + "type": "text", + "content": "87. Skorokhodov, I., Sotnikov, G., Elhoseiny, M.: Aligning latent and image spaces to connect the unconnectable. In: ICCV (2021) 34" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 138, + 304, + 481, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 304, + 481, + 325 + ], + "spans": [ + { + "bbox": [ + 138, + 304, + 481, + 325 + ], + "type": "text", + "content": "88. Smith, J.T., Warrington, A., Linderman, S.W.: Simplified state space layers for sequence modeling. arXiv (2022) 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 138, + 325, + 481, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 325, + 481, + 346 + ], + "spans": [ + { + "bbox": [ + 138, + 325, + 481, + 346 + ], + "type": "text", + "content": "89. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: ICML (2015) 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 138, + 347, + 481, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 347, + 481, + 369 + ], + "spans": [ + { + "bbox": [ + 138, + 347, + 481, + 369 + ], + "type": "text", + "content": "90. Song, Y., Ermon, S.: Generative modeling by estimating gradients of the data distribution. arXiv (2019) 4" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 138, + 369, + 481, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 369, + 481, + 402 + ], + "spans": [ + { + "bbox": [ + 138, + 369, + 481, + 402 + ], + "type": "text", + "content": "91. Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S., Poole, B.: Score-based generative modeling through stochastic differential equations. In: ICLR (2021) 2, 4, 9, 10" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 138, + 402, + 481, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 402, + 481, + 445 + ], + "spans": [ + { + "bbox": [ + 138, + 402, + 481, + 445 + ], + "type": "text", + "content": "92. Stein, G., Cresswell, J., Hosseinzadeh, R., Sui, Y., Ross, B., Villecloze, V., Liu, Z., Caterini, A.L., Taylor, E., Loaiza-Ganem, G.: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. NeurIPS (2023) 29" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 138, + 446, + 481, + 466 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 446, + 481, + 466 + ], + "spans": [ + { + "bbox": [ + 138, + 446, + 481, + 466 + ], + "type": "text", + "content": "93. Sun, Z., Yang, Y., Yoo, S.: Sparse attention with learning to hash. In: ICLR (2021) 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "spans": [ + { + "bbox": [ + 138, + 468, + 481, + 500 + ], + "type": "text", + "content": "94. Tang, R., Liu, L., Pandey, A., Jiang, Z., Yang, G., Kumar, K., Stenetorp, P., Lin, J., Ture, F.: What the daam: Interpreting stable diffusion using cross attention. 
arXiv (2022) 8" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 138, + 501, + 481, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 501, + 481, + 522 + ], + "spans": [ + { + "bbox": [ + 138, + 501, + 481, + 522 + ], + "type": "text", + "content": "95. Tikochinski, R., Goldstein, A., Meiri, Y., Hasson, U., Reichart, R.: An incremental large language model for long text processing in the brain (2024) 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "spans": [ + { + "bbox": [ + 138, + 522, + 481, + 555 + ], + "type": "text", + "content": "96. Tong, A., Malkin, N., Fatras, K., Atanackovic, L., Zhang, Y., Huguet, G., Wolf, G., Bengio, Y.: Simulation-free schr\\'' odinger bridges via score and flow matching. arXiv (2023) 9" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 138, + 555, + 481, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 555, + 481, + 578 + ], + "spans": [ + { + "bbox": [ + 138, + 555, + 481, + 578 + ], + "type": "text", + "content": "97. Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., Gelly, S.: Fvd: A new metric for video generation. ICLR Workshop (2019) 30" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 578, + 481, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 578, + 481, + 599 + ], + "spans": [ + { + "bbox": [ + 138, + 578, + 481, + 599 + ], + "type": "text", + "content": "98. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 27" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 138, + 600, + 481, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 600, + 481, + 621 + ], + "spans": [ + { + "bbox": [ + 138, + 600, + 481, + 621 + ], + "type": "text", + "content": "99. Wang, C., Tsepa, O., Ma, J., Wang, B.: Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv (2024) 28" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 621, + 481, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 621, + 481, + 643 + ], + "spans": [ + { + "bbox": [ + 138, + 621, + 481, + 643 + ], + "type": "text", + "content": "00. Wang, J., Gangavarapu, T., Yan, J.N., Rush, A.M.: Mambabyte: Token-free selective state space model. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "spans": [ + { + "bbox": [ + 138, + 643, + 481, + 665 + ], + "type": "text", + "content": "01. Wang, J., Yan, J.N., Gu, A., Rush, A.M.: Pretraining without attention. 
arXiv (2022) 6" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 101 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 481, + 100 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 116, + 480, + 665 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "spans": [ + { + "bbox": [ + 133, + 116, + 480, + 138 + ], + "type": "text", + "content": "102. Wang, S., Li, Q.: Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization. arXiv (2023) 2, 28" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 133, + 139, + 480, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 139, + 480, + 159 + ], + "spans": [ + { + "bbox": [ + 133, + 139, + 480, + 159 + ], + "type": "text", + "content": "103. Wang, S., Xue, B.: State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. NeurIPS (2024) 2, 28" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 133, + 160, + 480, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 160, + 480, + 191 + ], + "spans": [ + { + "bbox": [ + 133, + 160, + 480, + 191 + ], + "type": "text", + "content": "104. Wang, W., Ma, S., Xu, H., Usuyama, N., Ding, J., Poon, H., Wei, F.: When an image is worth 1,024 x 1,024 words: A case study in computational pathology. arXiv (2023) 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 133, + 192, + 480, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 192, + 480, + 235 + ], + "spans": [ + { + "bbox": [ + 133, + 192, + 480, + 235 + ], + "type": "text", + "content": "105. Wang, X., Wang, S., Ding, Y., Li, Y., Wu, W., Rong, Y., Kong, W., Huang, J., Li, S., Yang, H., Wang, Z., Jiang, B., Li, C., Wang, Y., Tian, Y., Tang, J.: State space model for new-generation network alternative to transformers: A survey (2024) 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 236, + 480, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 236, + 480, + 257 + ], + "spans": [ + { + "bbox": [ + 133, + 236, + 480, + 257 + ], + "type": "text", + "content": "106. Wang, X., Kang, Z., Mu, Y.: Text-controlled motion mamba: Text-instructed temporal grounding of human motion. arXiv preprint arXiv:2404.11375 (2024) 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 133, + 258, + 480, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 258, + 480, + 289 + ], + "spans": [ + { + "bbox": [ + 133, + 258, + 480, + 289 + ], + "type": "text", + "content": "107. Wang, Z., Ma, C.: Semi-mamba-unet: Pixel-level contrastive cross-supervised visual mamba-based unet for semi-supervised medical image segmentation. 
arXiv (2024) 28" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 133, + 289, + 480, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 289, + 480, + 311 + ], + "spans": [ + { + "bbox": [ + 133, + 289, + 480, + 311 + ], + "type": "text", + "content": "108. Wang, Z., Zheng, J.Q., Zhang, Y., Cui, G., Li, L.: Mamba-unet: Unet-like pure visual mamba for medical image segmentation. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 133, + 312, + 480, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 312, + 480, + 342 + ], + "spans": [ + { + "bbox": [ + 133, + 312, + 480, + 342 + ], + "type": "text", + "content": "109. Wu, L., Wang, D., Gong, C., Liu, X., Xiong, Y., Ranjan, R., Krishnamoorthi, R., Chandra, V., Liu, Q.: Fast point cloud generation with straight flows. In: CVPR (2023) 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 133, + 343, + 480, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 343, + 480, + 364 + ], + "spans": [ + { + "bbox": [ + 133, + 343, + 480, + 364 + ], + "type": "text", + "content": "110. Xia, W., Yang, Y., Xue, J.H., Wu, B.: Tedigan: Text-guided diverse face image generation and manipulation. In: CVPR (2021) 10, 30" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 133, + 365, + 480, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 365, + 480, + 385 + ], + "spans": [ + { + "bbox": [ + 133, + 365, + 480, + 385 + ], + "type": "text", + "content": "111. Xing, Z., Ye, T., Yang, Y., Liu, G., Zhu, L.: Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation. arXiv (2024) 3, 28" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 133, + 386, + 480, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 386, + 480, + 406 + ], + "spans": [ + { + "bbox": [ + 133, + 386, + 480, + 406 + ], + "type": "text", + "content": "112. Yan, J.N., Gu, J., Rush, A.M.: Diffusion models without attention. arXiv (2023) 4, 6" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 133, + 407, + 480, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 407, + 480, + 428 + ], + "spans": [ + { + "bbox": [ + 133, + 407, + 480, + 428 + ], + "type": "text", + "content": "113. Yang, S., Wang, B., Shen, Y., Panda, R., Kim, Y.: Gated linear attention transformers with hardware-efficient training. ICML (2024) 22" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 133, + 429, + 480, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 429, + 480, + 460 + ], + "spans": [ + { + "bbox": [ + 133, + 429, + 480, + 460 + ], + "type": "text", + "content": "114. Yang, S., Zhang, Y.: Fla: A triton-based library for hardware-efficient implementations of linear attention mechanism (Jan 2024), https://github.com/sustcsonglin/flashlinear-attention_22" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 133, + 460, + 480, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 460, + 480, + 482 + ], + "spans": [ + { + "bbox": [ + 133, + 460, + 480, + 482 + ], + "type": "text", + "content": "115. Yang, Y., Xing, Z., Zhu, L.: Vivim: a video vision mamba for medical video object segmentation. 
arXiv (2024) 6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 133, + 483, + 480, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 483, + 480, + 514 + ], + "spans": [ + { + "bbox": [ + 133, + 483, + 480, + 514 + ], + "type": "text", + "content": "116. Yu, A., Nigmatov, A., Morozov, D., Mahoney, M.W., Erichson, N.B.: Robustifying state-space models for long sequences via approximate diagonalization. arXiv (2023) 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 133, + 515, + 480, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 515, + 480, + 536 + ], + "spans": [ + { + "bbox": [ + 133, + 515, + 480, + 536 + ], + "type": "text", + "content": "117. Yu, S., Sohn, K., Kim, S., Shin, J.: Video probabilistic diffusion models in projected latent space. In: CVPR (2023) 30" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 133, + 536, + 480, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 536, + 480, + 557 + ], + "spans": [ + { + "bbox": [ + 133, + 536, + 480, + 557 + ], + "type": "text", + "content": "118. Zhang, T., Li, X., Yuan, H., Ji, S., Yan, S.: Point could mamba: Point cloud learning via state space model. arXiv (2024) 28" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 133, + 558, + 480, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 558, + 480, + 578 + ], + "spans": [ + { + "bbox": [ + 133, + 558, + 480, + 578 + ], + "type": "text", + "content": "119. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. In: CVPR (2018) 29" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 133, + 579, + 480, + 610 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 579, + 480, + 610 + ], + "spans": [ + { + "bbox": [ + 133, + 579, + 480, + 610 + ], + "type": "text", + "content": "120. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. ECCV (2024) 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 133, + 611, + 480, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 611, + 480, + 643 + ], + "spans": [ + { + "bbox": [ + 133, + 611, + 480, + 643 + ], + "type": "text", + "content": "121. Zhang, Z., Liu, A., Reid, I., Hartley, R., Zhuang, B., Tang, H.: Motion mamba: Efficient and long sequence motion generation with hierarchical and bidirectional selective ssm. arXiv (2024) 28" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 133, + 643, + 480, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 643, + 480, + 665 + ], + "spans": [ + { + "bbox": [ + 133, + 643, + 480, + 665 + ], + "type": "text", + "content": "122. Zheng, Z., Wu, C.: U-shaped vision mamba for single image dehazing. 
arXiv (2024) 3, 28" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "spans": [ + { + "bbox": [ + 133, + 91, + 144, + 100 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "spans": [ + { + "bbox": [ + 166, + 91, + 203, + 100 + ], + "type": "text", + "content": "Hu et al." + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 116, + 482, + 170 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "spans": [ + { + "bbox": [ + 132, + 116, + 482, + 149 + ], + "type": "text", + "content": "123. Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., Wang, X.: Vision mamba: Efficient visual representation learning with bidirectional state space model. ICML (2024) 2, 3, 5, 7, 11, 13, 14, 28" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 150, + 482, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 150, + 482, + 170 + ], + "spans": [ + { + "bbox": [ + 132, + 150, + 482, + 170 + ], + "type": "text", + "content": "124. zhuzilin: Ring flash attention. https://github.com/zhuzilin/ring-flash-attention (2024) 2" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "spans": [ + { + "bbox": [ + 419, + 91, + 447, + 102 + ], + "type": "text", + "content": "ZigMa" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "spans": [ + { + "bbox": [ + 470, + 91, + 479, + 100 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2024/ZipLoRA_ Any Subject in Any Style by Effectively Merging LoRAs/c9a0f3a4-ef1d-4bd3-99ed-57c2d35f2218_content_list.json b/2024/ZipLoRA_ Any Subject in Any Style by Effectively Merging LoRAs/c9a0f3a4-ef1d-4bd3-99ed-57c2d35f2218_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..91132b9a8f2aefbc1abfc115c2690b169fd73b5c --- /dev/null +++ b/2024/ZipLoRA_ Any Subject in Any Style by Effectively Merging LoRAs/c9a0f3a4-ef1d-4bd3-99ed-57c2d35f2218_content_list.json @@ -0,0 +1,1719 @@ +[ + { + "type": "text", + "text": "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs", + "text_level": 1, + "bbox": [ + 276, + 140, + 727, + 186 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Viraj Shah $^{1,2}$ , Nataniel Ruiz $^{1}$ , Forrester Cole $^{1}$ , Erika Lu $^{1}$ , Svetlana Lazebnik $^{2}$ , Yuanzhen Li $^{1}$ , and Varun Jampani $^{1}$", + "bbox": [ + 217, + 210, + 782, + 244 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Google Research $^{2}$ UIUC", + "bbox": [ + 436, + 253, + 563, + 281 + ], 
+ "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/26591bd78b1649606414851ab13c256a5dc14ca88fb0f3197f8211276bb2effd.jpg", + "image_caption": [ + "Fig. 1: By effectively merging independently trained style and content LoRAs, our proposed method ZipLoRA is able to generate any user-provided subject in any user-provided style, providing unprecedented control over personalized creations using diffusion models." + ], + "image_footnote": [], + "bbox": [ + 246, + 313, + 483, + 525 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/0ce71d2783f64ee616d8d8127899f10de7d3506701062c14adf1239463438b2d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 490, + 315, + 756, + 526 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract. Methods for finetuning generative models for concept-driven personalization generally achieve strong results for subject-driven or style-driven generation. Recently, low-rank adaptations (LoRA) have been proposed as a parameter-efficient way of achieving concept-driven personalization. While recent work explores the combination of separate LoRAs to achieve joint generation of learned styles and subjects, existing techniques do not reliably address the problem, so that either subject fidelity or style fidelity are compromised. We propose ZipLoRA, a method to cheaply and effectively merge independently trained style and subject LoRAs in order to achieve generation of any user-provided subject in any user-provided style. Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize.", + "bbox": [ + 259, + 618, + 743, + 813 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Image Stylization $\\cdot$ Diffusion Models $\\cdot$ LoRA Models", + "bbox": [ + 261, + 825, + 700, + 839 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 217, + 143, + 374, + 160 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recently, diffusion models [13, 30, 36] have allowed for impressive image generation quality with their excellent understanding of diverse artistic concepts and enhanced controllability due to multi-modal conditioning support (with text being the most popular mode). The usability and flexibility of generative models has further progressed with a wide variety of personalization approaches, such as DreamBooth [31] and StyleDrop [35]. These approaches fine-tune a base diffusion model on the images of a specific concept to produce novel renditions in various contexts. Such concepts can be a specific object, person, or artistic style.", + "bbox": [ + 212, + 175, + 785, + 296 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "While personalization methods have been used for subjects and styles independently, a key unsolved problem is to generate a specific user-provided subject in a specific user-provided style. For example, an artist may wish to render a specific person in their personal style, learned through examples of their own work. A user may wish to generate images of their child's favorite plush toy, in the style of the child's watercolor paintings. 
Moreover, if this is achieved two problems are simultaneously solved: (1) the task of representing any given subject in any style, and (2) the problem of controlling diffusion models through images rather than text, which can be imprecise and unsuitable for certain generation tasks. Finally, we can imagine a large-scale application of such a tool, where a bank of independently learned styles and subjects are shared and stored online. The task of arbitrarily rendering any subject in any style is an open research problem that we seek to address.", + "bbox": [ + 212, + 296, + 787, + 492 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "A pitfall of recent personalization methods is that many finetune all of the parameters of a large base model, which can be costly. Parameter Efficient Fine-Tuning (PEFT) approaches allow for fine-tuning models for concept-driven personalization with much lower memory and storage budgets. Among the various PEFT approaches, Low Rank Adaptation (LoRA) [14] has emerged as a favored method for researchers and practitioners alike due to its versatility. LoRA learns low-rank factorized weight matrices for the attention layers (these learned weights are themselves commonly referred to as \"LoRAs\"). By combining LoRA and algorithms such as DreamBooth [31], the learned subject-specific LoRA weights enable the model to generate the subject with semantic variations.", + "bbox": [ + 212, + 492, + 787, + 643 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "With the growing popularity of LoRA personalization, there have been attempts to merge LoRA weights, specifically by performing a linear combination of subject and style LoRAs, with variable coefficients [32]. This allows for a control over the \"strength\" of each LoRA, and users sometimes are able, through careful grid search and subjective human evaluation, to find a combination that allows for accurate portrayal of the subject under the specific style. This method lacks robustness across style and subject combinations, and is also incredibly time consuming.", + "bbox": [ + 212, + 643, + 787, + 763 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we propose ZipLoRA, a simple yet effective method to generate any subject in any style by cheaply merging independently trained LoRAs for subject and style. Note that since we aim to achieve custom stylization of a given subject, we focus specifically on merging two LoRAs (one for subject and one for style). Our approach works consistently on a wide variety of subject", + "bbox": [ + 212, + 763, + 787, + 840 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "V. Shah et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "and style LoRAs without enforcing any restriction on the way these are trained. This allows users and artists to easily combine publicly available subject and style LoRAs of their choice. ZipLoRA is hyperparameter-free, i.e. it does not require manual tuning of any hyperparameters or merger weights.", + "bbox": [ + 212, + 146, + 782, + 205 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our approach is based on two important observations. (1) LoRA weights for different layers $\\Delta W_{i}$ (where $i$ denotes the layer) are sparse. 
i.e., most of the elements in $\\Delta W_{i}$ have very small magnitude, and have little effect on generation quality and fidelity. (2) Columns of the weight matrices of two independently trained LoRAs may have varying levels of \"alignment\" between each other, as measured by cosine similarity, for example. We find that directly summing columns that are highly aligned degrades performance of the merged model.", + "bbox": [ + 212, + 207, + 784, + 311 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Based on these observations, we hypothesize that a method that operates akin to a zipper, aiming to reduce the quantity of similar-direction sums while preserving the content and style generation properties of the original LoRAs will yield more robust, higher-quality merges. Much like a zipper seamlessly joins two sides of a fabric, our proposed optimization-based approach finds a disjoint set of merger coefficients for blending the subject and style LoRAs, ensuring that the merge adeptly captures both subject and style. Our optimization process is lightweight and significantly improves the merging performance on challenging content-style combinations, where the two LoRAs are highly aligned.", + "bbox": [ + 212, + 313, + 784, + 446 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While our approach is independent of the model architecture, we further observe that the recently released Stable Diffusion XL (SDXL) model [29] exhibits strong style learning properties, comparable to results shown by StyleDrop [35] on Muse [2]. Specifically, unlike previous versions of Stable Diffusion [30], SDXL is able to learn styles using just a single exemplar image by following a Dream-Booth protocol [31] without any human feedback. This property makes our method particularly effective when applied to SDXL. We summarize our contributions as follows:", + "bbox": [ + 212, + 449, + 784, + 568 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We demonstrate some key observations about current text-to-image diffusion models and personalization methods, particularly in relation to style personalization. We further examine the sparsity of concept-personalized LoRA weight matrix coefficients and the prevalence and deleterious effect of highly aligned columns for LoRA matrices.", + "bbox": [ + 225, + 577, + 782, + 650 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- Using these insights we propose ZipLoRA, a simple optimization method that allows for effective merging of independently trained style and subject LoRAs to allow for the generation of any subject in any style.", + "bbox": [ + 225, + 652, + 782, + 696 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "- We demonstrate the effectiveness of our approach on a variety of image stylization tasks, including content-style transfer and recontextualization. We also demonstrate that ZipLoRA outperforms existing methods of merging LoRAs as well as other baseline approaches.", + "bbox": [ + 225, + 696, + 782, + 753 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 215, + 779, + 387, + 794 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Image Stylization. Image-based style transfer is an area of research dating back at least 20 years [5, 12]. 
Great advances in arbitrary style transfer were", + "bbox": [ + 212, + 809, + 782, + 839 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs", + "bbox": [ + 282, + 114, + 732, + 130 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "achieved by the convolutional neural network-based approaches [9,15,17,24,28]. Generative models such as GANs [18-20] can also be used as a prior for image stylization tasks [1,26,37]. Many recent GAN-based approaches achieve successful one-shot stylizations [3,7,23,25,27,34,38,40-42] by fine-tuning a pre-trained GAN for a given reference style. However, these methods are limited to images from only a single domain (such as faces). Further, most existing GANs do not provide any direct, text-based control over the semantics of the output, thus they cannot produce the reference subject in novel contexts. Methods such as [8,16,22] attempt to modulate the style of the content image using the text description; however, they do not support a style reference image like our approach, and do not provide re-contextualization capability. Compared to older generative models, diffusion models [13,30,36] offer superior generation quality and text-based control; however, to date, it has been difficult to use them for one-shot stylization driven by image examples. Ours is one of the first works demonstrating the use of diffusion models for high-quality example-based stylization combined with an ability to re-contextualize to diverse scenarios.", + "bbox": [ + 212, + 146, + 787, + 388 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Fine-tuning of Diffusion Models for Custom Generation. In the evolving field of text-to-image (T2I) model personalization, recent studies have introduced various methods to fine-tune large-scale T2I diffusion models for depicting specific subjects based on textual descriptions. Techniques like Textual Inversion [6] focus on learning text embeddings, while DreamBooth [31] fine-tunes the entire T2I model for better subject representation. Later methods aim to optimize specific parts of the networks [11, 21]. Additionally, techniques like LoRA [14] and StyleDrop [35] concentrate on optimizing low-rank approximations and a small subset of weights, respectively, for style personalization. DreamArtist [4] introduces a novel one-shot personalization method using a positive-negative prompt tuning strategy. While these fine-tuning approaches yield high-quality results, they typically are limited to learning only one concept (either subject or style). One exception is Custom Diffusion [21], which attempts to learn multiple concepts simultaneously. However, Custom Diffusion requires expensive joint training from scratch and still yields inferior results when used for stylization, as it fails to disentangle the style from the subject.", + "bbox": [ + 212, + 395, + 787, + 636 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Combining LoRAs. Combining different LoRAs remains under-explored in the literature, particularly from the point of view of fusing style and subject concepts. Ryu [32] shows a method to combine independently trained LoRAs by weighted arithmetic summation. In [10], the authors discuss fusing multiple concept LoRAs using a gradient fusion strategy; however, it is an expensive method that requires retraining the entire model. 
Further, since it uses a custom LoRA variant referred to as ED-LoRA, it lacks the flexibility to combine freely available pretrained LoRAs. It also relies on regional prompting that uses different prompts for different regions of the image - a trick that is unsuitable for subject-style merging since the style cannot be localized to any one location in the image. A concurrent work discusses a strategy to obtain a Mixture of Experts by combining multiple LoRAs using a gating function [39]. However, it focuses only on the ability to generate the individual concepts separately, and does not consider the", + "bbox": [ + 212, + 643, + 787, + 840 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "V. Shah et al.", + "bbox": [ + 271, + 114, + 364, + 126 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/e5b4533ed33a993bb6c566c3c88ea5fcb559c664eda536affa785ecd7293ab65.jpg", + "image_caption": [ + "Fig. 2: Overview of ZipLoRA. Our method learns mixing coefficients for each column of $\Delta W_{i}$ for both style and subject LoRAs. It does so by (1) minimizing the difference between subject/style images generated by the mixed LoRA and original subject/style LoRA models, while (2) minimizing the cosine similarity between the columns of content and style LoRAs. In essence, the zipped LoRA tries to conserve the subject and style properties of each individual LoRA, while minimizing signal interference of both LoRAs." + ], + "image_footnote": [], + "bbox": [ + 218, + 143, + 785, + 270 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "problem of combined generation, i.e. generating multiple different concepts (such as object and style) together in a single image.", + "bbox": [ + 212, + 412, + 784, + 444 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Methods", + "text_level": 1, + "bbox": [ + 215, + 474, + 339, + 491 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Background", + "text_level": 1, + "bbox": [ + 215, + 516, + 362, + 532 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Diffusion Models [13, 30, 36] are state-of-the-art generative models known for their high-quality, photorealistic image synthesis. Their training comprises two phases: a forward process, where an image transitions into Gaussian noise through incremental Gaussian noise addition, and a reverse process, reconstructing the original data from the noise. The reverse process is typically learned using a U-net with text conditioning support, enabling text-to-image generation at the time of inference. In our work, we focus on the widely used latent diffusion model [30], which learns the diffusion process in the latent space instead of the image space. In particular, we use Stable Diffusion XL v1 [29] for all our experiments.", + "bbox": [ + 212, + 550, + 787, + 686 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "LoRA Fine-tuning. LoRA (Low-Rank Adaptation) is a method for efficient adaptation of Large Language and Vision Models to a new downstream task [14, 32]. The key concept of LoRA is that the weight updates $\Delta W$ to the base model weights $W_0 \in \mathbb{R}^{m \times n}$ during fine-tuning have a \"low intrinsic rank,\" thus the update $\Delta W$ can be decomposed into two low-rank matrices $B \in \mathbb{R}^{m \times r}$ and $A \in \mathbb{R}^{r \times n}$ for efficient parameterization with $\Delta W = BA$. 
Here, $r$ represents the intrinsic rank of $\Delta W$ with $r \ll \min(m, n)$. During training, only $A$ and $B$ are updated to find a suitable $\Delta W = BA$, while keeping $W_0$ constant. For inference, the updated weight matrix $W$ can be obtained as $W = W_0 + BA$. Due to its efficiency, LoRA is widely used for fine-tuning open-sourced diffusion models.", + "bbox": [ + 212, + 688, + 787, + 840 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs", + "bbox": [ + 282, + 114, + 732, + 130 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Problem Setup", + "text_level": 1, + "bbox": [ + 215, + 146, + 387, + 162 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this work, we aim to produce accurate renditions of a custom object in a given reference style by merging LoRA weights obtained by separately fine-tuning a given text-to-image diffusion model on a few reference images of the object/style.", + "bbox": [ + 212, + 171, + 782, + 215 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We start with a base diffusion model represented as $D$ with pre-trained weights $W_0^{(i)}$ with $i$ as layer index. One can adapt the base model $D$ to any given concept by simply adding the corresponding set of LoRA weights $L_x = \{\Delta W_x^{(i)}\}$ to the model weights. We represent it as: $D_{L_x} = D \oplus L_x = W_0 + \Delta W_x$. We drop the superscript $(i)$ for simplicity since our operations are applied over all the LoRA-enabled weight matrices of our base model $D$.", + "bbox": [ + 212, + 215, + 784, + 310 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We are given two independently trained sets of LoRAs $L_{c} = \{\Delta W_{c}^{(i)}\}$ and $L_{s} = \{\Delta W_{s}^{(i)}\}$ for our base model $D$, and we aim to find a merged LoRA $L_{m} = \{\Delta W_{m}^{(i)}\} = \mathrm{Merge}(L_{c},L_{s})$ that can combine the effects of both the individual LoRAs in order to stylize the given object in a desired reference style.", + "bbox": [ + 215, + 311, + 785, + 378 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Direct Merge. LoRA is popularly used as a plug-and-play module on top of the base model, thus the most common way to combine multiple LoRAs is a simple linear combination [32]:", + "bbox": [ + 215, + 378, + 785, + 422 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\nL_{m} = L_{c} + L_{s} \\Rightarrow \\Delta W_{m} = w_{c} \\cdot \\Delta W_{c} + w_{s} \\cdot \\Delta W_{s}, \\tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 318, + 435, + 784, + 450 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where $w_{c}$ and $w_{s}$ are coefficients of the content and style LoRAs, respectively, which allow for control over the \"strength\" of each LoRA. For a given subject and style LoRA, one may be able to find a particular combination of $w_{c}$ and $w_{s}$ that allows for accurate stylization through careful grid search and subjective human evaluation, but this method is not robust and very time consuming. 
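To make the preceding LoRA background and the direct merge of Eq. 1 concrete, here is a minimal NumPy sketch (an illustrative stand-in, not code from the paper: the layer shape, rank, random factors, and the strengths w_c and w_s are all arbitrary). It applies a low-rank update ΔW = BA to a frozen base weight and then forms the scalar-weighted sum of two independently trained updates:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 64, 4                          # layer shape and LoRA rank (illustrative)

W0 = rng.normal(size=(m, n))                 # frozen base weight W_0

# Two independently trained LoRAs (content c, style s), simulated by random factors.
B_c, A_c = rng.normal(size=(m, r)), rng.normal(size=(r, n))
B_s, A_s = rng.normal(size=(m, r)), rng.normal(size=(r, n))
dW_c = B_c @ A_c                             # Delta W_c = B_c A_c
dW_s = B_s @ A_s                             # Delta W_s = B_s A_s

# Adapting the base model to a single concept: W = W_0 + Delta W.
W_content = W0 + dW_c

# Direct merge (Eq. 1): scalar-weighted sum of the two updates; the strengths
# w_c and w_s must be found by per-pair grid search, which is the weakness noted above.
w_c, w_s = 0.7, 0.8                          # illustrative strengths
W_direct = W0 + w_c * dW_c + w_s * dW_s
print(W_content.shape, W_direct.shape)
```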
To this end, we propose a hyperparameter-free approach that does not require this onerous process.", + "bbox": [ + 212, + 460, + 787, + 568 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 ZipLoRA", + "text_level": 1, + "bbox": [ + 215, + 588, + 339, + 604 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our approach builds on two interesting insights:", + "bbox": [ + 212, + 613, + 563, + 628 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "(1) LoRA update matrices are sparse. We observe that the update matrices $\\Delta W$ for different LoRA layers are sparse, i.e., most of the elements in $\\Delta W$ have a magnitude very close to zero, and thus have little impact on the output of the fine-tuned model. For each layer, we can sort all the elements by their magnitude and zero out the lowest up to a certain percentile. We depict the distribution of elements of $\\Delta W_{i}^{m\\times n}$ in Fig. 3a, along with samples generated after zeroing out $80\\%$ and $90\\%$ of the lowest-magnitude elements of weight update matrix $\\Delta W$ for all the layers. As can be seen, the model performance is unaffected even when $90\\%$ of the elements are thrown away. This observation follows from the fact that the rank of $\\Delta W$ is very small by design, thus the information contained in most columns of $\\Delta W$ is redundant.", + "bbox": [ + 212, + 628, + 787, + 792 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "(2) Highly aligned LoRA weights merge poorly. Columns of the weight matrices of two independently trained LoRAs may contain information that is not disentangled, i.e., the cosine similarity between them can be non-zero. We", + "bbox": [ + 212, + 795, + 787, + 839 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "V. Shah et al.", + "bbox": [ + 271, + 114, + 364, + 127 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b07185882965963f08303748f3d974eb1d0284d7ef12d301a4fd3ee4548e1cf6.jpg", + "image_caption": [ + "(a) LoRA weight matrices are sparse.", + "Fig. 3: Key insights of our approach: (a) Most of the elements in $\\Delta W$ have a magnitude very close to zero, and can be conveniently thrown away without affecting the generation quality of the fine-tuned model. (b) When LoRA weight columns are highly aligned, a direct merge obtains subpar results. Instead, our approach minimizes the mean cosine similarity between the columns of the LoRA updates across the layers." + ], + "image_footnote": [], + "bbox": [ + 225, + 143, + 450, + 277 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5da66e7e7409454a0bc0ac65f82f8378ed9e33c63433cfc2ce82fc2bcacf7010.jpg", + "image_caption": [ + "(b) Highly aligned LoRA weights merge poorly." + ], + "image_footnote": [], + "bbox": [ + 472, + 150, + 777, + 276 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "observe that the extent of alignment between the columns of LoRA weights plays a significant role in determining the quality of resulting merge: if we directly add the columns with non-zero cosine similarity to each other, it leads to superimposition of their information about the individual concepts, resulting in the loss of the ability of the merged model to synthesize input concepts accurately. 
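Both observations can be checked directly on a pair of update matrices. The sketch below uses random matrices as stand-ins for trained LoRA updates, with hypothetical helper names of our own choosing; it zeroes out the lowest-magnitude entries of ΔW (observation 1) and reports the mean column-wise cosine similarity between a content and a style update (observation 2):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 64, 4
dW_c = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # stand-in content update
dW_s = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # stand-in style update

def prune_low_magnitude(dW, percentile=90.0):
    # Observation (1): zero out the lowest-magnitude entries of Delta W.
    threshold = np.percentile(np.abs(dW), percentile)
    return np.where(np.abs(dW) >= threshold, dW, 0.0)

def mean_column_cosine(dW_a, dW_b, eps=1e-8):
    # Observation (2): mean cosine similarity between corresponding columns.
    a = dW_a / (np.linalg.norm(dW_a, axis=0, keepdims=True) + eps)
    b = dW_b / (np.linalg.norm(dW_b, axis=0, keepdims=True) + eps)
    return float(np.mean(np.sum(a * b, axis=0)))

sparse_dW_c = prune_low_magnitude(dW_c, percentile=90.0)   # keep only the top 10% of entries
print('fraction of entries kept:', float(np.mean(sparse_dW_c != 0.0)))
print('mean column cosine similarity:', mean_column_cosine(dW_c, dW_s))
```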
We further observe that such loss of information is avoided when the columns are orthogonal to each other with cosine similarity equal to zero.", + "bbox": [ + 212, + 388, + 784, + 494 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Note that each weight matrix represents a linear transformation defined by its columns, so it is intuitive that the merger would retain the information available in these columns only when the columns that are being added are orthogonal to each other. For most content-style LoRA pairs the cosine similarities are nonzero, resulting in signal interference when they are added directly. In Fig. 3b we show the mean cosine similarity values for each layer of the last U-net block for a particular content-style pair before and after applying ZipLoRA. One can see high non-zero cosine similarity values for the direct merge, which results in poor stylization quality. On the other hand, ZipLoRA reduces the similarity values significantly to achieve a superior result.", + "bbox": [ + 212, + 494, + 784, + 645 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To prevent signal interference during the merger, we multiply each column by a learnable coefficient such that orthogonality between the columns can be achieved. The fact that LoRA updates are sparse allows us to neglect certain columns from each LoRA, thus facilitating the task of minimizing interference. As shown in Fig. 2, we introduce a set of merger coefficient vectors $m_{c}$ and $m_{s}$ for each LoRA layer of the content and style LoRAs, respectively:", + "bbox": [ + 212, + 646, + 784, + 736 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} L_{m} = \\operatorname{Merge}\\left(L_{c}, L_{s}, m_{c}, m_{s}\\right) \\\\ \\Rightarrow \\Delta W_{m} = m_{c} \\otimes \\Delta W_{c} + m_{s} \\otimes \\Delta W_{s}, \\tag{2} \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 372, + 750, + 784, + 782 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\otimes$ represents element-wise multiplication between $\Delta W$ and the broadcasted merger coefficient vector $m$, such that the $j^{th}$ column of $\Delta W$ gets multiplied by the $j^{th}$ element of $m$. The dimensionalities of $m_{c}$ and $m_{s}$ are equal to the number", + "bbox": [ + 212, + 794, + 785, + 840 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs", + "bbox": [ + 282, + 114, + 732, + 128 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 774, + 116, + 784, + 126 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "of columns in the corresponding $\Delta W$; thus, each element of the merger coefficient vector represents the contribution of the corresponding column of the LoRA matrix $\Delta W$ to the final merge.", + "bbox": [ + 212, + 146, + 782, + 191 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Our ZipLoRA approach has two goals: (1) to minimize the interference between content and style LoRAs, defined by the cosine similarity between the columns of the content and style LoRAs, while (2) conserving the capability of the merged LoRA to generate the reference subject and style independently, by minimizing the difference between subject/style images generated by the mixed LoRA and the original subject/style LoRAs. 
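For illustration, a minimal sketch of the per-column merge of Eq. 2 is given below, again with random stand-ins for the trained updates and hypothetical helper names; it scales each column of the content and style updates by its own coefficient before summation and evaluates the alignment of the two coefficient vectors, the quantity that the optimization described next drives toward zero. The actual method optimizes these coefficients with the frozen diffusion model in the loop rather than in isolation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 64, 4
dW_c = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # stand-in content update
dW_s = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # stand-in style update

# One learnable merger coefficient per column of each update (here simply initialized to 1).
m_c = np.ones(n)
m_s = np.ones(n)

def zip_merge(dW_c, dW_s, m_c, m_s):
    # Eq. 2: scale the j-th column of each update by its j-th coefficient, then add.
    return dW_c * m_c[None, :] + dW_s * m_s[None, :]

def coefficient_alignment(m_c, m_s):
    # Per-layer interference term: absolute dot product of the two coefficient vectors.
    return float(np.abs(np.dot(m_c, m_s)))

dW_m = zip_merge(dW_c, dW_s, m_c, m_s)       # merged update Delta W_m for one layer
print('merged update shape:', dW_m.shape)
print('coefficient alignment:', coefficient_alignment(m_c, m_s))
```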
To ensure that the columns that are merged with each other minimize signal interference, our proposed loss seeks to minimize the alignment between the merge vectors $m_{c}$ and $m_{s}$ of each layer. Meanwhile, we wish to ensure that the original behavior of both the style and the content LoRAs is preserved in the merged model. Therefore, as depicted in Fig. 2, we formulate an optimization problem with the following loss function:", + "bbox": [ + 212, + 191, + 787, + 358 + ], + "page_idx": 7 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\mathcal{L}_{\\text{merge}} = \\left\\| \\left(D \\oplus L_{m}\\right)\\left(x_{c}, p_{c}\\right) - \\left(D \\oplus L_{c}\\right)\\left(x_{c}, p_{c}\\right) \\right\\|_{2} \\\\ + \\left\\| \\left(D \\oplus L_{m}\\right)\\left(x_{s}, p_{s}\\right) - \\left(D \\oplus L_{s}\\right)\\left(x_{s}, p_{s}\\right) \\right\\|_{2} \\\\ + \\lambda \\sum_{i} \\left| m_{c}^{(i)} \\cdot m_{s}^{(i)} \\right|, \\tag{3} \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 326, + 371, + 784, + 440 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "where the merged model $L_{m}$ is calculated using $m_{c}$ and $m_{s}$ as per Eq. 2; $(x_{c}, x_{s})$ and $(p_{c}, p_{s})$ are noisy latents and text conditioning prompts for the content and style references, respectively, and $\lambda$ is an appropriate multiplier for the cosine-similarity loss term. Note that the first two terms ensure that the merged model retains the ability to generate individual style and content, while the third term enforces an orthogonality constraint between the columns of the individual LoRA weights. Importantly, we keep the weights of the base model and the individual LoRAs frozen, and update only the merger coefficient vectors. As seen in the next section, such a simple optimization method is effective in producing strong stylization of custom subjects. Further, ZipLoRA requires only 100 gradient updates, which is $10 \times$ fewer than joint training approaches.", + "bbox": [ + 212, + 450, + 787, + 618 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 215, + 641, + 375, + 657 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Datasets. We choose a diverse set of content images from the DreamBooth dataset [31], which provides 30 image sets, each containing 4-5 images of a given subject. Similarly, a diverse set of style reference images is selected from the data provided by the authors of StyleDrop [35]. We use only a single image for each style. The attribution and license information for all the content and style images used are available in the DreamBooth and StyleDrop manuscripts/websites, and we also include them in the supplementary material.", + "bbox": [ + 212, + 672, + 787, + 777 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Experimental Setup. We perform all our experiments using the SDXL v1.0 [29] base model. We use DreamBooth fine-tuning with LoRA of rank 64 for obtaining all the style and content LoRAs. We update the LoRA weights using the Adam optimizer for 1000 steps with a batch size of 1 and a learning rate of 0.00005. We keep", + "bbox": [ + 212, + 779, + 787, + 840 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 217, + 114, + 228, + 126 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "V. 
Shah et al.", + "bbox": [ + 271, + 114, + 364, + 127 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/87b26f367853f66d6cdec20c630f1cace8d888ba2967df4c627349e4f5d5465a.jpg", + "image_caption": [ + "Style Reference" + ], + "image_footnote": [], + "bbox": [ + 308, + 159, + 362, + 202 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/dc37c694ed2e819fcf732b60f37f76ef14b55ac0ace66d7712d7032e43388542.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 308, + 203, + 362, + 251 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/d1885254451922ad9b7d9c0f0173099b8f731fe00f65a17e69efbd2607a2635b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 308, + 252, + 362, + 300 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/5b8fc3273ea95a4f20ac5216e73074d4c0324212eb951cd3cc35ebf87759a59b.jpg", + "image_caption": [ + "A bicycle in [S]" + ], + "image_footnote": [], + "bbox": [ + 375, + 154, + 428, + 202 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/26bd0ccb61efb0a27d5db88d839a0b89c4aa2e69940bd699963d612ccd0fb3bd.jpg", + "image_caption": [ + "bridge in [S] Style", + "A bird in", + "[S]Style" + ], + "image_footnote": [], + "bbox": [ + 431, + 160, + 482, + 200 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/35bb0aa1bf23d5b5c5e626426945d6b85e6691bb609882dbf2cab284afe1cbd3.jpg", + "image_caption": [ + "Golden gate", + "", + "" + ], + "image_footnote": [], + "bbox": [ + 482, + 160, + 534, + 200 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/1c9bbe3481ad4c4f01cea55edfb01342715a49b92379bec1dd6dc6a6744f610e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 537, + 160, + 588, + 200 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7cd70edc6466c7de1d42f6a5ce15820784049a54407a446f83e4bf708e52397a.jpg", + "image_caption": [ + "A hat in", + "[S]Style", + "A piano in", + "[S]Style" + ], + "image_footnote": [], + "bbox": [ + 589, + 160, + 640, + 200 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/f2934f4dc0b85c7f68ccf81a6189f85bb42a64770471d2d23fa44e309a755886.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 643, + 160, + 694, + 200 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/515bc10d54cc704473656aba047b377334fa728568ab4ace0568d5515833c5eb.jpg", + "image_caption": [ + "Style Reference", + "matt black sculpture", + "Fig. 4: Style Learning using DreamBooth on SDXL. Top: SDXL model learns to produce stylized outputs when fine-tuned on a single example of a reference style using LoRA with a DreamBooth objective. Bottom: The stylizations produced by fine-tuned SDXL model are superior to those of other models. Note that unlike StyleDrop, SDXL DreamBooth fine-tuning does not require human feedback." + ], + "image_footnote": [], + "bbox": [ + 307, + 319, + 366, + 359 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/56a5471302a3e06d1f0e20a8a0194dc949c01f83121fb1417a1ff24b7237b0fd.jpg", + "image_caption": [ + "[1. An American strain" + ], + "image_footnote": [], + "bbox": [ + 375, + 311, + 428, + 359 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/b9fc1fdcd786173b308f49c81508b293d4715c64349c632f4c8e30a0e3b7b4d1.jpg", + "image_caption": [ + "et cat in the bathtub; 2. 
An old mas" + ], + "image_footnote": [], + "bbox": [ + 433, + 313, + 504, + 359 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/d1393cb9a0cb6fcbafd92e5c93d76d965eb45c7df588e76e730a8995789967df.jpg", + "image_caption": [ + "with eyeglasses and beard; 3. An old w", + "ard; 3. An old woman w" + ], + "image_footnote": [], + "bbox": [ + 509, + 313, + 562, + 359 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/a30e6c08e4b5638cc5d8d380fff88dfec6123477eaee06d2510db976a66902a3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 563, + 313, + 614, + 359 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/855041286aa0da29cdd88d86168270766a7abdb7a769035580697766097e8d5c.jpg", + "image_caption": [ + "earrings] in [S] style" + ], + "image_footnote": [], + "bbox": [ + 620, + 313, + 691, + 359 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "the text encoders of SDXL frozen during the LoRA fine-tuning. For ZipLoRA, we use $\\lambda = 0.01$ in Eq. 3 for all our experiments, and run the optimization until cosine similarity drops to zero with a maximum number of gradient updates set to 100. We plan to release the implementation of our method in future. To obtain qualitative and quantitative comparisons with existing methods, we use their official open-source implementations except for StyleDrop [35]. Since the official code and the model for StyleDrop is not available publicly, we obtain its results by contacting the authors.", + "bbox": [ + 212, + 464, + 787, + 585 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.1 Style-tuning behavior of SDXL model", + "text_level": 1, + "bbox": [ + 214, + 606, + 573, + 621 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "As discussed in Sec. 3, we observe, surprisingly, that a pre-trained SDXL model exhibits strong style learning when fine-tuned on only one reference style image. We show style-tuning results on SDXL model in Fig. 4. For each reference image, we apply LoRA fine-tuning of SDXL model using DreamBooth objective with LoRA rank $= 64$ . For fine-tuning, we follow a similar prompt formation as provided in StyleDrop: \"an in the