{
"url": "http://arxiv.org/abs/2404.16678v1",
"title": "Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior",
"abstract": "Colorizing grayscale images offers an engaging visual experience. Existing\nautomatic colorization methods often fail to generate satisfactory results due\nto incorrect semantic colors and unsaturated colors. In this work, we propose\nan automatic colorization pipeline to overcome these challenges. We leverage\nthe extraordinary generative ability of the diffusion prior to synthesize color\nwith plausible semantics. To overcome the artifacts introduced by the diffusion\nprior, we apply the luminance conditional guidance. Moreover, we adopt\nmultimodal high-level semantic priors to help the model understand the image\ncontent and deliver saturated colors. Besides, a luminance-aware decoder is\ndesigned to restore details and enhance overall visual quality. The proposed\npipeline synthesizes saturated colors while maintaining plausible semantics.\nExperiments indicate that our proposed method considers both diversity and\nfidelity, surpassing previous methods in terms of perceptual realism and gain\nmost human preference.",
"authors": "Han Wang, Xinning Chai, Yiwen Wang, Yuhong Zhang, Rong Xie, Li Song",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "Automatic colorization synthesizes a colorful and semanti- cally plausible image given a grayscale image. It is a classical computer vision task that has been studied for decades. How- ever, existing automatic colorization methods cannot provide satisfactory solution due to the two main challenges: incorrect semantic colors and unsaturated colors. Aiming to synthesize semantically coherent and percep- tually plausible colors, generative models have been exten- sively incorporated into relevant research. Generative adver- sarial networks (GAN) based [4, 5, 1] and autoregressive- based [6, 2, 7] methods have made notable progress. Al- though the issue of incorrect semantic colors has been par- tially addressed, significant challenges still remain. See the yellow boxes in Figure 1, the semantic errors significantly undermine the visual quality. Recently, Denoising Diffusion Probabilistic Models(DDPM) [8] has demonstrated remark- able performance in the realm of image generation. With its exceptional generation capabilities, superior level of de- tail, and extensive range of variations, DDPM has emerged as a compelling alternative to the GAN. Moreover, the con- trollable generation algorithms based on the diffusion model have achieved impressive performance in various downstream tasks such as T2I [9], image editing [10], super resolu- tion [11], etc. In this work, we leverage the powerful diffu- sion prior to synthesize plausible images that align with real- world common sense. Unfortunately, applying pre-trained diffusion models directly to this pixel-wise conditional task lead to inconsistencies [12] that do not accurately align with the original grayscale input. Therefore, it becomes imperative to provide more effective condition guidance in order to en- sure coherence and fidelity. We align the luminance channel both in the latent and pixel spaces. Specifically, our proposed image-to-image pipeline is fine-tuned based on pre-trained stable diffusion. 
The pixel-level conditions are injected into the latent space to assist the denoising U-Net in producing latent codes that are more faithful to grayscale images. A luminance-aware decoder is applied to mitigate pixel-space distortion. In addition to incorrect semantics, another challenge in this task is unsaturated colors. For example, the oranges in the first two columns of Figure 1 suffer from unsaturated colors. To moderate the unsaturated colors, priors such as categories [5], bounding boxes [13] and saliency maps [14] have been introduced in relevant research. Based on this insight, we adopt multimodal high-level semantic priors to help the model understand the image content and generate vivid colors. To simultaneously generate plausible semantics and vivid colors, multimodal priors, including category, caption, and segmentation, are injected into the generation process in a comprehensive manner. In summary, we propose an automatic colorization pipeline to address the challenges in this task. The contributions of this paper are as follows: \u2022 We extend the stable diffusion model to automatic image colorization by introducing pixel-level grayscale conditions in the denoising diffusion. The pre-trained diffusion priors are employed to generate vivid and plausible colors. [Fig. 1. We achieve saturated and semantically plausible colorization for grayscale images, surpassing the GAN-based (BigColor [1]), transformer-based (CT2 [2]) and diffusion-based (ControlNet [3]) methods.] \u2022 We design a high-level semantic injection module to enhance the model\u2019s capability to produce semantically reasonable colors. \u2022 A luminance-aware decoder is designed to mitigate pixel-domain distortion and make the reconstruction more faithful to the grayscale input. 
\u2022 Quantitative and qualitative experiments demonstrate that our proposed colorization pipeline provides high-fidelity, color-diversified colorization for grayscale images with complex content. A user study further indicates that our pipeline gains more human preference than other state-of-the-art methods.",
"main_content": "Learning-based algorithms have been the mainstream of research on automatic colorization in recent years. Previous methods suffer from unsaturated colors and semantic confusion due to the lack of prior knowledge of color. In order to generate plausible colors, generative models have been applied to automatic colorization tasks, including adversarial generative networks [4, 5, 1] and transformers [6, 2, 7]. Besides, [15] shows that diffusion models are more creative than GAN. DDPM has achieved amazing results in diverse natural image generation. Research based on DDPM has confirmed its ability to handle a variety of downstream tasks, including colorization [16]. To alleviate semantic confusion and synthesize more satisfactory results, priors are introduced into related research, including categories [5], saliency maps [14], bounding boxes [13], etc. 3. METHOD 3.1. Overview A color image ylab, represented in CIELAB color space, contains three channels: lightness channel l and chromatic channels a and b. The automatic colorization aims to recover the chromatic channels from the grayscale image: xgray \u2192\u02c6 ylab. In this work, we propose an automatic colorization pipeline for natural images based on stable diffusion. The pipeline consists of two parts: a variational autoencoder [17] and a denoising U-Net. Explicitly, the VAE is for the transformation between pixel space x \u2208RH\u00d7W \u00d73 and latent space z \u2208Rh\u00d7w\u00d7c. While the denoising U-Net applies DDPM in the latent space to generate an image from Gaussian noise. The framework of our pipeline is shown in Figure 2. First, the VAE encodes grayscale image xgray into latent code zc. Next, the T-step diffusion process generates a clean latent code z0 from Gaussian noise zT under the guidance of image latent zc and high-level semantics. Finally, z0 is reconstructed by a luminance-aware decoder to obtain the color image \u02c6 y. 
The pixel-level grayscale condition and the high-level semantic condition for the denoising process are introduced in the latent space, as shown in the yellow box in Figure 2. We elaborate on the detailed injections of these conditions in Section 3.2 and Section 3.3, respectively. As for the reconstruction process, the detailed design of the luminance-aware decoder is described in Section 3.4. 3.2. Colorization Diffusion Model Large-scale diffusion models have the capability to generate high-resolution images with complex structures. Since naive usage of diffusion priors generates serious artifacts, we introduce pixel-level luminance information to provide detailed guidance. Specifically, we use the encoded grayscale image z_c as the control condition to enhance the U-Net\u2019s understanding of luminance information in the latent space. To involve the grayscale condition in the entire diffusion process, we simultaneously input the latent code z_t generated in the previous time step and the noise-free grayscale latent code z_c into the input layer of the U-Net at each time step t: [Fig. 2. Overview of the proposed automatic colorization pipeline. It combines a semantic prior generator (blue box), a high-level semantic guided diffusion model (yellow box), and a luminance-aware decoder (orange box).] 
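The grayscale conditioning just described, which Eq. (1) in the text formalizes as a 1x1 convolution over the channel-concatenated latents, can be sketched with toy shapes. The function names and sizes below are illustrative assumptions, not the authors' implementation:

```python
# Sketch of Eq. (1): z'_t = conv_1x1(concat(z_t, z_c)).
# A 1x1 convolution is a per-pixel linear map over channels, so nested
# lists suffice for illustration; shapes are toy-sized, not real latents.

def concat_channels(z_t, z_c):
    # stack the noisy latent and the grayscale latent along the channel axis
    return z_t + z_c

def conv1x1(x, weight):
    """x: [c_in][h][w] feature map; weight: [c_out][c_in] mixing matrix."""
    c_in, h, w = len(x), len(x[0]), len(x[0][0])
    c_out = len(weight)
    return [[[sum(weight[o][i] * x[i][r][c] for i in range(c_in))
              for c in range(w)]
             for r in range(h)]
            for o in range(c_out)]

# toy latents: one channel each, 2x2 spatial
z_t = [[[1.0, 2.0], [3.0, 4.0]]]   # noisy latent from the previous step
z_c = [[[0.5, 0.5], [0.5, 0.5]]]   # noise-free grayscale latent
w_mix = [[0.5, 0.5]]               # average the two channels into one
z_prime = conv1x1(concat_channels(z_t, z_c), w_mix)
```

In the real model the merge weights are learned and the result feeds the first U-Net layer; the point here is only that the condition enters every denoising step at the input.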
z\u2032_t = conv_{1\u00d71}(concat(z_t, z_c)) (1) In this way, we take advantage of the powerful generative capabilities of stable diffusion while preserving the grayscale condition. The loss function for our denoising U-Net is defined in a similar way to stable diffusion [18]: L = E_{z, z_c, c, \u03f5\u223cN(0,1), t}[||\u03f5 \u2212 \u03f5_\u03b8(z_t, t, z_c, c)||_2^2] (2) where z is the encoded color image, z_c is the encoded grayscale image, c is the category embedding, \u03f5 is a noise term, t is the time step, \u03f5_\u03b8 is the denoising U-Net, and z_t is the noisy version of z at time step t. 3.3. High-level Semantic Guidance To alleviate semantic confusion and generate vivid colors, we design a high-level semantic guidance module for inference. As shown in Figure 2, the multimodal semantics are generated by the pre-trained semantic generator in the blue box. Afterwards, text and segmentation priors are injected into the inference process through cross attention and segmentation guidance respectively, as shown in the yellow box in Figure 2. Specifically, given the grayscale image x_gray, the semantic generator produces the corresponding categories [19], captions [20] and segmentations [21]. The category, caption, and segmentation labels are in textual form, while the segmentation masks are binary masks. For textual priors, the CLIP [22] encoder is employed to generate the text embedding c_t. The text embedding guidance is applied in the denoising U-Net via a cross-attention mechanism. Given the time step t, the concatenated noisy input z_t and the text condition c_t, the latent code z_{t-1} is produced by the Colorization Diffusion Model (CDM): z_{t-1} = CDM(z_t, t, z_c, c_t) (3) For segmentation priors, we use the pre-trained Transfiner [21] to generate paired segmentation masks M and labels L. For each instance, we first resize the binary mask M_i \u2208 R^{H\u00d7W\u00d71} to align with the latent space. The resized mask is represented as M\u0304_i \u2208 R^{h\u00d7w\u00d71}. 
Then we use the CDM to yield the corresponding latent code z^i_{t-1} of the masked region: z^i_{t-1} = CDM(z_t, t, z_c \u00d7 M\u0304_i, L_i) (4) Finally, we combine the original latent code z_{t-1} and the instances to yield the segment-aware latent code \u1e91_{t-1}: \u1e91_{t-1} = \u03a3_{i=1}^{k} [z_{t-1} \u00d7 (1 \u2212 M\u0304_i) + z^i_{t-1} \u00d7 M\u0304_i] (5) We set a coefficient i \u2208 [0, 1] to control the strength of segmentation guidance. The threshold is defined as T_th = T \u00d7 (1 \u2212 i). The segmentation mask is used to guide the synthesis process at inference time steps t > T_th. We set i = 0.3 for the experiment. Users have the flexibility to select a different value based on their preferences. 3.4. Luminance-aware Decoder As the downsampling to the latent space inevitably loses detailed structures and textures, we apply the luminance condition to the reconstruction process and propose a luminance-aware decoder. [Fig. 3. Qualitative comparisons among InstColor [13], ChromaGAN [5], BigColor [1], ColTran [6], CT2 [2], ControlNet [3] and Ours. More results are provided at https://servuskk.github.io/ColorDiff-Image/.] To align the latent space with stable diffusion, we freeze the encoder. The intermediate grayscale features obtained in the encoder are added to the decoder through skip connections. Specifically, the intermediate features f^i_down generated by the first three downsample layers of the encoder are extracted. These features are convolved, weighted, and finally added to the corresponding upsample layers of the decoder: f\u0302^j_up = f^j_up + \u03b1_i \u00b7 conv(f^i_down), i = 0, 1, 2; j = 3, 2, 1 (6) We adopt an L2 loss L_2 and a perceptual loss [23] L_p to train the luminance-aware decoder: L = L_2 + \u03bb_p L_p (7) 4. EXPERIMENT 4.1. Implementation We train the denoising U-Net and the luminance-aware decoder separately. 
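The segmentation guidance of Eq. (5) amounts to blending per-instance latents into the global latent inside their resized masks, applied only at early steps t > T_th. A minimal sketch, assuming non-overlapping masks so that sequential blending matches the summed form of Eq. (5); the names and 1-D shapes are illustrative:

```python
# Sketch of Eq. (5): blend per-instance latents z^i_{t-1} into the global
# latent z_{t-1} inside their resized binary masks. Toy 1-D "latents"
# stand in for the real h x w x c tensors.

def blend_instances(z_global, instance_latents, masks):
    """Apply z * (1 - M) + z_i * M for each instance mask in turn."""
    out = list(z_global)
    for z_i, m in zip(instance_latents, masks):
        out = [z * (1 - mm) + zi * mm for z, zi, mm in zip(out, z_i, m)]
    return out

def use_segmentation(t, T, strength):
    """Threshold rule from the text: guide only while t > T_th = T * (1 - strength)."""
    return t > T * (1 - strength)

z_prev = [1.0, 1.0, 1.0, 1.0]      # global latent
z_inst = [[5.0, 5.0, 5.0, 5.0]]    # one instance latent
masks = [[0, 0, 1, 1]]             # instance occupies the right half
blended = blend_instances(z_prev, z_inst, masks)  # [1.0, 1.0, 5.0, 5.0]
```

With the paper's setting of 0.3 and T = 50 steps, guidance would apply while t > 35, i.e. only during the early, structure-defining part of sampling.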
Firstly, we train the denoising U-Net on the ImageNet [24] training set at a resolution of 512 \u00d7 512. We initialize the U-Net using the pre-trained weights of [18]. The learning rate is fixed at 5e-5. We use the classifier-free guidance [25] strategy and set the conditioning dropout probability to 0.05. The model is updated for 20K iterations with a batch size of 16. Then we train the luminance-aware decoder on the same dataset and at the same resolution. The VAE is initialized using the pre-trained weights of [18]. We fix the learning rate at 1e-4 for 22,500 steps with a batch size of 1. We set the parameter \u03bb_p in Eq. (7) to 0.1. Our tests are conducted on the COCO-Stuff [26] val set containing 5,000 images of complex scenes. At inference, we adopt the DDIM sampler [27] and set the inference time steps T = 50. We conduct all experiments on a single Nvidia GeForce RTX 3090 GPU. 4.2. Comparisons We compare with 6 state-of-the-art automatic colorization methods of 3 types: 1) GAN-based methods: InstColor [13], ChromaGAN [5], BigColor [1]; 2) transformer-based methods: ColTran [6], CT2 [2]; 3) diffusion-based method: ControlNet [3]. Qualitative Comparison. We show visual comparison results in Figure 3. The images in the first and second rows indicate the ability of the models to synthesise vivid colors. Both the GAN-based and transformer-based algorithms suffer from unsaturated colors. Although ControlNet synthesises saturated colors, the marked areas contain significant artifacts. Images in the third and fourth rows demonstrate the ability of the models to synthesise semantically reasonable colors. InstColor, ChromaGAN, BigColor, CT2 and ControlNet fail to maintain the color continuity of the same object (discontinuity of colors between the head and tail of the train, and between the hands and shoulders of the girl), while ColTran yields colors that defy common sense (blue shadows and blue hands). In summary, our method provides vivid and semantically reasonable colorization results. 
Table 1. Quantitative comparison results.
Method | FID\u2193 | Colorful\u2191 | PSNR\u2191
InstColor [13] | 14.40 | 27.00 | 23.85
ChromaGAN [5] | 27.46 | 27.06 | 23.20
BigColor [1] | 10.24 | 39.65 | 20.86
ColTran [6] | 15.06 | 34.31 | 22.02
CT2 [2] | 25.87 | 39.64 | 22.80
ControlNet [3] | 10.86 | 45.09 | 19.95
Ours | 9.799 | 41.54 | 21.02
[Fig. 4. User evaluations.] User Study. To reflect human preferences, we randomly select 15 images from the COCO-Stuff val set for a user study. For each image, the 7 results and the ground truth are displayed to the user in a random order. We asked 18 participants to choose their top three favorites. Figure 4 shows the proportion of Top 1 votes selected by users. Our method has a vote rate of 22.59%, which significantly outperforms the other methods. Quantitative Comparison. We use Fr\u00e9chet Inception Distance (FID) and colorfulness [28] to evaluate image quality and vividness. These two metrics have recently been used to evaluate colorization algorithms [1, 29]. Considering that colorization is an ill-posed problem, the ground-truth-dependent metric PSNR used in previous works does not accurately reflect the quality of image and color generation [6, 29, 30], and the comparison here is for reference. As shown in Table 1, our proposed method demonstrates superior performance in terms of FID when compared to the state-of-the-art algorithms. Even though ControlNet outperforms our algorithm on the colorful metric, the results shown in the qualitative comparison indicate that its artefacts are meaningless and negatively affect the visual quality of the image. 4.3. Ablation Studies The significance of the main components of the proposed method is discussed in this section. The quantitative and visual comparisons are presented in Table 2 and Figure 5. High-level Semantic Guidance. We discuss the impact of high-level semantic guidance on model performance. 
Table 2. Quantitative comparison of ablation studies.
Exp. | Luminance-aware decoder | High-level guidance | FID\u2193 | Colorful\u2191
(a) | \u2713 | | 10.05 | 33.73
(b) | | \u2713 | 9.917 | 42.55
Ours | \u2713 | \u2713 | 9.799 | 41.54
[Fig. 5. Visual comparison from ablation studies: (a) high-level guidance (w/o semantic vs. ours); (b) luminance-aware decoder (w/o luminance vs. ours).] The visuals shown in Figure 5(a) demonstrate that our high-level guidance improves the saturation of synthesised colors and mitigates failures caused by semantic confusion. The quantitative scores in Table 2 confirm the significant improvement in both color vividness and perceptual quality introduced by the high-level semantic guidance. Luminance-aware Decoder. The pipeline equipped with a luminance-aware decoder facilitates the generation of cognitively plausible colors. As shown in the first row of Figure 5(b), the artifacts are suppressed. Furthermore, the incorporation of this decoder yields a positive impact on the retrieval of image details, as demonstrated by the successful reconstruction of textual elements in the second row of Figure 5(b). Consequently, the full model outperforms the alternative in terms of FID. A slight decrease in the colorfulness score after incorporating luminance awareness is observed, which can be attributed to the suppression of outliers, as discussed in the analysis of ControlNet in Section 4.2. 5. CONCLUSION In this study, we introduce a novel automatic colorization pipeline that harmoniously combines color diversity with fidelity. It generates plausible and saturated colors by leveraging powerful diffusion priors with the proposed luminance and high-level semantic guidance. Besides, we design a luminance-aware decoder to restore image details and improve color plausibility. Experiments demonstrate that the proposed pipeline outperforms previous methods in terms of perceptual realism and attains the highest human preference compared to other algorithms. 6. 
ACKNOWLEDGEMENT This work was supported by the National Key R&D Project of China (2019YFB1802701), the MoE-China Mobile Research Fund Project (MCM20180702), and the Fundamental Research Funds for the Central Universities; in part by the 111 Project under Grant B07022 and Sheitc No. 150633; and in part by the Shanghai Key Laboratory of Digital Media Processing and Transmissions.",
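The "Colorful" scores reported in Table 1 and Table 2 presumably follow the Hasler-Süsstrunk colorfulness measure cited as [28]; the following is a minimal sketch of our reading of that metric, not the authors' evaluation code:

```python
# Hasler-Susstrunk colorfulness (our reading of metric [28]):
# C = sqrt(var_rg + var_yb) + 0.3 * sqrt(mean_rg^2 + mean_yb^2),
# with opponent channels rg = R - G and yb = 0.5 * (R + G) - B per pixel.
import math

def colorfulness(pixels):
    """pixels: list of (R, G, B) tuples, components in [0, 255]."""
    rg = [r - g for r, g, b in pixels]
    yb = [0.5 * (r + g) - b for r, g, b in pixels]

    def mean(v):
        return sum(v) / len(v)

    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    return math.sqrt(var(rg) + var(yb)) + 0.3 * math.sqrt(mean(rg) ** 2 + mean(yb) ** 2)

print(colorfulness([(128, 128, 128)] * 4))  # a gray image scores 0.0
```

The metric rewards both spread and intensity of chroma, which is why over-saturated artifacts (as in ControlNet's outputs) can inflate it without improving perceptual quality.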
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.09117v1",
"title": "Identifying Causal Effects under Kink Setting: Theory and Evidence",
"abstract": "This paper develops a generalized framework for identifying causal impacts in\na reduced-form manner under kinked settings when agents can manipulate their\nchoices around the threshold. The causal estimation using a bunching framework\nwas initially developed by Diamond and Persson (2017) under notched settings.\nMany empirical applications of bunching designs involve kinked settings. We\npropose a model-free causal estimator in kinked settings with sharp bunching\nand then extend to the scenarios with diffuse bunching, misreporting,\noptimization frictions, and heterogeneity. The estimation method is mostly\nnon-parametric and accounts for the interior response under kinked settings.\nApplying the proposed approach, we estimate how medical subsidies affect\noutpatient behaviors in China.",
"authors": "Yi Lu, Jianguo Wang, Huihua Xie",
"published": "2024-04-14",
"updated": "2024-04-14",
"primary_cat": "econ.EM",
"cats": [
"econ.EM"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "In many empirical setups, agents received treatment based on whether their value of a variable (also referred to as the \u201cassignment variable\u201d or \u201crunning variable\u201d in the literature) is above or below a known cutoff. For example, students with test scores above the cutoff are admitted to better schools/colleges (e.g.,Zimmerman [2019], Pop-Eleches and Urquiola [2013]); workers with annual income above the threshold are subject to higher tax rates (Saez [2010]). Such thresholds feature discontinuity in the level of treatment probabilities/choice sets (referred to as \u201cnotches\u201d with level change only hereafter), or discontinuity in the slope of treatment probabilities/choice sets (referred to as \u201ckinks\u201d hereafter), or discontinuity in both the level and the slope of treat- ment probabilities/choice sets (referred to as \u201cnotches\u201d with both level and slope changes here- after). These non-linear designs facilitate treatment effects identification and policy impact evalu- ation. The literature distinguishes between two conceptually different non-linear designs, based on whether agents (e.g., students, workers, patients, firms) can fully manipulate their measures around the cutoff. Specifically, when agents cannot fully manipulate the assignment variable around the threshold, regression discontinuity design (RDD), regression kink design (RKD), and regression probability jump and kink design (RPJKD) are adopted depending on whether agents face the notched or kinked policies. However, when agents can fully manipulate their measure and decide whether to locate above or below the threshold, assumptions in RDD, RKD, and RPJKD are no longer valid. In such scenarios, bunching methods are used to study agents\u2019 behavior (see literature review by Kleven [2016]). 
Early studies in the bunching literature focus on identifying the key elasticity (e.g., the elasticity of taxable income with respect to the net-of-tax rate), which involves estimating the counterfactual density distribution of the assignment variable (when agents cannot manipulate their measure). Saez [2010] and Chetty et al. [2011] developed the bunching method in kinked settings, while Kleven and Waseem [2013] developed the bunching method in notched settings. The method has been deployed in various settings, such as R&D (Chen et al. [2021]) and housing markets (Best and Kleven [2018], Best et al. [2020], Cloyne et al. [2019]). However, fewer studies focus on the impacts of agents\u2019 manipulation behaviors due to the kinked policy on other outcome variables. Diamond and Persson [2017] proposed the causal estimator in notch settings. By assuming that manipulation only happens within a certain region around the cutoff, they recover the counterfactual density and outcome distributions within the manipulation region by extrapolating the corresponding distributions outside the manipulation region into the manipulation region. The difference between the average observed and counterfactual values within the manipulation region reveals the treatment effect from the agents\u2019 responses. Diamond and Persson (2017) complement the RDD method when agents can fully manipulate the assignment variable. A critical assumption of Diamond and Persson (2017) is that manipulation only happens within a certain region, which does not hold when there is a discontinuity in the slope of treatment probabilities/choice sets. 
Facing slope changes (i.e., changes in marginal incentives), agents to one side of the cutoff would all adjust their assignment variable upwards/downwards, which we denote as \u201cinterior responses\u201d.[1] Therefore, there would not be a manipulation region, which makes Diamond and Persson [2017]\u2019s method invalid under kink settings as well as notch settings with both level and slope changes. In this paper, we develop a framework for estimating the treatment effects of agents\u2019 manipulation behavior due to the kinked policy on other outcome variables,[2] which complements the RKD method when agents can manipulate the assignment variables. Our method is model-free and is based on agents\u2019 interior response behavior, which is largely ignored in the previous bunching literature. Our approach is centered around agents\u2019 interior responses under kink settings. Faced with a discontinuity in the slope of choice sets/treatment probabilities (i.e., a change in marginal incentives), agents to one side of the threshold would respond to the kinked policy by manipulating their assignment variable value by a constant share (denoted as \u201cshifting agents\u201d), except those who bunch because they are too close to the threshold and face corner solutions (denoted as \u201cbunching agents\u201d). As the marginal bunching agent is also a shifting agent, the ratio of the marginal bunching agent\u2019s initial value to the threshold reveals the constant share for all shifting agents. 
Given this important feature of shifting agents with an interior response, we can recover the counterfactual density distribution non-parametrically, by relocating the agents back to their counterfactual value of the assignment variable using the amount of excess bunching as the moment condition.[3] More importantly, this feature also indicates an upward or downward shift in the outcome distribution due to the treatment effect of the kinked policy, which allows us to recover the counterfactual outcome distribution when agents cannot manipulate, and hence estimate the average treatment effects of the kinked policy on shifting agents and bunching agents. Our methodology could be used more generally to study the treatment effects of a kinked policy under bunching settings with various extensions, including diffuse bunching, rounding in agents\u2019 choices, potential misreporting, the existence of stayers (unresponsive agents) due to optimization frictions or inattention, and heterogeneous treatment effects. We apply our treatment effect estimator to study the kinked cost-sharing design under China\u2019s health insurance system. Specifically, the co-payment ratio for rural and urban non-employed patients increased from 50% to 100% when their accumulated annual medical expenses exceeded the policy statutory threshold. [Footnote 1: Chetty et al. [2011] address the interior response issue for the counterfactual density estimation by imposing the integration constraint (i.e., assuming that the number of observations under the observed and counterfactual distributions is the same) and by assuming that the observed density in the interior response part is a parallel shift of the counterfactual one.] [Footnote 2: The method could also be applied to notched settings with both level and slope changes upon small modification, as the fundamental issue addressed here is the interior response.] [Footnote 3: It involves a computation algorithm, which will be discussed in detail in Section 3.1.] 
This generates a discontinuity in the marginal cost of treatment borne by patients when their annual eligible expenditure is above the given threshold. We found that patients adjust their annual medical expenses and bunch at the threshold. Compared to the counterfactual scenario where the co-payment rate is 50%, patients visit the hospital less often when the co-payment rate increases to 100%. This indicates a significant amount of compressed medical demand due to patients\u2019 financial concerns. Our paper is related to three threads of literature. First, we contribute to the literature on treatment effect estimation and policy evaluation using quasi-experimental approaches. The fact that agents can fully determine whether their value of the assignment variable is above or below the threshold indicates that the identifying assumption for RDD or RKD fails. Under the notched design with bunching mass around the cutoff, Carneiro et al. [2015] proposed the \u201cdonut\u201d regression design method (\u201cdonut\u201d RD), which excludes a certain manipulative region around the threshold to solve this issue. The estimation precision of the \u201cdonut\u201d RD estimator depends on how large the excluded region is. Alternatively, Diamond and Persson [2017] propose a Wald estimator that captures the causal impact of manipulation on the subset of agents that are chosen for manipulation. Their method shares certain similarities with \u201cdonut\u201d RD in the sense that both assume manipulation happens within a certain range around the threshold.[4] Our paper contributes to this literature by providing a framework to estimate the average treatment effects of a kinked policy with manipulative agents, where the interior response agents lead to a non-negligible shift of the density distribution to one side of the threshold, immediately invalidating the assumption in the previous literature that manipulation happens within a certain region around the cutoff. 
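The interior-response mechanism described above (shifting agents scale z by a constant share, while agents whose counterfactual value lies just above the cutoff face corner solutions and bunch at the threshold) can be illustrated with a stylized rule. The functional form below is a toy assumption for exposition, not the paper's estimator:

```python
# Stylized interior response under a kink: above the cutoff, shifting agents
# scale their assignment variable down by a constant share s; agents whose
# counterfactual value falls in (z_star, z_star / (1 - s)] hit the corner
# and bunch at z_star. Toy illustration of the relocation argument only.

def observed_choice(z0, z_star, s):
    if z0 <= z_star:
        return z0                   # unaffected below the kink
    if z0 <= z_star / (1 - s):
        return z_star               # bunching agents (corner solution)
    return z0 * (1 - s)             # shifting agents (interior response)

def relocate(z_obs, z_star, s):
    """Recover a shifting agent's counterfactual value from the observed one."""
    return z_obs / (1 - s) if z_obs > z_star else z_obs

z_star, s = 100.0, 0.5
z_obs = observed_choice(300.0, z_star, s)   # a shifting agent is observed at 150.0
assert relocate(z_obs, z_star, s) == 300.0  # relocation recovers the counterfactual
```

Note that the displacement s * z0 grows with the initial value z0, so the induced shift of the density is non-parallel, which is the feature the paper exploits against the parallel-shift assumption.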
Second, we contribute to the estimation of the counterfactual density distribution in bunching estimation. [Footnote 4: Under a notched design with a discrete change in both the level and the marginal incentives at the threshold, the assumption that manipulation happens within a certain region around the threshold is invalid.] The critical step in bunching estimation relies on estimating the counterfactual density distribution in the counterfactual situation absent kinks or notches. The standard approach to obtaining such a counterfactual was developed by Chetty et al. [2011] in the context of kinks and extended by Kleven and Waseem [2013] to notches. The standard approach is to fit a flexible polynomial to the observed distribution for the region slightly away from the threshold and then extrapolate the fitted distribution to the threshold, under the assumption that the counterfactual distribution is smooth around the threshold. However, agents to one side of the cutoff would adjust their locations in response to the changed marginal incentive, leading to an interior shift of the density distribution. This fact results in a biased estimation of the counterfactual distribution if we do not account for the shift in the observed distribution brought by these interior response agents (also known as shifting agents). The magnitude of the interior response depends on the slope of the density distribution and the size of the change in incentives. In general, such interior response effects are larger for kinks than for notches because kinks usually feature a larger change in marginal incentives.[5] In kinked designs, Chetty et al. [2011] address this interior response issue by assuming that the counterfactual density to one side of the threshold is a constant upwards/downwards movement of the observed density distribution. 
However, in most setups with changes in marginal incentives, interior response agents would adjust their locations by a constant percentage and hence by different magnitudes depending on their initial values. Therefore, interior responses would lead to a non-parallel shift of the density distribution along the x-axis, which indicates that the commonly adopted method proposed by Chetty et al. [2011] could lead to certain biases. [Footnote 5: In notched designs, researchers often ignore the interior response and assume manipulation only happens within a certain region around the threshold. This would lead to potentially biased estimates of the counterfactual density and elasticity as well.] In this paper, we propose an algorithm for counterfactual density estimation that features the exact interior responses. In addition, our proposed estimation method also works under notched settings with different marginal incentives. Third, we contribute to the literature which explores various extensions of the bunching methodology. Some extensions focus on causal identification when there is no discontinuity in the policy but agents\u2019 choices are truncated at 0. For example, the number of hours children spend watching TV has to be above or equal to 0. Caetano [2015] exploits such setups for identifying potential selection in reduced-form estimation. Caetano et al. [2023] propose causal estimators for identifying treatment effects at 0. Other extensions include a two-dimensional bunching approach by Cox et al. [2021], and the non-identification of elasticity under a single budget set by Blomquist et al. [2021]. We distinguish our paper from these papers in the sense that we focus on different research topics and setups. Specifically, we focus on identifying causal effects under kink settings where agents can manipulate and the cutoff is not at the truncated point. The rest of the paper is arranged as follows. 
Section II discusses a generalized framework under kinked settings, which covers the basic setup, the interior response of shifting agents, and causal effects under sharp bunching and under bunching with diffusion. Section III discusses the estimation strategy for the counterfactual density distribution and the counterfactual outcome distribution. Section IV extends the treatment-effect estimation to various scenarios, such as relabelling, rounding, and stayers due to optimization frictions and heterogeneity in the structural parameter. Section V applies the proposed causal estimator to the kinked coinsurance policy in China, where the medical care system, data, bunching pattern, density and outcome distributions, and the treatment effects are presented. Section VI concludes.",
"main_content": "We elaborate a theoretical framework for causal inference under the kink setting in this section and defer the empirical execution to the next section. Specifically, we lay out the basic setup and derive the optimal interior responses for the complying agents. Then we study causal inference under the sharp bunching scenario. Finally, we consider the diffuse case for the causal impact under the kink setting. 2.1 Setup Consider a focal kinked policy in which agents face a tax rate (or co-payment rate) of t if their value of z is below a statutory cutoff z* but face a higher marginal tax rate (or co-payment rate) of t + Δt if z > z*. Denote the amount of money that agents pay under the kinked policy as T(z). That is, T(z) = t × z if z ≤ z*, and T(z) = (t + Δt) × z − Δt × z* if z > z*. (1) Denote the optimal response function of z from agents maximizing their objective functions as z = z(D,n), where D = 1 indicates that agents face the lower marginal tax/co-payment rate t and D = 0 indicates that agents face the higher marginal rate t + Δt; n is unobserved agent heterogeneity, with z(D,n) increasing in n. In general, z(1,n) > z(0,n), i.e., z is higher when the marginal tax (co-payment) rate on z is lower. Agents' optimal choice of z under the kinked policy is given by: z = z(1,n) if n ≤ nL; z = z* if n ∈ (nL, nH]; and z = z(0,n) if n > nH. (2) Consider a counterfactual linear policy under which agents always face the low tax rate (co-payment rate) t [footnote 6]. That is, T^ct(z^ct) = t × z^ct.
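As a concrete check of the schedule in Eq. (1), here is a minimal sketch; the parameter values t = 0.2, Δt = 0.1, z* = 100 are hypothetical:

```python
def kinked_payment(z, t=0.2, dt=0.1, z_star=100.0):
    """Total tax/co-payment T(z) under the kinked schedule of Eq. (1).

    Below the threshold agents pay t*z; above it the marginal rate rises
    to t+dt, and the -dt*z_star term keeps T continuous at z_star.
    """
    if z <= z_star:
        return t * z
    return (t + dt) * z - dt * z_star

# T is continuous at the kink, but its slope jumps from t to t+dt:
assert abs(kinked_payment(100.0) - 20.0) < 1e-12
assert abs(kinked_payment(101.0) - (0.3 * 101.0 - 0.1 * 100.0)) < 1e-12
```

The continuity check makes the role of the −Δt·z* term explicit: only the marginal incentive changes at z*, not the level of the payment.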
Consequently, agents' optimal choices are z^ct = z(1,n). For agents with n ≤ nL, the optimal z under the kinked policy remains the same as z^ct under the counterfactual policy, as they face the same tax rate (co-payment rate) t. We denote these agents “always-takers”. Next, agents with n > nH reduce their z in response to the higher marginal tax rate (co-payment rate) t + Δt under the kinked policy (z = z(0,n) < z(1,n) = z^ct), but stay in the interior of the upper bracket, compared to the counterfactual policy. We denote them “shifters”, or agents with an “interior response”. Finally, for agents with n ∈ (nL, nH], the optimal choice under the kinked policy is to reduce their z and bunch at the threshold z*. We denote them “bunchers”, as their behavior produces excess bunching in the density distribution at the kink point z* when the kinked policy is introduced. Remark 1 The bunching literature has specified the agent's objective function, whose optimization leads to a specific functional form of z(D,n). For example, Saez (2010) and subsequent studies (e.g., Chetty et al. 2011; Einav, Finkelstein, and Schrimpf 2017) typically assume a static quasi-linear, iso-elastic preference over consumption and labor supply (or medical spending) to obtain agents' response elasticity to tax (or coinsurance) kinks. Specifically, Saez (2010) considers the quasi-linear utility function u(c,z) = z − T(z) − (n/(1 + 1/e)) × (z/n)^(1+1/e), where T(z) is the tax system; n denotes individual heterogeneity in abilities; and e is the labor supply elasticity. [Footnote 6: Alternatively, one can consider a counterfactual policy where agents always face the high tax rate (co-payment rate) t + Δt. The main conclusions are the same, as shown in Appendix B.]
The counterfactual scenario is characterized by a linear tax system with T(z) = t × z, whereas the focal kinked tax policy introduces an increase in the marginal tax rate from t to t + Δt at the earnings threshold z*. The optimal labor supply choice can be derived as: z = n × (1−t)^e if n ≤ z*/(1−t)^e ≡ nL; z = z* if n ∈ (nL, nH]; and z = n × (1−t−Δt)^e if n > z*/(1−t−Δt)^e ≡ nH. This optimal choice equation under the kinked policy corresponds to Eq. (4) in Chetty et al. (2011) and Eq. (5) in Einav, Finkelstein, and Schrimpf (2017). To derive the key features of optimal responses, following the above literature, we make a weak assumption on the optimal response function of z: Assumption 1 (Separability). Assume that z(D,n) = f(D|e)g(n|e), where f(D|e) and g(n|e) are some functions and e is a structural parameter, such as the elasticity of labor supply with respect to the net-of-tax rate or a semi-elasticity that relates the probability of participation/consumption to the percentage change in financial incentives. Assumption 1 states the separability of the marginal tax (or co-payment) rate (D = 0/1) and agents' heterogeneity n in the optimal choice function. Remark 2 Almost all studies on bunching estimation make this assumption, such as the models with no uncertainty and quasi-linear, iso-elastic preferences in Saez (2010), Chetty et al. (2011), and Einav, Finkelstein, and Schrimpf (2017). For example, in Saez (2010), the optimal income choices z(1,n) = (1−t)^e × n and z(0,n) = (1−t−Δt)^e × n satisfy Assumption 1.
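The Saez-style optimal choice above can be sketched directly; the elasticity and policy parameters are illustrative, not the paper's estimates:

```python
def optimal_z(n, e=0.5, t=0.2, dt=0.1, z_star=100.0):
    """Optimal choice under the kinked policy in the Saez (2010)
    quasi-linear, iso-elastic model quoted in Remark 2.

    n_L and n_H are the heterogeneity levels of the marginal always-taker
    and the marginal buncher; agents with n in (n_L, n_H] bunch at z_star.
    """
    n_L = z_star / (1.0 - t) ** e
    n_H = z_star / (1.0 - t - dt) ** e
    if n <= n_L:
        return n * (1.0 - t) ** e          # always-takers
    if n <= n_H:
        return z_star                      # bunchers
    return n * (1.0 - t - dt) ** e         # shifters

# Low-n agents stay below z*, mid-n agents bunch, high-n agents shift.
assert optimal_z(100.0) < 100.0
assert optimal_z(115.0) == 100.0
assert optimal_z(130.0) > 100.0
```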
Given Assumption 1, Equation (2) can be rewritten as: z = z(1,n) if n ≤ nL; z = z* if n ∈ (nL, nH]; and z = z(1,n) f(0|e)/f(1|e) if n > nH. (3) That is, agents with n > nH who originally choose z(1,n) under the counterfactual linear policy respond to the kinked policy by setting z = z(0,n) = z(1,n) f(0|e)/f(1|e) > z*. Note that for the marginal buncher with heterogeneity nH, the optimal choice under the kinked policy is z = z(0,nH) = z*, and the location under the counterfactual linear policy is z^ct = z(1,nH) = z* + Δz*, where Δz* is the change in z by the marginal bunching agent with nH due to the introduction of the kinked policy. Hence, the excess bunching at the kink point is the cumulative density of bunchers (i.e., agents with n ∈ (nL, nH]), and can be derived as: B = ∫_{z*}^{z*+Δz*} h^ct(z) dz, (4) where h^ct(z) denotes the counterfactual density distribution of z (i.e., the one under the linear low tax/co-payment-rate plan). [Footnote 7: The observed density distribution of z is denoted by h(z).] Therefore, for all agents with n > nH, we have z^ct/z = z(1,n)/z(0,n) = f(1|e)/f(0|e) = (z* + Δz*)/z*. (5) Equation (5) characterizes the relationship between the original location (under the counterfactual linear policy) and the new location (under the kinked policy) for each shifting agent. Remark 3 Studies in the bunching literature largely use Equation (5) to back out the structural parameter from the estimated value of Δz*. For example, in Saez (2010), Equation (5) reads ((1−t)/(1−t−Δt))^e = (z* + Δz*)/z*.
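Backing out e from an estimated Δz*, as in Remark 3, is a one-line inversion of Eq. (5) in the Saez parametrization; the parameter values below are hypothetical:

```python
import math

def elasticity_from_dz(dz_star, z_star=100.0, t=0.2, dt=0.1):
    """Invert Eq. (5) in the Saez (2010) parametrization:
    ((1 - t)/(1 - t - dt))**e = (z_star + dz_star)/z_star."""
    return (math.log((z_star + dz_star) / z_star)
            / math.log((1.0 - t) / (1.0 - t - dt)))

# Round trip: a known elasticity implies a marginal buncher's response
# dz*, from which the elasticity is recovered exactly.
e_true = 0.5
dz = 100.0 * (((1.0 - 0.2) / (1.0 - 0.3)) ** e_true - 1.0)
assert abs(elasticity_from_dz(dz) - e_true) < 1e-12
```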
Combining Equations (3) and (5), we can summarize the change in z as: z/z^ct = z(1,n)/z(1,n) = 1 if n ≤ nL; z/z^ct = z*/z(1,n) if n ∈ (nL, nH]; and z/z^ct = z(0,n)/z(1,n) = f(0|e)/f(1|e) = z*/(z* + Δz*) if n > nH. (6) Hence, moving from the counterfactual linear scenario to the state with the kinked policy, all agents with n > nH (shifters) reduce their z by a constant share, i.e., z*/(z* + Δz*) − 1 < 0, but do not bunch at the cutoff z* [footnote 8]. Meanwhile, agents with n ∈ (nL, nH] (bunchers) reduce their z to bunch at the cutoff z*. By contrast, agents with n ≤ nL (always-takers) remain unchanged. Equation (6) enables us to estimate the counterfactual density distribution h^ct(·) and the marginal buncher's response Δz* nonparametrically (details are discussed in Section 3.1). 2.2 Causal Inference under Kinked Bunching The bunching literature has focused on estimating the key structural parameter e since the methodological developments of Saez (2010) and Kleven and Waseem (2013). What is equally (if not more) important is whether the bunching technique can be used for causal inference; that is, estimating the effects of the introduction of a kinked policy on other outcome variables. In this subsection, we develop a methodological framework for causal effects under the kinked design [footnote 9], and defer the empirical details of the estimation framework to Section 3. Denote y^ct(·) as the counterfactual outcome distribution under the linear policy (i.e., always under the low marginal tax/co-payment rate). The counterfactual outcome distribution could take any functional form, depending on the data pattern. We discuss the estimation of the counterfactual outcome distribution in Subsection 3.2.
Meanwhile, denote h^ct(·) as the corresponding counterfactual density distribution, the estimation of which is discussed in Subsection 3.1. [Footnote 8: Each shifter's adjustment (z^ct − z) is not constant; it depends on the initial location z^ct. Alternatively, we can take the logarithm of z so that each shifter's adjustment is a constant, i.e., ln z^ct − ln z = ln((z* + Δz*)/z*).] [Footnote 9: In a companion work, Diamond and Persson (2017) study causal estimation using the notch design.] Denote y(·) as the observed outcome distribution under the kinked policy and h(·) the observed density distribution. We analyze how the introduction of the kinked policy changes the outcome distributions and define treatment-effect estimators for bunchers and shifters, as always-takers do not respond to the kinked policy. We start with the analysis of shifters and then move on to bunchers. 2.2.1 Change in Outcome Distributions of Shifters As discussed in Subsection 2.1, agents with z^ct > z* + Δz* (i.e., n > nH) reduce their value of z when faced with the higher tax/co-payment rate under the kinked policy. That is, under the kinked policy, shifters set z = z^ct × z*/(z* + Δz*). The change in z generates three changes to the outcome distribution. First, the relocation effect. Even if the change in z has no impact on y, such “relocation” behavior (from z^ct to z) changes the outcome distribution. Therefore, if we directly compare y^ct with y along the y-axis, we are not comparing the same agent. However, we do know where each agent has moved to. Therefore, if we relocate each agent under the kinked policy back to his/her counterfactual location, then comparing the values along the y-axis gives us the treatment effect on “shifters”.
Treatment Effect on “Shifters” τ_y^{TE,shifter} = E[y_n − y_n^ct | n ∈ shifters] = ∫_{z*+Δz*}^{z_max} ( y^r(z^ct) − y^ct(z^ct) ) h^ct(z^ct) / ( ∫_{z*+Δz*}^{z_max} h^ct(z^ct) dz^ct ) dz^ct, (7) where y^r(z^ct) ≡ y(z^ct × z*/(z* + Δz*)) denotes the auxiliary outcome distribution obtained when we relocate shifters at z back to their counterfactual location z^ct using the relation z = z^ct × z*/(z* + Δz*). That is, when we reshape the observed outcome distribution based on the changes in agents' locations of z, the outcome distribution changes from y(z) to y(z^ct × z*/(z* + Δz*)) ≡ y^r(z^ct). Remark 4 Estimation of the treatment effect on shifters only requires y^ct(z^ct) and h^ct(z^ct). Therefore, the estimation is model-free. However, one might want to understand what drives the change in outcome as a result of the kinked policy. The second and third points below cover this. Second, a reduction in the value from z^ct to z could directly affect the outcome y. For example, a reduction in taxable income could affect consumption, or a reduction in medical expenses could affect health. Define the semi-elasticity μ_n ≡ Δy_n/(Δz_n/z_n), where n denotes agent heterogeneity as defined in Subsection 2.1. Recall that for shifters we have z^ct/z = (z* + Δz*)/z*. Therefore, (y_n − y_n^ct)|_{due to the direct change in z} = μ_n (z*/(z* + Δz*) − 1). That is, the change in the value of z directly leads to a level change in y. Third, changes in the taxes or fees (T) that shifters pay could also affect the outcome y. Recall that under the counterfactual policy we have T^ct(z^ct) = t × z^ct, and under the kinked policy T(z) = (t + Δt) × z − Δt × z* = (t + Δt) × z*/(z* + Δz*) × z^ct − Δt × z*. Define −λ_n ≡ Δy_n/ΔT_n.
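A numerical sketch of the relocation estimator in Eq. (7); the density and outcome profiles below are made up for illustration (they are not from the paper's data), and only the relocation logic is the point:

```python
import math

# Hypothetical objects standing in for h^ct, y^ct, and the observed y.
z_star, dz_star, z_max = 100.0, 10.0, 200.0

def h_ct(z):            # counterfactual density (unnormalized weight)
    return math.exp(-z / 80.0)

def y_ct(z):            # counterfactual outcome profile y^ct(z^ct)
    return 5.0 + 0.05 * z

def y_obs(z):           # observed outcome y(z) under the kinked policy
    return 4.0 + 0.04 * z

def shifter_treatment_effect(n_grid=20_000):
    """Midpoint-rule version of Eq. (7): each shifter's observed location
    is z = z_ct * z*/(z* + dz*); evaluating y_obs there and averaging
    y^r(z_ct) - y^ct(z_ct) under h^ct over (z* + dz*, z_max]."""
    a, b = z_star + dz_star, z_max
    step = (b - a) / n_grid
    num = den = 0.0
    for i in range(n_grid):
        zc = a + (i + 0.5) * step
        w = h_ct(zc)
        y_r = y_obs(zc * z_star / (z_star + dz_star))   # y^r(z_ct)
        num += (y_r - y_ct(zc)) * w
        den += w
    return num / den
```

With these made-up profiles the effect is negative, since the observed outcome schedule lies below the counterfactual one everywhere on the shifters' range.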
Hence, we have (y_n − y_n^ct)|_{due to the change in T} = −λ_n ( (t + Δt) × z*/(z* + Δz*) × z_n^ct − Δt × z* − t × z_n^ct ) = −λ_n z_n^ct ( (t + Δt) × z*/(z* + Δz*) − t ) + λ_n Δt × z*. That is, the change in taxes or fees (T) leads to both a level change and a slope change in y. Assumption 2 (Additive). Assume the impacts of z and T on the outcome y are additive. [Footnote 10: Assumption 2 is the same as in the regression-kink-design (RKD) literature, where y is the sum of three components: (i) predetermined heterogeneity; (ii) effects of the running variable z; and (iii) effects of fees (or benefits) T. Hence, the change in y depends on the changes in z and T.] Given Assumption 2 and combining the second and third points, we have [footnote 11]: τ_y^{TE,shifter} = E[y_n − y_n^ct | n ∈ shifters] = E[ μ_n (z*/(z* + Δz*) − 1) − λ_n z_n^ct ( (t + Δt) × z*/(z* + Δz*) − t ) + λ_n Δt × z* ] = E(μ_n)(z*/(z* + Δz*) − 1) − E(λ_n z_n^ct)( (t + Δt) × z*/(z* + Δz*) − t ) + E(λ_n) Δt × z*. Assume homogeneous preferences and thus single response elasticities across agents (i.e., μ_n = μ, λ_n = λ), a condition commonly imposed in the bunching literature (see, e.g., Saez 2010; Chetty et al. 2011; Kleven 2016). The above equation then simplifies to: τ_y^{TE,shifter} = μ (z*/(z* + Δz*) − 1) − λ E(z_n^ct) ( (t + Δt) × z*/(z* + Δz*) − t ) + λ Δt × z*. (8) [Footnote 11: In Appendix C, we provide a formal ground-up proof of the formation of the counterfactual outcome distribution and of the change in the outcome distribution due to agents' response to the kinked policy.]
[Footnote 11, continued: The proof incorporates the following features: individual heterogeneity in the initial value of y and the impacts of z and T on y. The conclusion remains the same as shown in the main text.] Identifying Sufficient Statistics. We claim that μ and λ are sufficient statistics for estimating treatment effects under policy simulations, because changes in policy cutoffs or tax/co-payment rates result in changes in z and hence in the outcome variables. We propose estimating these parameters by exploiting the level and slope changes at z* when comparing the distribution y^ct(z^ct) with the extrapolated y^r(z^ct). Specifically, we have: Level change at z* = μ (z*/(z* + Δz*) − 1) − λ (t + Δt) z* (z*/(z* + Δz*) − 1), (9) Slope change at z* = −λ ( (t + Δt) × z*/(z* + Δz*) − t ), (10) where the estimation of the level change and slope change at z* is explained in Subsection 3.2. Remark 5 Note that this calibration process identifies the parameters μ and λ for shifters, because it is based on the slope and level changes at z* obtained by comparing the counterfactual outcome distribution with the extrapolated auxiliary distribution of shifters. Figure 1 illustrates the change in the outcome distribution of shifters when the kinked policy is introduced. (Figure 1: Change of outcome distribution for shifters.) 2.2.2 Change in Outcome Distributions of Bunchers As discussed in Subsection 2.1, agents with z^ct ∈ (z*, z* + Δz*], i.e., n ∈ (nL, nH], reduce their value of z and bunch at the cutoff (z = z*) under the kinked policy. The changes in z also generate changes in the outcome distribution. Under the sharp bunching scenario, agents with z^ct ∈ (z*, z* + Δz*] relocate to z = z*.
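Eqs. (9)-(10) form a triangular two-equation system in (μ, λ): the slope condition pins down λ alone, and the level condition then yields μ. A sketch of the inversion, with hypothetical parameter values:

```python
def solve_mu_lambda(level_change, slope_change,
                    z_star=100.0, dz_star=10.0, t=0.2, dt=0.1):
    """Invert Eqs. (9)-(10) for the sufficient statistics (mu, lambda).

    r = z*/(z* + dz*) is each shifter's proportional reduction in z.
    The slope equation involves only lambda; substituting it into the
    level equation recovers mu. Parameter values are illustrative.
    """
    r = z_star / (z_star + dz_star)
    lam = -slope_change / ((t + dt) * r - t)
    mu = (level_change + lam * (t + dt) * z_star * (r - 1.0)) / (r - 1.0)
    return mu, lam

# Round trip: generate the level/slope changes implied by (mu, lam) =
# (2, 0.5) via Eqs. (9)-(10), then recover the parameters.
r = 100.0 / 110.0
level = 2.0 * (r - 1.0) - 0.5 * 0.3 * 100.0 * (r - 1.0)
slope = -0.5 * (0.3 * r - 0.2)
mu, lam = solve_mu_lambda(level, slope)
assert abs(mu - 2.0) < 1e-9 and abs(lam - 0.5) < 1e-9
```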
As it is impossible to find a one-to-one mapping for each bunching agent, we treat all bunching agents as one entity and identify the average treatment effect on “bunchers” by comparing changes in the average outcome value. Treatment Effect on “Bunchers” under Sharp Bunching τ_y^{TE,buncher} = E[y_n − y_n^ct | n ∈ bunchers] = Ȳ^buncher − Ȳ^{buncher,ct} = y^buncher(z*) − ∫_{z*}^{z*+Δz*} y^ct(z^ct) h^ct(z^ct) / ( ∫_{z*}^{z*+Δz*} h^ct(z^ct) dz^ct ) dz^ct, (11) where y^buncher(z*) denotes the average outcome of bunchers under the kinked policy, the estimation of which is shown below. Specifically, under the kinked policy, observations at the threshold z* contain two groups of agents: (1) bunching agents with z^ct ∈ (z*, z* + Δz*] who decrease their value to the threshold z = z* in response to the kinked policy; and (2) always-takers with z^ct = z* who remain at the threshold z = z*. By contrast, under the counterfactual linear policy, only always-takers are at the threshold z*. Therefore, the density of bunchers under the kinked policy is h^bunch(z*) = h(z*) − h^ct(z*). Further, the observed average outcome y(z*) is the weighted average over bunchers and always-takers, i.e., y(z*) = ( y^buncher(z*) h^buncher(z*) + y^ct(z*) h^ct(z*) ) / h(z*). Therefore, we obtain the average outcome of bunchers under the kinked policy: y^buncher(z*) = ( y(z*) h(z*) − y^ct(z*) h^ct(z*) ) / ( h(z*) − h^ct(z*) ). (12) Figure 2 illustrates the change in the outcome distribution of bunchers when the kinked policy is introduced. (Figure 2: Change of outcome distribution for bunchers.) Treatment Effect on “Bunchers” under Diffuse Bunching In reality, agents may bunch around the threshold (i.e., within [z* − u1, z* + u2]) due to optimization frictions.
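The deduction in Eq. (12) is a weighted-average identity; a minimal sketch with made-up bin statistics (the numbers are purely illustrative):

```python
def buncher_outcome_at_kink(y_bin, h_bin, y_ct_bin, h_ct_bin):
    """Eq. (12): the observed kink-bin average mixes bunchers and
    always-takers; removing the always-takers' density-weighted
    contribution recovers the bunchers' average outcome."""
    return (y_bin * h_bin - y_ct_bin * h_ct_bin) / (h_bin - h_ct_bin)

# Hypothetical numbers: observed kink-bin mass 30 with mean outcome 8;
# always-takers (counterfactual mass at z*) contribute mass 10, mean 5.
y_b = buncher_outcome_at_kink(8.0, 30.0, 5.0, 10.0)
assert y_b == 9.5
# Consistency: re-mixing the two groups reproduces the observed bin mean.
assert abs((y_b * 20.0 + 5.0 * 10.0) / 30.0 - 8.0) < 1e-12
```

The same deduction logic carries over bin by bin to the diffuse case (Eqs. (14)-(15)), with shifters playing the role of the "other" group to the right of z*.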
Still, we treat all bunching agents as one entity and identify the average treatment effect on “bunchers” by comparing changes in the average outcome value. However, there is one difference. Under sharp bunching, we only need to estimate the average outcome of bunchers under the kinked state at z* (because all bunchers relocate to z*); by contrast, under diffuse bunching, we need to estimate the average outcome of bunchers under the kinked state over the whole diffuse region [z* − u1, z* + u2]. τ_y^{TE,buncher} = E[y_n − y_n^ct | n ∈ bunchers] = Ȳ^buncher − Ȳ^{buncher,ct} = ∫_{z*−u1}^{z*+u2} y^buncher(z) h^bunch(z) / ( ∫_{z*−u1}^{z*+u2} h^bunch(z) dz ) dz − ∫_{z*}^{z*+Δz*} y^ct(z^ct) h^ct(z^ct) / ( ∫_{z*}^{z*+Δz*} h^ct(z^ct) dz^ct ) dz^ct, (13) where y^buncher(z), with z ∈ [z* − u1, z* + u2], denotes the average outcome of bunchers in each bin under the kinked policy and h^buncher(z) denotes the corresponding density. The estimation of y^buncher(z) and h^buncher(z) is explained below. Specifically, consider first the left side of the diffuse region, [z* − u1, z*]. Under the kinked policy, this region contains two groups of agents: bunchers and always-takers. Under the counterfactual policy, this region contains only always-takers. Similar to Equation (12) in the sharp bunching case, for each z ∈ [z* − u1, z*], we can back out the outcome of bunchers under the kinked policy by deducting the always-takers' contribution from the average observed outcome. That is, y^buncher(z) = ( y(z) h(z) − y^ct(z) h^ct(z) ) / ( h(z) − h^ct(z) ), ∀z ∈ [z* − u1, z*]. (14) Also, we have h^buncher(z) = h(z) − h^ct(z), ∀z ∈ [z* − u1, z*]. Next, consider the right side of the diffuse region, (z*, z* + u2]. In the observed state, it also contains two groups of agents: bunchers and shifters.
Under the counterfactual linear policy, it contains only shifters. Therefore, for each z ∈ (z*, z* + u2], we can back out the outcome of bunchers under the kinked policy by deducting the shifters' contribution from the average observed outcome. That is, y^buncher(z) = ( y(z) h(z) − y^shifter(z) h^shifter(z) ) / ( h(z) − h^shifter(z) ), ∀z ∈ (z*, z* + u2], (15) where h^shifter(z) and y^shifter(z) are the density and outcome distributions of shifters for z ∈ (z*, z* + u2], which can be inferred by extrapolating the observed distributions of shifting agents in the region z > z* + u2 into the diffuse region (z*, z* + u2]. Similarly, we have h^buncher(z) = h(z) − h^shifter(z), ∀z ∈ (z*, z* + u2]. Remark 6 Note that while sharp-bunching agents and diffuse-bunching agents may differ, this does not threaten our estimation framework, because we consider all bunching agents as one entity and compare their average observed outcomes under the kinked policy to their average counterfactual outcomes under the linear policy. In other words, we are comparing the same group of agents under the treated and control states. Meanwhile, the data allow us to distinguish sharp bunchers from diffuse bunchers (under-shooting or over-shooting), whose predetermined characteristics we can compare to further shed light on the selection into diffuse bunching. Remark 7 For bunching agents, a reduction in z due to the kinked policy could directly affect y, given by μ(z*/z^ct − 1), ∀z^ct ∈ (z*, z* + Δz*]. Meanwhile, the change in T could also affect y, given by −λ(z* − z^ct)t, ∀z^ct ∈ (z*, z* + Δz*].
Therefore, we can draw the linkage between the treatment effect on “bunchers” and the structural parameters: τ_y^{TE,buncher} = ∫_{z*}^{z*+Δz*} [ μ(z*/z^ct − 1) − λ(z* − z^ct)t ] dz^ct. Hence, one can also use the treatment effect on bunchers τ_y^{TE,buncher} to identify the parameters (μ, λ) for bunchers, provided that there are at least two kinks to supply enough moments. 3 Empirical Estimation Our estimation framework for causal inference under kinked bunching relies on the estimation of the counterfactual density h^ct(·) and outcome y^ct(·) distributions under the linear policy. In this section, we elaborate on the empirical details of estimating these counterfactuals. One important and common feature of kink settings is that all agents (both shifters and bunchers) on one side of the policy threshold respond to the kinked policy. This is in contrast to the assumption under notch settings that adjustment happens only within a certain range (the manipulation region) around the threshold (see Diamond and Persson, 2017) [footnote 12]. To account for the responses by shifters, we propose a new method to recover the counterfactual density distribution h^ct(·), together with the marginal buncher's response Δz* and the counterfactual distribution y^ct(·). Our estimation method for the counterfactual density distribution has several desirable properties relative to the conventional approach used in the bunching literature. First, it automatically satisfies the integration constraint that the number of agents under the counterfactual and under the observed distribution should be the same. Second, it allows the observed and counterfactual density distributions for shifters to be non-parallel, because the adjustment by shifting agents is non-uniform, i.e., z − z^ct = z^ct (z*/(z* + Δz*) − 1). This relaxes the assumption made by Chetty et al.
(2011) that the counterfactual density distribution is a parallel upward shift of the observed one in the range z > z*. Last, our empirical strategy is model-free and can be applied to most kink settings. [Footnote 12: In fact, there are also interior responses in the standard notch design when there are both level and slope changes of incentives around the threshold; see, for example, Kleven and Waseem (2013) and Kleven (2016). However, such interior responses are largely ignored in practical applications of notched designs. As pointed out in Kleven (2016), interior responses are larger for kinks than for notches because, in practice, changes in marginal tax rates are typically larger for the former. Chetty et al. (2011) deal with interior responses under kink settings by assuming that the counterfactual density distribution is a parallel upward shift of the observed one in the region z > z*.] The estimation of the counterfactual density and outcome distributions does not require or depend on modeling assumptions, except Assumption 1, which states that agents' choice of z depends multiplicatively on individual heterogeneity (n) and the tax/co-payment rate (D = 0/1). Assumption 1 is valid in most bunching settings. 3.1 Estimating Counterfactual Density Distribution We start with the strategy to recover the counterfactual density distribution h^ct(z), which can be applied to any kinked setting. As shown in Equation (6), agents' responses to the kinked policy can be summarized as: (i) always-takers with z^ct ≤ z* remain unchanged, i.e., z = z^ct ≤ z*; (ii) bunchers with z^ct ∈ (z*, z* + Δz*] bunch at the threshold, i.e., z = z* < z^ct; (iii) shifters with z^ct > z* + Δz* reduce their value but do not bunch at the threshold, i.e., z = z^ct × z*/(z* + Δz*) > z*.
Figure 3 illustrates the observed density distribution of z under the kinked policy (the solid curve) and the counterfactual density distribution under the linear case (the dashed curve). First, to the left of the threshold is the distribution of always-takers. As their behavior does not change in response to the kinked policy, the observed and counterfactual density distributions overlap there. Second, agents with z^ct ∈ (z*, z* + Δz*] are the bunching agents; they move to the threshold z* in response to the kinked policy, generating the bunching mass observed at z* in Figure 3. Third, agents with z^ct > z* + Δz* are the shifting agents; they reduce their z in response to the kinked policy but stay above z* (i.e., in the interior of the upper bracket). These interior responses are represented by the leftward shift of the density distribution above z*. To recover the counterfactual density distribution h^ct(·) from the observed density distribution h(·), we design a two-step estimation framework. First, we move shifters back to their counterfactual locations, which yields an estimate of h^ct(z) in the region (z* + Δz*, ∞) for shifters. Then, we extrapolate h^ct(·) for bunching agents using the information on h^ct(·) for shifters and always-takers (as h^ct(·) = h(·) in the region [z_min, z*] for always-takers). Specifically, this is implemented by the following algorithm.
First, given the observed location z and an initial guess Δẑ*_initial for shifting agents, we infer the counterfactual choice z^{ct,initial} based on the following relation derived from Equation (6): z^{ct,initial} = z if z < z* − u1, and z^{ct,initial} = z (z* + Δẑ*_initial)/z* if z > z* + u2, (16) where [z* − u1, z* + u2] is the bunching region with diffusion, in which u1 = u2 = 0 under sharp bunching. (Figure 3: Change in the Density Distribution.) The inferred z^{ct,initial} for shifters forms the counterfactual density distribution h^{ct,initial}(z), ∀z ∈ ((z* + u2)(z* + Δẑ*_initial)/z*, ∞) [footnote 13], whereas the observed density distribution for always-takers equals the counterfactual density distribution, i.e., h^{ct,initial}(z) = h(z), ∀z ∈ (z_min, z* − u1). Next, we obtain the counterfactual density for bunching agents based on the assumption that the counterfactual density distribution is smooth. Specifically, we use the standard approach in the bunching literature: fit a flexible polynomial to the counterfactual distribution of always-takers and shifters outside the region [z* − u1, (z* + u2)(z* + Δẑ*_initial)/z*], and extrapolate the fitted distribution inside the region. Empirically, we group agents into bins indexed by j and estimate the following regression: h_j^{ct,initial} = Σ_{k=0}^{p} β_k (z_j^{ct,initial})^k + ε_j, (17) for z_j^{ct,initial} < z* − u1 or z_j^{ct,initial} > (z* + u2)(z* + Δẑ*_initial)/z*. [Footnote 13: When we relocate shifters back to their original locations, we reshape the observed density distribution h(z), ∀z ∈ (z* + u2, ∞), into h(z^ct z*/(z* + Δẑ*_initial)) ≡ h^{ct,initial}(z^{ct,initial}), ∀z^{ct,initial} ∈ ((z* + u2)(z* + Δẑ*_initial)/z*, ∞).]
Here, h_j^{ct,initial} is the number of agents in bin j; z_j^{ct,initial} is the inferred z level in bin j based on the initial guess Δẑ*_initial; and p is the polynomial order. The counterfactual bin counts in the region [z* − u1, (z* + u2)(z* + Δẑ*_initial)/z*] are obtained as the predicted values from Equation (17). After recovering h^{ct,initial}(z) over the full range of z, the excess bunching (with diffusion) at the threshold can be computed as [footnote 14]: B̂_initial = ∫_{z*−u1}^{z*} ( h(z) − h^{ct,initial}(z) ) dz + ∫_{z*}^{z*+u2} ( h(z) − h^shift(z) ) dz, (18) where h^shift(z) denotes the density of shifters under the kinked policy. Note that to the right of the bunching region, the observed density distribution contains only shifting agents, and hence h^shift(z) = h(z) for z > z* + u2. However, within the diffuse region (z*, z* + u2], the observed post-kink density distribution contains both shifters and diffuse bunchers. Assuming that h^shift(z) is smooth, we then use the observed distribution h(z) in the region z > z* + u2 to extrapolate the distribution of shifting agents into the diffuse region (z*, z* + u2] [footnote 15]. Third, we compute the updated Δẑ*_updated based on the relation: B̂_initial = ∫_{z*}^{z*+Δẑ*_updated} h^{ct,initial}(z) dz, (19) and check whether Δẑ*_updated equals Δẑ*_initial. If Δẑ*_updated > Δẑ*_initial, we increase the value of Δẑ*_initial and repeat the above steps until Δẑ*_updated = Δẑ*_initial. Following this process, we obtain the estimated marginal adjustment Δẑ* and the counterfactual density distribution ĥ^ct(z).
In addition, following the bunching literature, given the kinked policy and the estimated bunching response Δẑ*, we can calibrate e using the equation f(D=1|e)/f(D=0|e) = (z*+Δẑ*)/z*. For example, in Saez (2010), the equivalent equation would be ((1−t)/(1−t−Δt))^e = (z*+Δẑ*)/z*.

Four remarks about our proposed method are worth noting. First, our estimation does not depend on the initial guess value Δẑ*,initial, as it converges to the true unique Δz*. The reason is as follows. Suppose our initial guess Δẑ*,initial < Δz* (the true value). This means (z*+Δẑ*,initial)/z* < (z*+Δz*)/z*, and hence the elasticity ê_initial < e. In other words, our guessed Δẑ*,initial would be consistent with a lower level of bunching around the cutoff, compared to the true value, i.e., B̂^initial < B. However, as B is fixed (footnote 16) and B > B̂^initial, our updated value Δẑ*,updated > Δẑ*,initial, indicating that our initial guess is too low and we need to increase it. This self-correcting feature is important to our estimation process and leads to the convergence of the estimated value Δẑ*. Second and importantly, our method accommodates the fact that shifters further away from the policy threshold adjust z by less, and therefore the observed and the counterfactual density distributions to the right of the threshold may not be parallel. Our approach relaxes the parallel-shift assumption of Chetty et al.

Footnote 14: The excess bunching at the threshold under sharp bunching is B̂^initial = h(z*) − h^{ct,initial}(z*).

Footnote 15: Alternatively, we can use the inferred h^{ct,initial}(z) and the relation z = z^{ct,initial}·z*/(z*+Δẑ*,initial) to obtain h^{shift}(z) for z ∈ (z*, z*+u2].
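The self-correcting iteration can be illustrated with a deliberately stylized fixed point. The sketch below assumes sharp bunching, a uniform counterfactual density of level h0, and ignores the one-bin offset in Equation (19); the guess-dependent rescaling of the relocated shifters' density stands in for re-fitting the polynomial at each step. All numbers are made up:

```python
# Minimal sketch of the self-correcting iteration for Δz*, assuming
# sharp bunching and a uniform counterfactual density (level h0).
zstar, dz_true, h0 = 1.0, 0.2, 2.0
B_hat = h0 * dz_true        # excess mass at the kink (fixed, as in the first remark)

dz = 0.05                   # deliberately low initial guess
for _ in range(200):
    # Relocating shifters with guess dz rescales their density by
    # (z* + dz_true)/(z* + dz); extrapolating that level into the
    # bunching region gives the guess-dependent counterfactual density.
    h_ct = h0 * (zstar + dz_true) / (zstar + dz)
    # Eq. (19): solve  B_hat = integral of h_ct from z* to z* + dz_new
    dz_new = B_hat / h_ct
    if abs(dz_new - dz) < 1e-12:
        break
    dz = dz_new

assert abs(dz - dz_true) < 1e-6   # the iteration converges to the true response
```

Starting below the truth, each update raises the guess (a low guess overstates the counterfactual density, so the fixed excess mass implies a larger Δz), matching the monotone convergence argument above.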
(2011) (footnote 17). Third, by definition, our method satisfies the integration constraint that the number of agents under the observed and counterfactual density distributions should be the same, as our approach moves the exact shifting agents back to their original locations. Fourth, our method does not depend on the assumption of the counterfactual linear policy. In the main text, we assume the counterfactual is a linear policy with a low tax/co-payment rate. However, if we assume the counterfactual is a linear policy with a high tax/co-payment rate, the analysis is still valid with corresponding adjustments. Details are shown in Appendix B. Moreover, regardless of which counterfactual policy we assume, the estimated relation between z(D=1|e) and z(D=0|e), and hence the elasticity, remains the same.

3.2 Estimating Counterfactual Outcome Distribution and Parameters

In subsection 2.2, we lay out the framework to estimate causal effects under the kink setting, which incorporates the fact that all agents above the threshold have incentives to adjust their behavior. Now, we discuss empirical details, in particular the procedure to recover the counterfactual outcome distribution y^{ct}(·), which is a crucial step to identify the causal effects of the kinked policy.

Footnote 16: Under sharp bunching, B = h(z*) − h^{ct}(z*), where h^{ct}(z*) mainly depends on the shape of h^{ct}(z) = h(z), ∀z < z*. Therefore, B does not depend much on the initial guess Δẑ*,initial.

Footnote 17: Chetty et al. (2011) estimate a regression of the following form:

$$c_j\left(1+\mathbb{1}\{z_j>z^*+u_2\}\,\frac{\widehat{B}}{\sum_{z^*+u_2}^{\infty}c_j}\right)=\beta_0+\sum_{k=1}^{p}\beta_k\,z_j^{\,k}+\sum_{i=z^*-u_1}^{z^*+u_2}\gamma_i\,\mathbb{1}\left[z_j=i\right]+\varepsilon_j.$$

The term 1{z_j>z*+u2}·B̂/∑_{z*+u2}^{∞}c_j is a parallel upward shift of the observed density, which captures the change in z for shifters such that the integration constraint is met.
Specifically, first, given that always-takers do not respond to the kinked policy (z^{ct} = z) and pay the same amount of money (or tax) T, their observed outcomes are the same as their counterfactual outcomes, that is, y^{ct}_n = y_n, ∀n ∈ always-takers. Therefore, the counterfactual outcome distribution for always-takers is y^{ct}(z^{ct}) = y(z), ∀z^{ct} < z*. Second, for each shifter, in the previous subsection we recovered the marginal bunchers' response Δz* and each shifter's counterfactual location z^{ct} = z(z*+Δz*)/z*, ∀z > z*, which forms the counterfactual density distribution. To make sure that we are comparing the same shifter under the counterfactual and the kinked policies, we relocate shifters back to their initial locations, which generates the auxiliary outcome distribution under the kinked policy y^r(z^{ct}), ∀z^{ct} > z*+Δz* (footnote 18). It represents each shifter's value of y under the kinked policy, including the direct impacts from changes in z and the impacts from changes in T, while excluding the relocation impacts (as we have relocated shifters back to their counterfactual locations). As shown in Equation (8), there would be both level and slope changes when comparing the counterfactual outcome distribution y^{ct}(z^{ct}) with the auxiliary outcome distribution under the kinked policy y^r(z^{ct}). Moreover, if we extrapolate the obtained auxiliary distribution y^r(z^{ct}) to the cutoff z*, then the slope and the level change at z* can be used to calibrate the sufficient statistics µ, λ, as shown in Equations (9, 10) (footnote 19). These parameters represent how changes in z directly impact y and how changes in T (due to changes in z and the kinked policy) impact y. Empirically, we jointly estimate the counterfactual outcome distribution y^{ct} and the slope and level changes.
Specifically, we use the observed (also the counterfactual) outcome distribution for always-takers (y^{ct}(z^{ct}) = y(z), ∀z^{ct} < z*−u1) and the obtained auxiliary outcome distribution for shifters (y^r(z^{ct}), ∀z^{ct} > (z*+u2)(z*+Δz*)/z*) to fit a flexible polynomial distribution, allowing intercept and slope changes at the threshold (footnote 20). The estimation equation for the counterfactual outcome distribution is as follows:

$$y_j^{reg}=\sum_{k=0}^{q}\alpha_k\left(z_j^{ct}\right)^k+a_0\,\mathbb{1}\left[z_j^{ct}>z^*\right]+a_1\,\mathbb{1}\left[z_j^{ct}>z^*\right]z_j^{ct}+\varepsilon_j\tag{20}$$

if z_j^{ct} < z*−u1 or z_j^{ct} > (z*+u2)(z*+Δz*)/z*, where j indicates the bin; q is the polynomial order; y_j^{reg} = y_j = y_j^{ct} for always-takers with z_j^{ct} < z*−u1; and y_j^{reg} = y_j^r for shifters with z_j^{ct} > (z*+u2)(z*+Δz*)/z*.

Footnote 18: Note y^r(z^{ct}) ≡ y(z^{ct}·z*/(z*+Δz*)), ∀z^{ct} > z*+Δz*.

Footnote 19: We could also check slope and level changes at locations other than z* by plugging in the corresponding value of z^{ct}. This does not affect the calibrated values of the parameters µ, λ.

Footnote 20: In the notch setting with just a level change of incentives at the threshold, Diamond and Persson (2017) include the term 1[z^0_j ≥ z*] in their estimation equation to capture the payoff (change in outcome) of just passing the threshold in a world without adjustment in z. In kink settings, even if agents do not manipulate/adjust their z, the kinked policy would lead to a slope change at the threshold. Further, agents do adjust their value of z, leading to level changes in the outcome y. Therefore, even if we relocate shifters back to their initial locations, the auxiliary outcome distribution under the kinked policy would still indicate slope and level changes at the threshold, compared to the counterfactual (observed) distribution under the linear policy to the left of the threshold.
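The piecewise fit in Equation (20) is an ordinary least-squares regression with level and slope dummies. A minimal sketch on synthetic bins (all coefficients made up; the excluded window around z* is omitted for brevity):

```python
import numpy as np

# Sketch of Eq. (20): polynomial in z^ct plus a level shift (a0) and a
# slope shift (a1) above z*. Synthetic data with known true shifts.
zct = np.linspace(0.5, 2.5, 200)
zstar = 1.5
above = (zct > zstar).astype(float)           # 1[z^ct_j > z*]
a0_true, a1_true = 0.4, -0.25
y = 1.0 + 0.8 * zct - 0.1 * zct**2 + a0_true * above + a1_true * above * zct

# design matrix: [1, z, z^2, 1[z>z*], z * 1[z>z*]]
X = np.column_stack([np.ones_like(zct), zct, zct**2, above, above * zct])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

assert abs(coef[3] - a0_true) < 1e-6          # recovered level change
assert abs(coef[4] - a1_true) < 1e-6          # recovered slope change
```

The first three coefficients trace out the counterfactual outcome polynomial, while the last two are the â0, â1 used in the calibration that follows.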
Note that the estimated coefficients â0 and â1 reflect the level change and the slope change at the threshold, respectively: a0 captures the level change between the auxiliary outcome distribution and the counterfactual outcome distribution of shifters, while a1 captures the corresponding slope change. We thus include both the level and slope changes at the threshold to capture the change in outcomes for shifters. Hence, following Equations (9, 10), we calibrate the values of λ, µ based on the following equations:

$$a_0=\mu\left(\frac{z^*+\Delta z^*}{z^*}-1\right)+\lambda\,(t+\Delta t)\,z^*\left(\frac{z^*}{z^*+\Delta z^*}-1\right)$$

$$a_1=\lambda\left((t+\Delta t)\,\frac{z^*}{z^*+\Delta z^*}-t\right)$$

With two equations and two unknowns, we can calibrate λ, µ. Relying on the assumption that the relationship between the outcome y and z would be smooth under the counterfactual policy, we obtain the counterfactual outcome distribution from Equation (20) as ŷ_j^{ct} = ∑_{k=0}^{q} α̂_k (z_j^{ct})^k. Meanwhile, the treated outcome for shifters y_j^{shift} in [z*, z*+u2) is unobserved with diffuse bunching, given that this region contains both shifters and diffused bunchers under the kinked policy. However, for the range z > z*+u2, there are only shifters under the kinked policy; therefore, y_j^{shift} = y_j for z > z*+u2. Therefore, we fit a flexible polynomial to the observed distribution of y_j for shifters in the range z > z*+u2 and extrapolate the fitted distribution to obtain y_j^{shift} in (z*, z*+u2], with the assumption that the relationship between the observed outcome y^{shift} and z under the kinked policy is smooth to the left of z*+u2 (footnote 21). Given that we have recovered the counterfactual density distribution in Equation (17), the counterfactual outcome distribution in Equation (20), and the density and outcome distributions of shifters within the diffuse bunching region, we can estimate the impacts of the kinked policy on bunchers and shifters following Equations (7) and (13).

Remark 8. When comparing our approach with the regression kink design (RKD), there are certain similarities and some differences as well. First, RKD does not allow adjustment of z around the threshold, which presumes that agents' heterogeneity is smooth around the threshold. In our estimation process, we mimic this intuition by relocating shifters back to their counterfactual locations of z. Second, in RKD there is no level change but there is a slope change at the threshold due to the kinked incentives T (e.g., the maximum claim on unemployment insurance). That is, even if z does not change, the change in the slope of T at the threshold would lead to a change in the slope of Y at the threshold. Therefore, RKD allows us to estimate the impact of T on Y (i.e., λ in our setup). However, in bunching, even if we relocate shifters back to the counterfactual location z^{ct}, the fact that their z did change would lead to changes in Y as well. Therefore, on top of the slope change as in RKD, we would have a level change due to (i) the direct impact of Δz on y and (ii) the impact of ΔT (due to Δz) on y. Hence, we would have both slope and level changes at the threshold. It is more complex, but it also adds identifying power in the sense that we can use the level change to identify the direct impact of z on y (i.e., µ in our setup), which is not identified under RKD as there is no change in z to start with.
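Given estimates (â0, â1), the two calibration equations form a linear system in (µ, λ). A sketch with illustrative policy numbers (z*, Δz*, t, Δt, and the "true" parameters are all made up) verifies that the system inverts cleanly:

```python
import numpy as np

# Calibrating (mu, lambda) from the level and slope changes (a0, a1).
zstar, dz, t, dt = 1.0, 0.2, 0.1, 0.05
mu_true, lam_true = 0.7, -0.4

r = zstar / (zstar + dz)                       # z*/(z* + Δz*)
# coefficient matrix of the two calibration equations in (mu, lambda)
A = np.array([
    [(zstar + dz) / zstar - 1.0, (t + dt) * zstar * (r - 1.0)],  # a0 row
    [0.0,                        (t + dt) * r - t],              # a1 row
])
a = A @ np.array([mu_true, lam_true])          # (a0, a1) implied by the truth
mu_hat, lam_hat = np.linalg.solve(A, a)        # invert the 2x2 system

assert abs(mu_hat - mu_true) < 1e-9 and abs(lam_hat - lam_true) < 1e-9
```

Because a1 involves only λ, one can also solve recursively: λ from the slope change, then µ from the level change.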
In terms of policy suggestions, apart from evaluating where to set the cutoff (which is also answered by RKD), we can also evaluate to what extent we should set the difference in marginal incentives below/above the threshold. This enables us to search for the optimal policy within a large scope of choices.

3.3 Discussion on Exclusion Restriction

Recall that in Section 2, we defined the average treatment effects of the policy on bunchers and shifters and showed that estimating the treatment effects requires us to recover the counterfactual density and outcome distributions. Subsections 3.1 and 3.2 demonstrate the steps for estimating the counterfactual distributions, under the assumption that the counterfactual distributions are smooth. However, one might be concerned about whether the counterfactual distributions are correctly estimated; if not, this would bias the estimated treatment effects. Blomquist et al. (2021) point out that when the distribution of agent heterogeneity is unrestricted, the estimated counterfactual density distribution (following a parametric method) could be of any form, and thus the estimated extent of bunching is not informative about the policy response. Thus, Blomquist et al. (2021) suggest exploring cross-sectional or over-time variation from the policy threshold to help discipline the estimated counterfactual density distribution. We address the concern in two ways. First, following Blomquist et al.

Footnote 21: Alternatively, we can use the inferred ŷ_j^{ct}, the estimated â0, â1, the counterfactual density ĥ_j^{ct}, and the relation z = z^{ct}·z*/(z*+Δz*) to calculate y_j^{shift} for shifters with z ∈ (z*, z*+u2].
(2021), we suggest exploring the density distribution of the same population before the focal policy threshold takes effect (i.e., "over-time variation") or the density distribution of another subset of the population that is not subject to the same threshold (i.e., "cross-sectional variation") to infer whether the counterfactual distribution is correctly specified. We can check whether the distribution in these placebo tests follows a similar pattern (or shape) as the estimated counterfactual distribution of the focal group. Alternatively, we can use the distributions of these placebo groups (i.e., the focal group before the policy starts, or other groups that are not subject to the same policy) as the counterfactual distributions. Second, we would like to clarify that Blomquist et al. (2021)'s critique of the lack of information on the shape of the counterfactual distribution under a single kink policy is less severe in our setting, because our method does not require assumptions on the functional form of the counterfactual density distribution for shifters and bunchers. We infer the counterfactual distribution non-parametrically by directly calculating how much each shifter has adjusted his/her value of z, which automatically forms the counterfactual density distribution. The only place we use a parametric assumption is when inferring the counterfactual density distribution of bunchers (the middle part), for which we assume that the counterfactual distribution is smooth and can be extrapolated from the left and right parts of the distribution. In short, our method for estimating the counterfactual density does not require a parametric assumption on the whole counterfactual distribution; it uses the parametric assumption only for the middle part. Therefore, the potential bias is likely less severe. Nevertheless, we suggest following Blomquist et al.
(2021) by exploring the cross-sectional or over-time variation from the policy threshold. If one finds that the counterfactual density and outcome distributions are correctly specified and there is no discontinuity at the kink point, then any difference between the observed and the counterfactual distribution is driven by agents' responses to the kinked policy. This alleviates the concern that the estimated treatment effect is due to reasons other than the policy response.

4 Extensions

In this section, we discuss a number of extensions to our baseline framework presented in the previous sections: reference-point (rounding) effects, agents who do not respond to the kinked policy due to optimization frictions (denoted stayers), heterogeneity in the structural parameter e, and relabeling behavior of z. In each scenario, we discuss the potential biases in our baseline analysis and the remedy strategies.

4.1 Reference Points

When the policy threshold is a reference point, the excess bunching at the threshold may also capture the reference-point effect, which may lead to over-estimated responses compared to the true values. In other words, with one moment (the estimated excess bunching mass), there are two underlying structural parameters (the reference-point effect and the policy effect). To isolate the policy effect from the reference-point effect, we need an additional empirical moment to jointly identify these two structural parameters. One commonly used approach in the bunching literature is to exploit the excess bunching at similar reference points that are not thresholds to control for the bunching due to the reference-point effect at the threshold (e.g., Chetty et al.
2011; Kleven & Waseem 2013; Best & Kleven 2016), with the assumption that reference-point effects are the same across similar reference points (footnote 22). Following this literature, we revise the density distribution estimation in Equation (17) by including a set of reference-point fixed effects to contain the potential bias from the reference-point effects:

$$h_j^{ct,initial}=\sum_{k=0}^{p}\beta_k\left(z_j^{ct,initial}-z^*\right)^k+\sum_{r\in R}\gamma_r\,\mathbb{1}\left[\frac{z_j}{r}\in N\right]+\varepsilon_j$$

if z_j^{ct,initial} < z*−u1 or z_j^{ct,initial} > (z*+u2)(z*+Δẑ*,initial)/z*, where N is the set of reference points and R is a vector of multiples that capture similar reference points. The counterfactual density distribution is ĥ_j^{ct,initial} = ∑_{k=0}^{p} β̂_k (z_j^{ct,initial}−z*)^k + ∑_{r∈R} γ̂_r·1[z_j^{ct,initial}/r ∈ N]. To address the concern that the reference-point effects may generate potential bias in the estimation of the outcome distribution, we take a similar remedy approach.

Footnote 22: In addition, these studies often assume an equal degree of excess bunching at the same reference point under the treated and counterfactual states. For instance, Chetty et al. (2011) adjust for the interior responses of shifting agents by allowing for an upward shift in the density distribution, which is equivalent to assuming the same degree of excess bunching at reference points between the treated and counterfactual states. We consider the same assumption in all applications.
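The revised density regression amounts to adding reference-point dummies to the polynomial design matrix. A toy version with synthetic bin counts, taking multiples of 5 as a stand-in for the set N of reference points:

```python
import numpy as np

# Toy version of the revised density regression: polynomial in (z - z*)
# plus a dummy for round-number reference points (multiples of 5 here).
z = np.arange(1.0, 60.0)                       # bin values
zstar = 30.0
is_ref = (z % 5 == 0).astype(float)            # 1[z_j is a reference point]
# synthetic counts: smooth quadratic plus a bump of 15 at reference points
h = 200 - 0.5 * (z - zstar) + 0.002 * (z - zstar) ** 2 + 15 * is_ref

X = np.column_stack([np.ones_like(z), z - zstar, (z - zstar) ** 2, is_ref])
beta, *_ = np.linalg.lstsq(X, h, rcond=None)

# the reference-point bump is separated from the smooth polynomial part
assert abs(beta[3] - 15) < 1e-6
```

Subtracting the fitted reference-point component from the bunching bin is what prevents round-number excess mass from being attributed to the policy.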
Specifically, we revise the estimation framework of the outcome distribution (20) by including a set of reference-point fixed effects:

$$y_j^{reg}=\sum_{k=0}^{q}\alpha_k\left(z_j^{ct}-z^*\right)^k+a_0\,\mathbb{1}\left[z_j^{ct}\le z^*\right]+a_1\,\mathbb{1}\left[z_j^{ct}\le z^*\right]\left(z_j^{ct}-z^*\right)+\sum_{r\in R}\rho_r\,\mathbb{1}\left[\frac{z_j}{r}\in N\right]+\varepsilon_j$$

if z_j^{ct} < z*−u1 or z_j^{ct} > (z*+u2)(z*+Δz*)/z*. The counterfactual outcome distribution is given as ŷ_j^{ct} = ∑_{k=0}^{q} α̂_k (z_j^{ct}−z*)^k + ∑_{r∈R} ρ̂_r·1[z_j^{ct}/r ∈ N].

4.2 Stayers

Our framework so far implicitly assumes that all agents behave according to the optimality condition (6) without friction. However, as pointed out in the bunching literature (e.g., Kleven and Waseem 2013), optimization frictions (such as adjustment costs and inattention) may induce agents to stay at their original locations even though they would adjust z in the absence of frictions. We denote these agents as stayers and extend our estimation approach to incorporate stayers in calculating causal effects. Specifically, bunching studies often introduce an additional parameter to characterize the adjustment costs in the presence of optimization frictions (explaining the gap between the bunching sizes with and without attenuation from frictions). They then use additional empirical moments to uncover the parameter corresponding to optimization frictions and to estimate the underlying structural parameter that governs agents' behavior without frictions (e.g., Chetty et al. 2010; Kleven and Waseem 2013; Gelber et al. 2014; Manoli et al. 2016).
For example, in the notch design with strictly dominated regions (i.e., upward tax notches in the labor-leisure decision), Kleven and Waseem (2013) develop an approach that uses the observed density in the strictly dominated region to estimate the share of stayers (under the assumption of a constant share within this region) (footnote 23). However, the change in the marginal incentives across the policy threshold in the kink setting offers only one empirical moment, the size of bunching, for estimation. Instead, kink studies often construct alternative additional moments generated by either multiple thresholds with different-sized kinks or changes in the size of a kink at a given threshold over time, to jointly identify the structural parameters of interest and the friction parameter, with the assumption that the friction and elasticity parameters are the same at multiple thresholds or over time (e.g., Chetty et al. 2010, 2011; Gelber et al. 2014). These approaches also apply to our setup. In addition, we propose a new approach to estimate the share of stayers by exploiting changes in the curvature of the density distribution under the treated and the counterfactual states. Specifically, we follow the practice of Kleven and Waseem (2013) and others in assuming a fixed share of stayers α in each bin of z. With the introduction of the kinked policy, a (1−α) share of shifting agents relocates from z^{ct} to z (with the relation between z and z^{ct} defined in Equation 6), and an α share of agents stays unchanged (due to optimization frictions).
Such relocation leads to a change of the density distribution from h^0(z) to h^1(z) as follows:

$$h^1(z)=\begin{cases} h^{ct}(z), & \text{if } z<z^*\\[4pt] \displaystyle\int_{z^*}^{z^*+\Delta z^*}(1-\alpha)\,h^{ct}(z)\,dz+h^{ct}(z^*), & \text{if } z=z^*\\[4pt] (1-\alpha)\,h^{ct}\!\left(z\,\dfrac{z^*+\Delta z^*}{z^*}\right)+\alpha\,h^{ct}(z), & \text{if } z>z^*. \end{cases}\tag{21}$$

Specifically, each bin j of shifters (i.e., z > z*) contains two groups of agents: an α share of stayers and a (1−α) share of relocated shifting agents. For bunchers, the density at the threshold contains those bunching from the initial range (z*, z*+Δz*) and those with the initial value z*. Always-takers remain unchanged. We perform the following procedure to estimate the share of stayers and the counterfactual density and outcome distributions. For a given guess of the value of α and the shape of a polynomial function h^{ct}(·), we use Equation (21) to obtain h(·) and fit the observed density distribution under the kinked policy. We then select the share of stayers and the estimated polynomial coefficients that minimize the mean squared error (footnote 24), which allows us to obtain the estimated α̂ and Δẑ*. Note that changes in the curvature of the density distribution are used to capture the additional parameter α. Note also that the counterfactual outcome values for stayers and shifters at the same value of z might be different. Without loss of generality, assume the relative difference is captured by β.

Footnote 23: More generally, downward tax notches and notches in contexts other than the labor-leisure decision do not always contain strictly dominated regions. In such cases, studies (see, for example, Best et al. 2015; Manoli and Weber 2016) recover the constant share of stayers from a very narrow range above/below the threshold by ruling out extreme preferences.
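The mixture in Equation (21) for z > z* can be sketched numerically. Everything below is hypothetical: a known linear counterfactual density h^{ct}, a known Δz*, and an observed density built with a true stayer share; α is then recovered by grid search over the mean squared error, a simplified stand-in for the joint fit of α and the polynomial coefficients:

```python
import numpy as np

# Sketch of the stayer extension (Eq. 21), restricted to z > z*.
zstar, dz, alpha_true = 1.0, 0.2, 0.3
hct = lambda z: 5.0 - 1.0 * z          # assumed counterfactual density

def h_kink(z, alpha):
    # Eq. (21) for z > z*: (1-alpha) relocated shifters + alpha stayers
    return (1 - alpha) * hct(z * (zstar + dz) / zstar) + alpha * hct(z)

zgrid = np.linspace(zstar + 0.01, 2.0, 100)
h_obs = h_kink(zgrid, alpha_true)      # "observed" post-kink density

grid = np.linspace(0.0, 1.0, 101)      # candidate stayer shares
mse = [np.mean((h_kink(zgrid, a) - h_obs) ** 2) for a in grid]
alpha_hat = grid[int(np.argmin(mse))]

assert abs(alpha_hat - alpha_true) < 1e-6
```

In practice h^{ct} is unknown and its polynomial coefficients are searched jointly with α, so it is the change in the shape of the fitted density, not its level, that pins α down.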
For a given guess of the value of β and the shape of a polynomial function y^{ct}(·), using the estimated Δz* and Equations (9, 10), we fit the observed outcome distribution under the kinked policy. We then select the relative difference in the counterfactual outcome between stayers and shifters and the estimated polynomial coefficients that minimize the mean squared error, which allows us to obtain the estimated β̂ and the counterfactual outcome distribution. Meanwhile, the slope and level changes at the threshold from the regression (with some adjustments) are used to calibrate the parameters µ, λ. Note that, similar to the density estimation part, we use changes in the curvature of the outcome distribution to capture the additional parameter β.

4.3 Heterogeneity in Structural Parameter

In our benchmark analysis, we assume homogeneous preferences across agents; that is, a single structural elasticity e across agents. Given that agents may have different responses to the policy, we extend our estimation framework to account for heterogeneity in e. Specifically, consider a joint distribution of agents' innate type φ and response elasticity e, denoted f(φ,e), which determines a counterfactual density distribution h̃^{ct}(z,e) under the linear policy, with h^{ct}(z) ≡ ∫_e h̃^{ct}(z,e) de. For each value of e, behavioral responses can be characterized as in the benchmark model, in which the marginal bunching agents' adjustment Δz*_e is increasing in e. In the bunching literature with homogeneous preferences, the structural parameter e is inferred from the observed excess bunching mass B in the data, with one empirical moment linking B to e as derived in Equation (4). However, when there is heterogeneity in e, Equation (4) becomes B = ∫_e ∫_{z*}^{z*+Δz*_e} h̃^{ct}(z,e) dz de. Hence, linking one empirical moment B to multiple parameters e causes the empirical estimation to fall short of identification freedom and power. If the dimensions of heterogeneity are known, we can split the whole sample into subsamples according to these determinants, as conducted by Best et al. (2015b). This allows unbiased estimation within subsamples with relatively homogeneous preferences. However, without knowledge of the heterogeneity, one approach commonly used in the bunching literature to address this freedom issue is to estimate the average response E[Δz*_e]. Specifically, using the procedure proposed in Section 3.1 and replacing Δz* by E[Δz*_e], we can estimate the counterfactual density distribution together with the average response E[Δz*_e], and then estimate the auxiliary outcome distribution (by relocating agents back to their counterfactual locations) and the counterfactual outcome distribution using the procedure discussed in Section 3.2, and hence the treatment impacts.

Footnote 24: The intuition is as follows. Suppose in reality there are stayers (α > 0). If we impose α = 0, we would have a maximum achievable prediction power. However, if we allow α > 0, we would have a higher prediction power for the density distribution by capturing the curvature change. To further aid understanding, consider a log transformation x = ln z. Note x^{ct} = x + cst, where cst = ln((z*+Δz*)/z*). We draw the density distribution of x. When α = 0, we have h̃^{ct}(x) = h̃(x − cst). However, when α > 0, the above equality no longer holds; that is, h̃^{ct} and h̃ no longer share the same functional transformation. Therefore, if in reality α > 0 but we impose α = 0, our prediction power would be lower. Hence, information about changes in the functional form captures α.
However, the estimated elasticity and treatment effects essentially represent the elasticity and treatment effects at the average response, instead of the average elasticity and treatment effects, creating potential aggregation biases. In the following, we use a simple example to discuss the aggregation bias from heterogeneous preferences in the kink design and how it affects our estimation of the counterfactual density distribution and the policy effects (footnote 25). Specifically, consider a case with two groups of agents at each level of z, denoted L and S. Their shares are denoted α_L and α_S, with α_L + α_S = 1. They hold different structural parameters; without loss of generality, we assume e_L > e_S. A larger e implies a larger bunching response, i.e., Δz*,L > Δz*,S. We first discuss the potential biases in the estimation of the counterfactual density distribution h^{ct}(z) and the marginal buncher's response Δz* under heterogeneity in e. Consider shifters with a value z_x > z*. Suppose we ignore the heterogeneity; we then obtain the estimated average response level Δz̃* ≡ Ê[Δz*_e] from the excess mass B and hence would have z̃^{ct}_x = z_x(z*+Δz̃*)/z*. Therefore, the estimated counterfactual density at z̃^{ct}_x is given by ĥ^{ct}(z̃^{ct}_x) = h(z̃^{ct}_x·z*/(z*+Δz̃*)) = h(z_x) = α_L h^{ct}(z^{ct,L}_x) + α_S h^{ct}(z^{ct,S}_x), where z^{ct,L}_x = z_x(z*+Δz*,L)/z* and z^{ct,S}_x = z_x(z*+Δz*,S)/z*.

Footnote 25: In the notch design, the literature generally considers such aggregation bias to be small (Kleven, 2016). For example, Kleven and Waseem (2013) discuss the bound of such aggregation bias in the case of the notch design, and Best et al. (2015b) conduct a rich set of subsample analyses and show that such aggregation bias is very small under the notch design.
However, the true counterfactual density at z̃^{ct}_x should be h^{ct}(z̃^{ct}_x) = α_L h^{ct}(z_x(z*+Δz̃*)/z*) + α_S h^{ct}(z_x(z*+Δz̃*)/z*). Hence, using the average response Δz̃* generates a bias in the estimation of the counterfactual density h^{ct} at z̃^{ct}_x:

$$\text{Aggregation Bias in } h^{ct}\left(\tilde z^{ct}_x\right)=\hat h^{ct}\left(\tilde z^{ct}_x\right)-h^{ct}\left(\tilde z^{ct}_x\right)=\alpha_L\left[h^{ct}\left(z^{ct,L}_x\right)-h^{ct}\left(\tilde z^{ct}_x\right)\right]-\alpha_S\left[h^{ct}\left(\tilde z^{ct}_x\right)-h^{ct}\left(z^{ct,S}_x\right)\right].$$

The degree of the bias in the density estimation depends on three factors: (1) the slope of the counterfactual density h^{ct}(z), which determines the number of agents at z^{ct,L}_x, z̃^{ct}_x, and z^{ct,S}_x under the counterfactual linear state; (2) the relative sizes of the heterogeneous groups in the sample, α_L and α_S; and (3) the degree of heterogeneity e_L, e_S, which determines Δz*,S and Δz*,L. When the slope of the counterfactual density h^{ct}(z) of the shifters is relatively small (i.e., h^{ct}(z^{ct,L}_x) ≈ h^{ct}(z̃^{ct}_x) ≈ h^{ct}(z^{ct,S}_x) for all shifters), the bias in the density estimation can be ignored. In this scenario, the estimated average response Δz̃* is the weighted average of each heterogeneous group's response, with the relative share of each group as the weight, i.e., Δz̃* = (α_S Δz*,S + α_L Δz*,L)/(α_S + α_L) (footnote 26).

Next, we consider the potential bias in the average policy effects from the heterogeneity in e. Note that a crucial step in our proposed estimation framework of the policy effects is to use the observed outcome distribution of shifters to estimate an auxiliary outcome distribution y^r(z), which relies on the estimation of Δz*. When there is heterogeneity in the structural parameter e and yet we ignore it by adjusting each shifter's location using the estimated average response level Δz̃* (i.e., z^{ct} = z(z*+Δz̃*)/z*), there would be biases in the estimation of the auxiliary outcome distribution and the policy impacts. Similarly, consider shifters with a value z_x > z*. Suppose we ignore heterogeneity in the parameter e and adjust shifters at z_x to their counterfactual locations using the estimated average response Δz̃*; we have z̃^{ct}_x = z_x(z*+Δz̃*)/z*. At the point z̃^{ct}_x, the estimated auxiliary outcome distribution would be

$$\widehat{E\left[y^r\left(\tilde z^{ct}_x\right)\right]}=y\left(\tilde z^{ct}_x\,\frac{z^*}{z^*+\widetilde{\Delta z}^*}\right)=y(z_x)=\alpha_L\,y^{L,r}\left(z^{ct,L}_x\right)+\alpha_S\,y^{S,r}\left(z^{ct,S}_x\right)$$

where y^{L,r}(z^{ct,L}_x) denotes the outcome under the kinked policy for shifters of group L whose counterfactual values are z^{ct,L}_x = z_x(z*+Δz*,L)/z*, and vice versa for y^{S,r}(z^{ct,S}_x). However, the true auxiliary outcome at z̃^{ct}_x should be

$$E\left[y^r\,\middle|\,\tilde z^{ct}_x\right]=\alpha_L\,y^{L,r}\!\left(z_x\,\frac{z^*+\widetilde{\Delta z}^*}{z^*}\right)+\alpha_S\,y^{S,r}\!\left(z_x\,\frac{z^*+\widetilde{\Delta z}^*}{z^*}\right).$$

Hence, using the average response Δz̃* generates a bias in the estimation of y^r at z̃^{ct}_x:

$$\text{Aggregation Bias in}\left[y^r\,\middle|\,\tilde z^{ct}_x\right]=\widehat{E\left[y^r\,\middle|\,\tilde z^{ct}_x\right]}-E\left[y^r\,\middle|\,\tilde z^{ct}_x\right]=\alpha_L\left(y^{L,r}\!\left(z_x\,\frac{z^*+\Delta z^{*,L}}{z^*}\right)-y^{L,r}\!\left(z_x\,\frac{z^*+\widetilde{\Delta z}^*}{z^*}\right)\right)+\alpha_S\left(y^{S,r}\!\left(z_x\,\frac{z^*+\Delta z^{*,S}}{z^*}\right)-y^{S,r}\!\left(z_x\,\frac{z^*+\widetilde{\Delta z}^*}{z^*}\right)\right).$$

Similarly, the degree of bias in the outcome estimation depends on: (1) the slope of the auxiliary outcome distribution y^r(z), which determines the values at z^{ct,L}_x, z̃^{ct}_x, and z^{ct,S}_x; (2) the shares of the heterogeneous groups α_S and α_L; (3) the degree of heterogeneity, which determines Δz*,L and Δz*,S; and (4) the bias in the counterfactual density estimation h^{ct}(z).

Footnote 26: To see this, note that the excess bunching is composed of bunching agents from both the L and S groups: B = α_L ∫_{z*}^{z*+Δz*,L} h^{ct}(z) dz + α_S ∫_{z*}^{z*+Δz*,S} h^{ct}(z) dz. Given the estimated counterfactual density ĥ^{ct}(z) and excess bunching B̂ = B, we estimate the average response Δz̃* using B̂ = ∫_{z*}^{z*+Δz̃*} ĥ^{ct}(z) dz. Thus, we have α_S ∫_{z*+Δz*,S}^{z*+Δz̃*} h^{ct}(z) dz − α_L ∫_{z*+Δz̃*}^{z*+Δz*,L} h^{ct}(z) dz = 0. When h^{ct}(z) is approximately locally linear, this can be approximated as

$$\alpha_S\beta^S\left(\widetilde{\Delta z}^*-\Delta z^{*,S}\right)-\alpha_L\beta^L\left(\Delta z^{*,L}-\widetilde{\Delta z}^*\right)=\left(\alpha_S\beta^S+\alpha_L\beta^L\right)\widetilde{\Delta z}^*-\left(\alpha_S\beta^S\Delta z^{*,S}+\alpha_L\beta^L\Delta z^{*,L}\right)=0,$$

where β^S = [h^{ct}(z*+Δz*,S)+h^{ct}(z*+Δz̃*)]/2 and β^L = [h^{ct}(z*+Δz*,L)+h^{ct}(z*+Δz̃*)]/2. Hence, the estimated average response is Δz̃* = (α_S β^S Δz*,S + α_L β^L Δz*,L)/(α_S β^S + α_L β^L). If the slope of the counterfactual density h^{ct}(z) around z* is relatively small, then β^S = β^L = h^{ct}(z*+Δz̃*), and the estimated average response is the weighted average of each heterogeneous group's response, with the relative share of each group as the weight, i.e., Δz̃* = (α_S Δz*,S + α_L Δz*,L)/(α_S + α_L).
When we have small aggregation biases in the estimation of the counterfactual density distribution and the auxiliary outcome distribution has a small slope, we can obtain a good approximation of the outcome distribution and of the average treatment effects. In addition, we propose another approach to address the aforementioned aggregation bias under an alternative identifying assumption. Specifically, consider a logarithm transformation of $z$, denoted as $r \equiv \ln(z)$. When the density distribution of $r$ and the outcome distribution of $y$ against $r$ are linear,27 we can obtain unbiased estimates of the counterfactual density and outcome distributions, and thus an unbiased estimate of the average treatment effects in the presence of heterogeneity. The reasoning is as follows. With the logarithm transformation, each shifter's adjustment of $r$ in response to the introduction of the kinked policy becomes a constant, i.e., $r - r^{ct} = \ln\frac{z^*}{z^* + \Delta z^*}$, which leads to a parallel-rightward shift of the density curve for the region to the left of the cutoff $r^* \equiv \ln z^*$.28 In other words, the post-kink density distribution has the same slope as the counterfactual one, but with a different intercept. We use the same illustrative example: two groups of agents at each level of $r$, with shares $\alpha^L, \alpha^S$ and structural parameters $e^L, e^S$, respectively. For shifters with a value $r_x > r^*$, we have:

$$h(r_x) = \alpha^L h^{ct}(r^{ct,L}_x) + \alpha^S h^{ct}(r^{ct,S}_x) = \alpha^L h^{ct}(r_x - \Delta r^{*,L}) + \alpha^S h^{ct}(r_x - \Delta r^{*,S}) = h^{ct}(r_x) - \frac{dh}{dr}\times\left(\alpha^L \Delta r^{*,L} + \alpha^S \Delta r^{*,S}\right)$$

Given that the amount $-\frac{dh}{dr}\times(\alpha^L \Delta r^{*,L} + \alpha^S \Delta r^{*,S})$ is a constant, the counterfactual density distribution to the left of $r^*$ is also a downward shift of the observed one.
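The key property used above is that a proportional response in $z$ becomes a constant additive shift in $r = \ln z$, identical for every shifter regardless of location. A minimal sketch (hypothetical cutoff and response values):

```python
import math

# Hypothetical values: cutoff z* = 100 and marginal buncher's response dz* = 20.
z_star, dz_star = 100.0, 20.0
shift = math.log(z_star / (z_star + dz_star))   # r - r_ct, the same constant for every shifter

for z_ct in (150.0, 300.0, 1000.0):
    # Shifters scale z down by the same factor z*/(z* + dz*) everywhere...
    z_obs = z_ct * z_star / (z_star + dz_star)
    # ...so in logs the proportional response is an additive, location-independent shift.
    assert abs(math.log(z_obs) - math.log(z_ct) - shift) < 1e-12
```

This constancy is what makes the post-kink density a parallel shift of the counterfactual one in $r$-space.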
Hence, using the observed density distribution for shifters and for always-takers to fit a linear distribution and allowing an intercept change at the threshold r\u2217, we can still recover an unbiased counterfactual density dis27r is linear when z is exponentially distributed with a parameter within (0,1). In the data, variables often follow such a pattern under which there are more numbers of small values and only a few large values. One can plot the density of r \u2261lnz and check whether the density is close to linear in the estimation region. 28In terms of notations, we use r here to represent the equivalent terms of lnz. 33 tribution hct(r) in the presence of heterogeneity. In addition, based on the value of change at the threshold, we can recover the value of (\u03b1L\u2206r\u2217,L +\u03b1S\u2206r\u2217,S). Similarly, when the outcome distribution of y against r is linear, under the kinked policy the outcome distribution of shifters is a parallel shift of the counterfactual outcome distribution along the x-axis. We then have: y(rx) = \u03b1Lyr,L(rct,L x )+\u03b1Syr,L(rct,S x ) = \u03b1Lyr,L(rx \u2212\u2206r\u2217,L x )+\u03b1Syr,L(rx \u2212\u2206r\u2217,S x ) = yr(rx)\u2212dy dr \u00d7(\u03b1L\u2206r\u2217,L x +\u03b1S\u2206r\u2217,S x ) where dy dr is the slope of outcome distribution; and \u2212dy dr \u00d7 (\u03b1L\u2206r\u2217,L x + \u03b1S\u2206r\u2217,S x ) is the constant amount of outcome distribution shift for shifters. Given the observed outcome distribution for always-takers, we can recover dy dr. Given the estimated value of (\u03b1L\u2206r\u2217,L x + \u03b1S\u2206r\u2217,S x ) from the density distributions, we can obtain the value of dy dr \u00d7(\u03b1L\u2206r\u2217,L x +\u03b1S\u2206r\u2217,S x ), which allows us to obtain an unbiased estimate of the auxiliary outcome distribution for shifters. 
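A minimal sketch of this recovery step, using simulated data with hypothetical coefficients: when the density is linear in $r$ and the kink only changes the intercept at $r^*$, an OLS fit with a threshold dummy recovers both the counterfactual line and the size of the shift:

```python
import numpy as np

# Simulated setup (all coefficients hypothetical): counterfactual density hct(r) = a + b*r,
# and the observed density for shifters (r > r*) is shifted down by a constant c0.
a, b, c0, r_star = 5.0, -0.5, 0.8, 3.0
r = np.linspace(1.0, 5.0, 81)
h_obs = a + b * r - c0 * (r > r_star)          # intercept drop to the right of r*

# Linear fit with an intercept-change dummy at the threshold r*.
X = np.column_stack([np.ones_like(r), r, (r > r_star).astype(float)])
coef, *_ = np.linalg.lstsq(X, h_obs, rcond=None)

# coef[0], coef[1] recover the counterfactual line; -coef[2] recovers the shift,
# i.e., the quantity dh/dr * (alpha_L * dr_L + alpha_S * dr_S) in the text.
assert np.allclose(coef, [a, b, -c0])
```

The recovered jump `coef[2]` is what the text then feeds into the outcome-distribution adjustment.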
Combining the observed outcome distribution of always-takers and the auxiliary outcome distribution of shifters, we can follow the procedures in the main analysis to calibrate the structural parameters $(\mu,\lambda)$ and estimate the treatment effects. These estimates are unbiased because the auxiliary outcome distribution is unbiased. Meanwhile, the corresponding identifying assumptions, namely that the density distribution of $r \equiv \ln z$ and the outcome distribution are linear, are testable by directly examining the distribution figures. To sum up, in the presence of heterogeneity in $e$, there are several potential solutions according to the following scenarios: 1. If the dimensions of heterogeneity are known, we can split the whole sample into subsamples according to these determinants, as conducted in Best et al. (2015b). This allows estimation within subsamples with relatively homogeneous preferences. 2. If the density distribution $h_0(z)$ has a small slope and group heterogeneity is small, we can obtain a good approximation of the average bunching response and achieve a small aggregation bias in the estimation of the counterfactual density distribution. Furthermore, if the outcome distribution also has a small slope, we can obtain a good approximation of the average treatment effects. 3. If the density and outcome distributions of the logarithm transformation of $z$ are linear, we can obtain unbiased estimates of the counterfactual density distribution and the auxiliary outcome distribution, as well as of the average treatment effects. 4.4 Relabelling Faced with monetary incentives, agents may engage in misreporting or other relabelling behavior, causing their reported value of $z$ to differ from their real response. For example, in the study of tax incentives for R&D investment, Chen et al. (2021) point out that relabelling is an important channel through which firms adjust their R&D expenditure upwards to benefit from tax reductions.
To investigate whether and how relabeling may affect our causal analysis, we extend our proposed estimation approach to incorporate the relabelling behavior. Agents\u2019 optimal degree of relabelling is determined by the marginal cost (e.g., cost of cooking the books and potential risk of being caught, related to the cost function) and the marginal benefit of it (e.g., tax saving, related to the policy). We first consider a setting where agents share the same cost function and then extend our framework to a more general situation where different groups of agents may hold different cost functions (e.g. it is easier for self-employed to misreport their income than wage-earners). Specifically, we assume that relabeling cost depends on the absolute value and the relative degree of relabeling following Chen et al. (2021). That is, we assume c \u00d7 zrl \u00d7 g(\u03b4), where c is a fixed parameter; \u03b4 \u2261zrl\u2212zrp zrl summarizes the relabeling behavior by the agents; zrp,zrl are the reported and real values of z respectively; and g\u2032(\u03b4) > 0,g\u2032\u2032(\u03b4) > 0,g(0) = 0. Hence, the marginal cost of an additional degree of relabelling is czrlg\u2032(\u03b4). Consider the counterfactual linear policy with a low tax/co-payment rate at t. The benefit of relabelling is the money saved, i.e. (zrl,ct n \u2212zrp,ct n )t \u2261\u03b4 ct n zrl,ct n t, therefore, the marginal benefit of an additional degree of relabelling is zrl,ct n t. Recall the marginal cost of an additional degree of relabelling is czrl,ctg\u2032(\u03b4 ct n ). Agent n optimally chooses his/her degree of relabelling \u03b4 ct n by setting marginal benefit equalizing marginal cost, i.e., g\u2032(\u03b4 ct n ) = t c, which implies \u03b4 ct n = g\u2032\u22121( t c). Note g\u2032\u22121( t c) is constant for all agents, therefore, we can rewrite it as \u03b4 ct = g\u2032\u22121( t c),\u2200n. 
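To make the first-order condition concrete, here is a minimal sketch under an assumed quadratic cost $g(\delta) = \delta^2/2$ (an illustration only; the text leaves $g$ general), so that $g'(\delta) = \delta$ and $\delta = g'^{-1}(t/c) = t/c$:

```python
# Sketch under the assumed quadratic relabelling cost g(delta) = delta**2 / 2,
# so g'(delta) = delta and the FOC g'(delta) = rate/c gives delta = rate/c.
def optimal_relabeling(rate, c):
    """Optimal relabelling degree g'^{-1}(rate/c) under the quadratic-cost assumption."""
    return rate / c

# Hypothetical policy and cost parameters.
t, dt, c = 0.10, 0.05, 0.5
delta_ct = optimal_relabeling(t, c)            # under the counterfactual linear policy
delta_shifter = optimal_relabeling(t + dt, c)  # shifters face the higher rate t + dt

# A higher marginal rate raises the marginal benefit of relabelling, so delta rises.
assert delta_shifter > delta_ct
```

The same comparative static holds for any convex $g$: a higher marginal tax rate raises the optimal degree of relabelling.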
Next, consider the kinked policy, which sets a higher tax rate $t + \Delta t$ if $z_n > z^*$ and leaves the tax/co-payment rate unchanged at $t$ if $z_n \le z^*$. Similar to the analysis in Section 2, the introduction of the kinked policy divides agents into three groups. First, agents with $z^{rp,ct}_n \le z^*$ (i.e., always-takers) face no change in marginal incentives and set $z^{rp}_n = z^{rp,ct}_n \le z^*$ and $\delta = \delta^{ct} = g'^{-1}(\frac{t}{c})$. Second, agents with $z^{rp,ct}_n > z^* + \Delta z^*$ (i.e., shifters) face a change in the marginal benefit and adjust their optimal responses accordingly. Specifically, all shifters set $\delta = g'^{-1}(\frac{t+\Delta t}{c})$. Also, shifters change their reported value of $z$ by a constant percentage, with $\frac{z^{rp}_n}{z^{rp,ct}_n} = \frac{z^*}{z^* + \Delta z^*}$, where $\Delta z^*$ denotes the response in the reported value of $z$ by the marginal bunching agent. Third, agents with $z^{rp,ct}_n \in (z^*, z^* + \Delta z^*]$ (i.e., bunchers) also face a change in their marginal incentives but are subject to a corner solution. These agents bunch at the cutoff $z^{rp}_n = z^*$ and choose different optimal degrees of relabelling $\delta_n$, depending on how far their counterfactual value $z^{rp,ct}_n$ is from the cutoff $z^*$. Detailed proofs are shown in Appendix D.
To summarize, the optimal reported value $z^{rp}_n$, the optimal degree of relabelling $\delta_n$, and the optimal real value $z^{rl}_n$ under the kinked policy are given as

$$z^{rp}_n = \begin{cases} z^{rp,ct}_n & \text{if } z^{rp,ct}_n \le z^* \\ z^* & \text{if } z^{rp,ct}_n \in (z^*, z^* + \Delta z^*] \\ z^{rp,ct}_n \frac{z^*}{z^* + \Delta z^*} & \text{if } z^{rp,ct}_n > z^* + \Delta z^* \end{cases} \qquad (22)$$

$$\delta_n = \begin{cases} g'^{-1}\left(\frac{t}{c}\right) & \text{if } z^{rp,ct}_n \le z^* \\ \in \left(0,\, g'^{-1}\left(\frac{t+\Delta t}{c}\right)\right] & \text{if } z^{rp,ct}_n \in (z^*, z^* + \Delta z^*] \\ g'^{-1}\left(\frac{t+\Delta t}{c}\right) & \text{if } z^{rp,ct}_n > z^* + \Delta z^* \end{cases} \qquad (23)$$

$$z^{rl}_n = \frac{z^{rp}_n}{1-\delta_n} = \begin{cases} z^{rl,ct}_n & \text{if } z^{rp,ct}_n \le z^* \\ \frac{z^*}{1-\delta_n} & \text{if } z^{rp,ct}_n \in (z^*, z^* + \Delta z^*] \\ \frac{z^*}{z^* + \Delta z^*}\,\frac{1-g'^{-1}(t/c)}{1-g'^{-1}((t+\Delta t)/c)}\, z^{rl,ct}_n & \text{if } z^{rp,ct}_n > z^* + \Delta z^* \end{cases} \qquad (24)$$

The estimation of the treatment effect crucially depends on inferring the counterfactual density and outcome distributions, i.e., $h^{ct}(z)$ and $y^{ct}(z)$. Since all shifters adjust their reported (observed) value of $z$ by the same percentage, we can apply the same algorithm as in the baseline analysis to recover the counterfactual density distribution of the reported $z$ and the reported marginal buncher's response. Hence, $h^{ct}(z)$ and $\Delta z^*$ are unbiasedly estimated.
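The three-case mapping in Equation (22) can be sketched directly (the numbers in the checks are hypothetical, chosen to match the application's 2011 cutoff of 600 for readability):

```python
def reported_z(z_rp_ct, z_star, dz_star):
    """Reported value under the kinked policy, following the three cases of Eq. (22)."""
    if z_rp_ct <= z_star:
        return z_rp_ct                              # always-takers: unchanged
    if z_rp_ct <= z_star + dz_star:
        return z_star                               # bunchers: pile up at the cutoff
    return z_rp_ct * z_star / (z_star + dz_star)    # shifters: scale down by a constant factor

# Hypothetical cutoff z* = 600 and marginal buncher's response dz* = 100.
assert reported_z(500.0, 600.0, 100.0) == 500.0     # always-taker
assert reported_z(650.0, 600.0, 100.0) == 600.0     # buncher
assert abs(reported_z(1400.0, 600.0, 100.0) - 1200.0) < 1e-9  # shifter: 1400 * 600/700
```

Because the shifter branch is a single constant rescaling, the baseline recovery algorithm applies to the reported distribution unchanged, as the text notes.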
Therefore, we can locate the agents back to their counterfactual locations and compare the same agents under the kinked policy and the counterfactual policy, giving us unbiased estimates of treatment effects on shifters and bunchers.29 However, even though the estimation procedures on marginal bunching response and the treatment effects on shifters and bunchers are still correct under potential relabelling or misreporting, given that the real responses are smaller, we would anticipate a reduction in the magnitude of the impacts. Redefine \u00b5 \u2261 \u2206y \u2206zrl/zrl . Mathematically, the reasons are as follows: \u03c4TE,shi fter y = E[yn \u2212yct n |n \u2208shifters] = \u00b5 \u0012 z\u2217 z\u2217+\u2206z\u2217 1\u2212g\u2032\u22121( t c) 1\u2212g\u2032\u22121(t+\u2206t c ) \u22121 \u0013 \u2212\u03bbE(zrp,ct n ) \u0012 (t +\u2206t) z\u2217 z\u2217+\u2206z\u2217\u2212t \u0013 +\u03bb\u2206t \u00d7z\u2217 The last equality is based on the assumption that agents have the same preferences (and parameters). Recall we use the slope and level changes at the threshold z\u2217when comparing the observed outcome of always-takers and the obtained auxiliary outcome of shifters (when being located back to the counterfactual locations). Accordingly, the equations for calibrating our structural parameters \u00b5,\u03bb would change. 
Specifically, Equation (10) for the slope change would remain the same, but Equation (9) for the level change would become:

$$\text{Level Change at } z^* = \mu\left(\frac{z^*}{z^* + \Delta z^*}\,\frac{1-g'^{-1}(t/c)}{1-g'^{-1}((t+\Delta t)/c)} - 1\right) - \lambda(t+\Delta t)z^*\left(\frac{z^*}{z^* + \Delta z^*} - 1\right) \qquad (25)$$

Under potential relabelling, to calibrate the parameters $\mu, \lambda$ we need an additional moment to identify the relabelling cost parameter $c$.29 One possibility is to exploit variations in the changes in marginal incentives across different thresholds. While the previous analysis assumes that all agents share the same cost function of relabelling (i.e., $c \times z^{rl} \times g(\delta)$, where $c$ is constant across agents), relabelling cost functions may in reality differ across agents due to differences in predetermined characteristics. If we know how to classify agents into subgroups with a homogeneous relabelling cost function within each subgroup, we can analyze each subgroup separately, and there is no bias in each subgroup estimation. However, without knowledge of which agents belong to which group, we have to conduct the analysis using the full sample. In this scenario, the estimated average response level $\widetilde{\Delta z^*}$ based on the single empirical moment $B$ contains the potential aggregation bias, which is the same aggregation bias as discussed in Subsection 4.3 on heterogeneity in the response elasticity. The solution to the situation with heterogeneous relabelling costs across agents is the same as the solutions to heterogeneous preferences in Subsection 4.3. 29 Note that estimating the counterfactual outcome distribution is based on the observed distribution of always-takers and extrapolation via assumptions on a smooth counterfactual outcome distribution. It does not require information on the observed outcome distribution of shifters. Therefore, it is also unbiased.
4.5 Diffusion In our analyses above, we consider diffusion behavior for bunchers; that is, there is no sharp bunching exactly at the kink point because bunching agents cannot target the kink point precisely. One may then be concerned whether diffusion also occurs for other agents. Specifically, shifters adjust their values of $z$ when a kinked policy is introduced, and may not target precisely either.30 Do shifters' diffusion behaviors bias our estimated counterfactual density distribution and hence the causal estimates? In this subsection, we discuss sources of potential biases in the estimated counterfactual density distribution, excess bunching, the counterfactual outcome distribution, and the treatment effect when there is diffusion for both shifters and bunchers.31 Denote the observed effort choice as $z$ and the optimal targeted effort choice as $z^{targeted}$, with $z_i = z^{targeted}_i + \varepsilon_i$, where $\varepsilon_i$ denotes the degree of diffusion for agent $i$. Hence, $\varepsilon_i > 0$ indicates overshooting, $\varepsilon_i < 0$ indicates undershooting, and $\varepsilon_i = 0$ indicates precise targeting. We start with the case in which the degree of diffusion for each shifter is a random draw from a common i.i.d. distribution, and then investigate the case in which different groups of agents draw their diffusion degrees from different i.i.d. distributions. 30 Note there is over-shooting and under-shooting at each point of $z$; therefore, we may not observe excess bunching in the shifters' distribution even if shifters are subject to diffusion. 31 The general practice to deal with diffuse bunching alone is to treat the overall amount of excess bunching around the kink point as the policy response (e.g., Saez, 2010). Under the first scenario, the degree of diffusion for each shifter is a random draw from the same distribution $g(\varepsilon)$ with mean $\mu(\varepsilon) = 0$ and variance $Var(\varepsilon) = \sigma^2$.
The observed density distribution can be written as h(z) = R \u03b5 hct\u0000(z \u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 g(\u03b5)d\u03b5, where hct() denotes the counterfactual density distribution and z\u2217 z\u2217+\u2206z\u2217denotes the marginal buncher\u2019s relative change in z under the kinked policy. Hence, bias arises from diffusion as hct\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u0338= hct\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 . Specifically, the bias in the estimated counterfactual density is small when Var(\u03b5i) is small (i.e., less degree of diffusion) or when the slope of hct() is small. To illustrate this point, consider a special situation where \u03b5i = 0 with 60% probability, \u03b5i = \u03b5 with 20% probability, and \u03b5i = \u2212\u03b5 with 20% probability. Then, we have h(z) = hct\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 \u221760%+hct\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u221720%+hct\u0000(z+\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u221720%. If hct\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u2248hct\u0000(z+\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u2248hct\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 , there is no bias even if we ignore shifters\u2019 diffusion (i.e., by assuming h(z) = hct\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 ). Hence, when Var(\u03b5i) is small or the slope of h0() is small, we have less bias when ignoring shifters\u2019 diffusion. Similarly, the outcome distribution is y(z) = R \u03b5 yr\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 g(\u03b5)d\u03b5, where yr() denotes the auxiliary outcome distribution (by locating shifters back to their counterfactual locations). The bias from diffusion is due to that yr\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 z\u2217 \u0001 \u0338= yr\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 . 
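The three-point diffusion example above can be checked numerically. Note that the text's "small slope" condition becomes exact here: with a symmetric, mean-zero diffusion and an exactly linear $h^{ct}$, the over- and under-shooting terms cancel and the bias is precisely zero (all parameter values below are hypothetical):

```python
# Three-point diffusion from the text: eps = 0 w.p. 0.6, +e w.p. 0.2, -e w.p. 0.2.
# Sketch assuming a locally linear counterfactual density hct(u) = a + b*u.
a, b = 2.0, -0.3
e = 0.5                 # diffusion step
scale = 1.25            # (z* + dz*) / z*, the shifters' rescaling factor

def hct(u):
    return a + b * u

def h_obs(z):
    """Observed shifter density: mixture over the three diffusion outcomes."""
    return (0.6 * hct(z * scale)
            + 0.2 * hct((z - e) * scale)
            + 0.2 * hct((z + e) * scale))

# For linear hct, the +e and -e terms average back to hct(z * scale) exactly,
# so ignoring shifters' diffusion introduces no bias in this case.
assert abs(h_obs(4.0) - hct(4.0 * scale)) < 1e-12
```

With a curved $h^{ct}$ or a large diffusion variance the cancellation is only approximate, matching the text's conditions for a small bias.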
When Var(\u03b5i) is small (i.e., less degree of diffusion) or when the slope of yr() is small, the bias in the estimated auxiliary outcome distribution (when ignoring the diffusion) is small. Note that the treatment effects on shifters and bunchers depend on the estimation of the counterfactual density, the auxiliary outcome distribution, and the counterfactual outcome distribution. Hence, the bias from diffusion is small when (i) Var(\u03b5i) is small or (ii) when the slopes of hct and yr are small. In practice, one can check the diffusion variance by exploring the diffusion pattern around the cutoff, check the slope of hct by exploring the slope of the density for always-takers, and check the slope of the auxiliary outcome distribution yr of shifters to understand the potential degree of bias. Then, we consider a more general setting in which different groups of agents randomly draw their degree of diffusion from different distributions. For example, the self-employed are better at targeting their annual income at the cutoff than the wage-earners. Specifically, assume there are M groups of agents at each value of z in the counterfactual state (i.e., the linear policy), with the share of each group denoted as \u03b1m. Each agent i belonging to group m randomly draws his/her degree 39 of diffusion \u03b5i from the density distribution gm(\u03b5), with mean value \u00b5m(\u03b5) = 0 and variance as Varm(\u03b5) = \u03c32 m. The observed density distribution is shown as h(z) = \u2211m \u03b1m R \u03b5 hct\u0000(z \u2212\u03b5)z\u2217+\u2206z\u2217 m z\u2217 \u0001 gm(\u03b5)d\u03b5. 
The bias from diffusion is due to two reasons: first, hct\u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 m z\u2217 \u0001 \u0338= hct\u0000zz\u2217+\u2206z\u2217 m z\u2217 \u0001 ; and second, \u2211m \u03b1mhct\u0000zz\u2217+\u2206z\u2217 m z\u2217 \u0001 \u0338= hct\u0000zz\u2217+\u2206z z\u2217 \u0001 , where \u2206z is the estimated marginal buncher\u2019s response when ignoring preference heterogeneity. That is, biases come from neglecting the shifters\u2019 diffusion and neglecting the heterogeneity in the structural parameter. When \u2211m \u03b1m\u03c3m is small (i.e., the average dispersion in the degree of diffusion is small) and when the slope of hct() is small, we are back to the scenario with preference heterogeneity discussed in Section 4.3. The observed outcome distribution is y(z) = 1 \u2211m \u03b1m \u2211m \u03b1m R \u03b5 yr m \u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 m z\u2217 \u0001 gm(\u03b5)d\u03b5, where yr m() denote the auxiliary outcome distribution of group m. similarly, the bias from diffusion is generated due to two reasons: first, yr m \u0000(z\u2212\u03b5)z\u2217+\u2206z\u2217 m z\u2217 \u0001 \u0338= yr\u0000zz\u2217+\u2206z\u2217 m z\u2217 \u0001 ; second, 1 \u2211m \u03b1m \u2211m \u03b1myr m \u0000zz\u2217+\u2206z\u2217 m z\u2217 \u0001 \u0338= yr\u0000zz\u2217+\u2206z\u2217 z\u2217 \u0001 . Therefore, when \u2211m \u03b1m\u03c3m is small and when the slope of yr m() is small, we are back to the scenario with heterogeneity, which is discussed in Subsection 4.3. 4.6 Alternative Counterfactual Policy: linear high tax/co-payment rate Our baseline analysis assumes that the counterfactual policy is a linear low tax/co-payment rate, i.e., the same as the policy below the cutoff. As agents with values above the cutoff face a higher marginal tax/co-payment rate under the kinked policy, they will reduce their value, leading to a \u201cbunching down\u201d design. 
Meanwhile, agents with values below the cutoff face the same marginal incentive and pay the same amount of fees under the kinked policy. They are denoted as always-takers. Accordingly, we have proposed an estimator for quantifying the impact of the kinked policy on these agents: bunchers and shifters. Alternatively, we might consider an alternative counterfactual policy with a linear high tax/co-payment rate $(t + \Delta t)$. Compared to this new counterfactual policy, agents below the cutoff face a lower tax/co-payment rate under the kinked policy and hence adjust their values of $z$ upwards, leading to a "bunching up" design. Meanwhile, as agents above the cutoff face the same marginal incentive under the kinked policy, they won't change their values of $z$. We denote them as never-takers. However, in terms of outcomes, because never-takers do receive a lump-sum transfer under the kinked policy (compared to the new counterfactual policy),32 their outcome values might change. This constitutes the key difference between analyzing the policy impacts under the "bunching up" and "bunching down" designs. Specifically, we cannot take the observed outcome distribution of never-takers as the new counterfactual outcome distribution; instead, we need to adjust for the impact of the lump-sum transfer. This is feasible because the parameter $\lambda$ captures the impact of money $T$ on outcome $y$, and we also know the size of the money change (i.e., $\Delta t \times z^*$). Therefore, with modifications, we can still use the level change and the slope change at the cutoff $z^*$ between the observed outcome distribution of never-takers and the auxiliary outcome distribution of shifters to calibrate the parameters $\mu, \lambda$ and estimate the policy impacts (after addressing the impact of the lump-sum transfer). Details are shown in Appendix B. 32 Denote the new counterfactual policy as $T^{ct,new}(z) = (t + \Delta t)z$. For agents above the cutoff, under the kinked policy we have $T(z) = (t + \Delta t)z - \Delta t z^*, \forall z > z^*$. Therefore, the lump-sum transfer between the kinked policy and the new counterfactual policy is $T^{ct,new}(z) - T(z) = \Delta t z^*, \forall z > z^*$.
One thing to note is that, in the \u201cbunching up\u201d setup, we assume that the impact from the lump-sum transfer shares the same parameter \u03bb. This assumption is more likely to be valid when the level change \u2206t \u00d7z\u2217is relatively small. 5 Application: Coinsurance Policy In China We apply our aforementioned bunching technique to identify the causal impacts of the coinsurance policy on the patients\u2019 outpatient behaviors in China. Specifically, we first introduce the healthcare system in China and the medical claim data for our empirical analysis. Next, we present the bunching evidence to examine patients\u2019 responses to the coinsurance policy. Then, we apply our causal inference framework to study the policy effect. 5.1 Healthcare System in China China established the current health insurance system since the late 1990s, and gradually achieved universal health insurance coverage. The Urban Employee Basic Medical Insurance (UEBMI) was first introduced in 1998, covering formal sector workers in the urban area. This was followed by the gradual introduction of the New Cooperative Medical Scheme (NRCMS) during the period of 2003-2008 targeting the rural population, and then the Urban Resident Basic Medical Insurance (URBMI) launched in 2007 targeting urban residents who were not covered by the UEBMI (i.e., the unemployed, children, students and the disabled in urban areas). Starting in 2010, the Chinese under the kinked policy, we have T(z) = (t +\u2206t)z\u2212\u2206tz\u2217,\u2200z > z\u2217. Therefore, the lump-sum transfer between the kinked policy and the new counterfactual policy is T ct,new(z)\u2212T(z) = \u2206tz\u2217,\u2200z > z\u2217. 41 government gradually integrated NRCMS and URBMI and established a unified Urban and Rural Residents Basic Medical Insurance Scheme (URRBMI) to bridge the gap in medical care between rural residents and urban residents who are not working. 
These basic health insurance programs (i.e., URRBMI and UEBMI) expanded at a remarkable pace, covering more than 92% of the urban population and 97% of the rural population in 2011 in China (Yu, 2015).33 The benefits depend on the medical insurance catalogs and the program's cost-sharing design. Specifically, the medical insurance catalogs specify the payment scopes and prices of drugs, items of diagnosis treatment, and standards of medical service facility, which are the same for both UEBMI and URRBMI. The cost-sharing design consists of the deductibles, co-payment rates ($\tau$), and the maximum amounts payable ($z^*(1-\tau)$), which are designed separately for outpatient and inpatient care, vary across different tiers of hospitals, and differ between the UEBMI and URRBMI schemes. Specifically, the insurance benefits (Benefits) and hence the annual out-of-pocket expenses (Out-of-Pocket) under the insurance scheme are:

$$\text{Benefits} = \begin{cases} z \times (1-\tau) & \text{if } z \le z^* \\ z^* \times (1-\tau) & \text{if } z > z^* \end{cases} \qquad (27)$$

$$\text{Out-of-Pocket} = \begin{cases} z \times \tau & \text{if } z \le z^* \\ z - z^* \times (1-\tau) & \text{if } z > z^* \end{cases} \qquad (28)$$

where $z$ denotes the annual medical expenses eligible for insurance coverage (i.e., annual medical expenses within the medical insurance catalog with the total deductibles subtracted); $z^*$ denotes a statutory cutoff; and $\tau$ denotes the co-payment rate (hence $1-\tau$ denotes the reimbursement rate when $z < z^*$). The values of $z^*$ and $\tau$ depend on the insurance scheme, with a lower reimbursement rate (hence a larger co-payment rate $\tau$) and a lower maximum amount payable (i.e., a smaller threshold $z^*$) under the URRBMI, compared to the UEBMI.
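The cost-sharing formulas in Equations (27)-(28) can be sketched directly; the checks below use the 2011 URRBMI numbers reported later in the text (cutoff $z^* = 600$ RMB, reimbursement rate $1-\tau = 50\%$, so $\tau = 0.5$):

```python
def benefits(z, z_star, tau):
    """Insurance benefit per Eq. (27): reimbursement of (1 - tau) per yuan, capped at z*(1 - tau)."""
    return (z if z <= z_star else z_star) * (1 - tau)

def out_of_pocket(z, z_star, tau):
    """Out-of-pocket expense per Eq. (28): full marginal cost above the cutoff z*."""
    return z * tau if z <= z_star else z - z_star * (1 - tau)

# 2011 URRBMI parameters from the text: z* = 600 RMB, tau = 0.5.
assert benefits(400.0, 600.0, 0.5) == 200.0        # below the cutoff: half is reimbursed
assert out_of_pocket(400.0, 600.0, 0.5) == 200.0
assert benefits(1000.0, 600.0, 0.5) == 300.0       # above the cutoff: benefit is capped
assert out_of_pocket(1000.0, 600.0, 0.5) == 700.0
# Benefits and out-of-pocket expenses always sum to total eligible expenses z.
assert abs(benefits(1000.0, 600.0, 0.5) + out_of_pocket(1000.0, 600.0, 0.5) - 1000.0) < 1e-12
```

Note the kink: the marginal co-payment rate jumps from $\tau$ to 100% at $z^*$, which is the incentive driving the bunching analyzed below.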
33The premiums for UEBMI are usually determined by the employee\u2019s average monthly wages in the previous year and are jointly borne by the employer and the employees concerned. It is usually 2% of the salary for employees and 6% of the salary for employers. As for URRBMI, a large portion of the premiums are subsidized by the government, with enrollees contributing a small part. 42 5.2 Data and Analysis Sample Our empirical analysis draws on a dataset covering the universe of visit-level outpatient medical claims in a city in the eastern part of China of all the enrollees under the city\u2019s public health insurance programs in 2011 and 2012. There were around 26 million residents (99% of urban unemployed residents and 100% of rural residents who are not employees) enrolled under the URRBMI and around 21 million (98.6% of urban employees ) enrolled under the UEBMI. Our medical claim data contain approximately 19 million outpatient visits in 2011 and more than 21 million outpatient visits in 2012. For each visit, the data provide detailed information regarding expenditures on the drugs, diagnosis, and treatment, the type of insurance, the eligible expenditure, and patient ID. For each patient, we aggregate the visit-level medical expenditure data to the annual level to obtain annual eligible expenditures and the total number of visits in a year. The cutoff of annual reimbursement (z\u2217) under the URRBMI was 600 RMB in 2011 and 800 RMB in 2012, respectively, and the reimbursement rate (\u03b4) is 50% at the Tier 1 community health services institutions and 40% at the Tier 2 and 3 hospitals. By contrast, the upper bound of annual reimbursement (z\u2217) under the UEBMI is much higher: at 2500, 3000, 3500, 4500 RMB in 2011 (depending on whether the patient is on-the-job or retired and whether the disease is chronic or not) and at 3500, 4000, 4500, 5500 RMB in 2012. 
The reimbursement rate (\u03b4) under the UEBMI is also higher: 70% for on-the-job workers and 85% for retired workers. Details of the medical insurance plan are shown in Table 1. Given the policy complexity in the UEBMI, we focus our empirical analysis on the sample of the URRBMI which contains one policy threshold each year, and use the sample of the UEBMI for placebo analyses to support our empirical identification. [Insert Table 1 Here] 5.3 Bunching Evidence To examine whether patients respond to the medical expenses deduction limits, we first plot the density distribution of annual eligible expenses (z) for patients under the URRBMI. Results are shown in Figure 4a for 2011 and Figure 4b for 2012, respectively. There is a clear bunching at the policy threshold for both figures; that is, a significant and sharp bunching mass at 600 for 2011 and 800 for 2012. These results suggest that consistent with our theoretical analysis in Section 2, 43 patients comply with the kinked policy by optimally choosing their medical consumption. Meanwhile, the fact that there is no excess bunching at the 2011 threshold of 600 in 2012 indicates that patients incur low adjustment costs when the threshold changes, relieving the concern of stayers. To alleviate the concern that bunching at the policy thresholds in Figures 4a-4b may be spurious due to other confounding factors, we repeat the analysis for the sample of patients under the UEBMI in Figure A1 in Appendix A. Given that the policy thresholds of the UEBMI were much higher than those of the URRBMI, we should not expect any bunching mass at the policy thresholds of the URRBMI. Indeed, we do not spot any bunching behavior at 600 in 2011 and at 800 in 2012. These findings lend support to our argument that patients indeed adjust their medical expenses in response to the kinked reimbursement policy. 
Another common concern with bunching analyses is whether the adjustment is real or merely a relabelling behavior, which may generate estimation complexity and potential biases as illustrated in Section 4.4. For example, studies detect a certain share of bunching responses due to relabelling in settings where agents self-report the values (e.g., Saez, 2010; Chen et al., 2021). However, in our setting, the eligible medical expenses are not self-reported. Instead, the numbers are aggregated from visit-level medical transactions, and hence relabelling or misreporting is very unlikely in this setup. [Insert Figure 4 Here] 5.4 Causal Impacts on Patient Behavior We have shown that patients adjust their eligible expenses to take advantage of the kinked reimbursement scheme, resulting in excess bunching at the threshold of the reimbursement limit. We now explore the potential impacts of such adjustments on patients\u2019 outpatient behaviors. 5.4.1 Stylized Facts Before a formal estimation of the causal impacts, we first plot the raw relation between eligible annual medical expenses (z) and medical outcomes (y) to gain some direct evidence. We consider whether a change in the co-payment rate could affect patients\u2019 choice of outpatient visits. Figure 5 reports the total number of outpatient visits at each bin level of eligible annual expenses for outpatients under the URRBMI. Green triangles represent the distribution for 2011 and blue squares represent the distribution for 2012, where the sizes of the triangles and squares are proportional to the sample size in each bin for the corresponding group. Let us first study the distribution for 2012, represented by the blue squares. To the left of the policy threshold, there is a clear upward relation between the total number of outpatient visits and eligible annual expenses. 
This trend carries on to the policy threshold and is then followed by a significant drop in the total number of visits, which becomes much flatter afterward. Meanwhile, the total number of visits to the left of the policy threshold is overall larger than that to the right of the policy threshold. These results provide direct visual support for our theoretical analysis in Section 2: compared to patients located to the left of the threshold (with a marginal co-payment rate of \u03c4), patients to the right of the threshold respond to the increase in the co-payment rate (to 100%) by making fewer outpatient visits. The distribution for 2011 shows a similar pattern to that for 2012, in which values to the left of the policy threshold are generally larger than those to the right of the policy threshold. These results further support the theoretical framework elaborated in Section 2, namely that changes in the co-payment rate significantly impact patients\u2019 decisions on outpatient visits. Combining the distributions for 2011 and 2012, it is interesting to note that jumps only happen at the corresponding policy thresholds. Specifically, there is no clear jump at 600 in the 2012 distribution, when the policy threshold was at 800, and vice versa. These results are consistent with the bunching behavior in Figure 4, which further confirms that the manipulation behavior and its impacts were induced by the kinked policy. To further alleviate the concern of spurious responses due to other factors, we examine the 2011 and 2012 distributions for patients under the UEBMI. As the policy thresholds under the UEBMI were much higher than those under the URRBMI in both 2011 and 2012, we should not expect any significant behavioral changes around 600 or 800. 
As shown in Figure A2 in Appendix A, we find smooth relations between the total number of outpatient visits and eligible expenses throughout the whole region in both years, with no systematic changes below or at the placebo policy thresholds. These results lend further support to the argument that patients adjust their number of outpatient visits in response to the kinked medical insurance plan. [Insert Figure 5 Here] 5.4.2 Counterfactual Density Distribution A crucial element in formally estimating the magnitude of the policy impact is the counterfactual density distribution; that is, the density distribution under the counterfactual linear scheme with the low co-payment rate. To this end, we estimate the counterfactual density distribution following our proposed method in Section 3.1 and compare it with the commonly used approach by Chetty et al. (2011) (see footnote 34). Figure 6a shows the observed density distribution h(\u00b7) and our estimated counterfactual density distribution hct(\u00b7) based on outpatients under the URRBMI in 2012 (see footnote 35). Specifically, the solid green curve represents the observed density distribution, and the dashed red curve represents the estimated counterfactual density distribution. Meanwhile, the solid vertical line indicates the policy threshold (the annual reimbursement limit z\u2217), the long vertical dashed line in the upper part of the density distributions shows the estimated marginal buncher\u2019s response \u2206z\u2217, and two short dashed vertical lines around the threshold specify the diffuse range that is visually determined. 
[Insert Figure 6 Here] Three groups of patients under the kinked reimbursement policy are clearly shown in the figures: (a) always-takers with counterfactual expenses zct \u2264 z\u2217 remain unchanged with z = zct and are located to the left of the threshold; (b) bunchers with counterfactual expenses zct \u2208 (z\u2217, z\u2217 + \u2206z\u2217] adjust their expenses downwards and bunch at the threshold, i.e., z = z\u2217; (c) shifters with counterfactual expenses zct > z\u2217 + \u2206z\u2217 reduce their expenses to z = zct \u00d7 z\u2217/(z\u2217 + \u2206z\u2217), resulting in a leftward shift of the counterfactual density distribution, and are located to the right of the threshold z\u2217. In magnitude, the estimated marginal buncher\u2019s response \u2206z\u2217 is 260 RMB and significant at the 1% level. This number indicates that the counterfactual values of annual eligible expenses are around 1.3 times the observed values under the kinked policy for the marginal bunching agents and the shifting agents (i.e., zct/z = (z\u2217 + \u2206z\u2217)/z\u2217 = 132.5%). It is worth noting that the counterfactual density distribution to the left of the policy threshold is not an upward parallel shift of the observed density distribution. This is because patients with different counterfactual expenses shift leftwards by different magnitudes in response to the kinked policy, as elaborated in Section 2, resulting in the counterfactual and observed density distributions having different shapes in the region to the left of the threshold. This is in contrast with the assumption under the estimation framework of Chetty et al. (2011). (Footnote 34: We control for the reference point effect using the method in Section 4.1. Footnote 35: The results remain similar if we use the 2011 sample, as shown in Figures 4 & 5 earlier. For illustration purposes, we focus on the 2012 sample hereafter.) 
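The three-group mapping from counterfactual to observed expenses follows directly from the formulas above and can be sketched as (a minimal illustration; the function name is ours, while z* = 800 and the marginal buncher's response of 260 RMB are the 2012 estimates reported in the text):

```python
def observed_expense(z_ct, z_star=800.0, dz_star=260.0):
    """Map a patient's counterfactual annual expense z_ct (under the
    linear low co-payment scheme) to the observed expense under the
    kinked policy, following the three-group logic in the text."""
    if z_ct <= z_star:                   # (a) always-takers: unchanged
        return z_ct
    elif z_ct <= z_star + dz_star:       # (b) bunchers: pile up at the kink
        return z_star
    else:                                # (c) shifters: proportional shift
        return z_ct * z_star / (z_star + dz_star)

# For shifters, counterfactual/observed = (z* + dz*)/z* = 1060/800 = 1.325,
# matching the 132.5% ratio in the text.
assert observed_expense(500) == 500
assert observed_expense(900) == 800
assert abs(observed_expense(1325) - 1000.0) < 1e-9
```

Inverting this mapping for the shifter branch is what allows the counterfactual density to the right of the threshold to be recovered from the observed one.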
Specifically, by assuming an upward parallel shift from the observed to the counterfactual density distribution, estimates following Chetty et al. (2011) end up overestimating the marginal buncher\u2019s response. As shown in Figure 6b, the estimated marginal buncher\u2019s response \u2206z\u2217 is 400 RMB, which is larger than our estimate of 260 RMB. 5.4.3 Estimation of Policy Impacts and Structural Parameters We now use the causal estimation framework proposed in Section 3.2 to quantify the impact of the kinked medical reimbursement scheme on outpatient behaviors for shifting and bunching patients separately. Figure 7 plots the empirical results using the 2012 outpatient data with the policy threshold at 800. Consistent with the layout of Figure 6, the solid vertical line indicates the policy threshold, the long vertical dashed line in the upper part of the distribution shows the estimated marginal buncher\u2019s response \u2206z\u2217, and two short dashed vertical lines around the threshold specify the diffuse range. Green dots represent the observed distribution of the total number of annual outpatient visits (in logarithm) (y) against eligible annual expenses (z). Blue dots represent the auxiliary outcome distribution for shifting patients when we locate shifters back to their counterfactual locations of z. Following Equation (20) in Section 3.2, we obtain the counterfactual outcome distribution (represented by the red dashed curve) and calibrate the structural parameters (\u00b5, \u03bb) as shown in columns 1 & 2 of Table 2. It indicates that when eligible annual expenses z increase by 1%, the number of annual outpatient visits increases by 14.384, significant at the 1% level, capturing the direct impact of changes in z; meanwhile, when the annual out-of-pocket payment increases by 100 RMB (as a result of changes in z), the number of annual outpatient visits increases by 1.9, significant at the 1% level, capturing the indirect impact of changes in z. 
As discussed in Section 2, the introduction of the kinked policy leads to a reduction in z for shifting patients and bunching patients; given that the estimated values satisfy \u02c6\u00b5 > 0 and \u02c6\u03bb > 0, we would therefore anticipate a negative effect on the number of annual outpatient visits for both shifters and bunchers. To verify this, we can compare the observed outcome distribution (green dots) and the counterfactual outcome distribution (red dashed line) in Figure 7. There are significant decreases in the number of outpatient visits for shifting agents (those with zct \u2208 (1060, 1275) and z \u2208 (800, 980)) and a substantial decrease for bunching agents (those with zct \u2208 (800, 1060] and z = z\u2217). In terms of the economic magnitude, column 3 of Table 2 shows that the policy effect for shifting agents is -2.110, significant at the 1% level, implying that the kinked medical insurance policy causes shifting agents to make around two times fewer outpatient visits, compared to the counterfactual linear policy with a low co-payment rate. The estimation results for bunchers are shown in column 4 of Table 2. We find a negative average treatment effect on bunching patients as well, although the magnitude is smaller because bunching patients encounter a smaller reduction in z compared to shifting patients. [Insert Table 2, Figure 7 Here] 5.4.4 Heterogeneous Impacts Patients in different age groups may respond heterogeneously to the kinked reimbursement scheme. We next split the full sample into three subgroups based on patients\u2019 age at the time of treatment and explore the heterogeneous impacts. Figure 8 compares the degree of bunching for the three subgroups: children (patients aged under 15), middle-aged adults (patients aged between 16 and 54), and elders (patients aged above 55). We find excess bunching in all three subgroups, indicating that patients indeed adjust their eligible expenses in response to the kinked medical insurance scheme. 
In addition, we find similar levels of excess bunching for all age groups, with the marginal buncher\u2019s response at 260 RMB. Then, we move on to the policy impact on the number of annual outpatient visits. Figure 9 shows the observed and counterfactual outcome distributions for each subgroup. Table 3 shows the calibrated parameters and the estimated policy impact for each subgroup. Bunching patients and shifting patients of all age groups decreased their number of outpatient visits when the co-payment rate increased due to the kinked policy. The impact is slightly larger on patients aged between 16 and 54, compared to the other groups. The consistency in results indicates that financial incentives matter for patients\u2019 outpatient behaviors across all age groups. One thing to note is that the estimated causal impact on the full sample is close to the weighted average of the causal impacts on these subgroups. This is consistent with our discussion in Section 4.4 that when the heterogeneity in bunching responses is relatively small, there is very limited bias when locating shifting agents back to their original locations under the homogeneous parameter assumption. Therefore, the average bunching response, average calibrated structural parameters, and average causal effects under the homogeneous approach are a close approximation to the average estimates of each subgroup when heterogeneity is taken into consideration. 5.5 Alternative Policies: Changes in Thresholds or Co-payment Rates Given the calibrated values of the structural parameters (\u00b5, \u03bb, e) and our understanding of patients\u2019 behavior under kinked policies, we can study the impact of alternative policy designs by varying the location of the kink and by changing the difference in co-payment rates below and above the threshold. 
These analyses could shed light on policy design by exploring questions like these: fixing the overall cost of medical insurance, what kind of policy design generates the best outcomes for the overall population? Further, who benefits from such a policy? Our approach allows us to conduct certain welfare analyses using a reduced-form approach; however, we note that the analysis rules out potential price changes due to general equilibrium effects (e.g., changes in patient behavior might affect the price of seeing a doctor). As an illustrative example, we analyze the impact of increasing the cutoff z\u2217 from 600 RMB to 800 RMB on the medical insurance burden and the overall number of outpatient visits. When the cutoff increases, patients in the middle of the distribution of annual eligible expenses (z) would see a surge in outpatient visits due to the reduction in the co-payment rate, while other patients remain the same. This is consistent with our findings in Panel A of Table 4, where the overall number of outpatient visits and the insurance burden increase as the cutoff moves rightwards. Our current policy imposes a 100% co-payment rate once expenses exceed the cutoff (i.e., z > z\u2217). If we are willing to reduce the co-payment rate for z > z\u2217, then, to maintain a constant insurance burden, we need to reduce the threshold, subjecting more patients to the higher marginal co-payment scheme. Which policy is better: a higher cutoff with a larger jump in the co-payment rate at the cutoff, or a lower cutoff with a smaller jump in the co-payment rate at the cutoff? This question would be of more interest if there were relabelling or misreporting (see footnote 36). In our setup, there is no misreporting. Here we study the distributional impact, measured by the dispersion of patient outcomes. 
If we impose a high cutoff with a large jump in the co-payment rate across the cutoff (denoted as Policy I), then a small group of patients with high medical demand would see a big reduction in outpatient visits. By contrast, if we impose a low cutoff with a small jump in the co-payment rate across the cutoff (denoted as Policy II), then a large group of patients with medium to high medical demand would see a mild reduction in outpatient visits. Both Policy I and Policy II affect outpatient behaviors, but Policy II imposes a more balanced impact than Policy I. This is shown in Panel B of Table 4, where Policy II has a lower dispersion of outpatient expenses and visits, holding the insurance burden (total reimbursement) constant. (Footnote 36: Chen et al. (2021) pointed out that when there is misreporting, a larger threshold is better at stimulating R&D expenses using tax incentives, as the share of firms who misreport is smaller.) These distributional analyses could be of interest to policymakers in various setups where a kinked policy is relevant. 6 Conclusion In this paper, we develop a reduced-form estimator for identifying treatment effects in kink settings where agents manipulate or adjust the values of the assignment variable in response to the non-linear policy. The method is model-free and makes use of agents\u2019 interior response behavior. Specifically, under kinked settings, agents to one side of the cutoff face a change in marginal incentives and adjust their assignment variable by a constant share. Such interior responses allow us to recover the counterfactual density and outcome distributions, which facilitates the estimation of treatment effects on bunching agents and shifting agents. Extensions with diffuse bunching, rounding in assignment variable values, potential misreporting/relabelling, optimization frictions, and heterogeneity in structural parameters are also explored. 
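The budget tradeoff between the cutoff location and the above-cutoff rate can be illustrated by generalizing the reimbursement schedule to allow a nonzero rate above the cutoff (all function names, rates, and expense levels below are our own illustrative choices, not the paper's calibration):

```python
def reimbursement(z, z_star, d_below, d_above):
    """Total reimbursement under a generalized kinked schedule: rate
    d_below up to the cutoff z_star, rate d_above beyond it.
    Setting d_above = 0 reproduces the 100% co-payment above the
    cutoff studied in the paper."""
    return d_below * min(z, z_star) + d_above * max(z - z_star, 0.0)

# Hypothetical annual expense levels, for illustration only.
expenses = [200, 500, 800, 1100, 1500]

# Policy I: high cutoff, large jump in the co-payment rate (0.5 -> 0.0 at 800).
burden_I = sum(reimbursement(z, 800, 0.5, 0.0) for z in expenses)
# Policy II: lower cutoff, smaller jump (0.5 -> 0.25 at 600).
burden_II = sum(reimbursement(z, 600, 0.5, 0.25) for z in expenses)
```

Varying (z_star, d_above) while holding the total burden fixed is exactly the comparison behind Panel B of Table 4; here the two toy burdens differ, showing why the cutoff must move when the above-cutoff rate changes.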
We apply the proposed causal estimator to a medical insurance setting in China where patients are subject to a much higher co-insurance rate once their cumulative annual medical expenses cross a statutory threshold. Based on administrative visit-level outpatient data from a city in China, we show that patients adjust their outpatient behavior in response to the kinked policy, indicating that patients trade off health against financial costs."
},
{
"url": "http://arxiv.org/abs/2404.14568v1",
"title": "UVMap-ID: A Controllable and Personalized UV Map Generative Model",
"abstract": "Recently, diffusion models have made significant strides in synthesizing\nrealistic 2D human images based on provided text prompts. Building upon this,\nresearchers have extended 2D text-to-image diffusion models into the 3D domain\nfor generating human textures (UV Maps). However, some important problems about\nUV Map Generative models are still not solved, i.e., how to generate\npersonalized texture maps for any given face image, and how to define and\nevaluate the quality of these generated texture maps. To solve the above\nproblems, we introduce a novel method, UVMap-ID, which is a controllable and\npersonalized UV Map generative model. Unlike traditional large-scale training\nmethods in 2D, we propose to fine-tune a pre-trained text-to-image diffusion\nmodel which is integrated with a face fusion module for achieving ID-driven\ncustomized generation. To support the finetuning strategy, we introduce a\nsmall-scale attribute-balanced training dataset, including high-quality\ntextures with labeled text and Face ID. Additionally, we introduce some metrics\nto evaluate the multiple aspects of the textures. Finally, both quantitative\nand qualitative analyses demonstrate the effectiveness of our method in\ncontrollable and personalized UV Map generation. Code is publicly available via\nhttps://github.com/twowwj/UVMap-ID.",
"authors": "Weijie Wang, Jichao Zhang, Chang Liu, Xia Li, Xingqian Xu, Humphrey Shi, Nicu Sebe, Bruno Lepri",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "The development of 3D human models has garnered significant attention in recent years, owing to its versatile applications across various domains, including filmmaking, video games, augmented re- ality/virtual reality (AR/VR), and human-robot interaction. Among the myriad tasks essential for crafting digital humans, texture syn- thesis stands out as a pivotal element in achieving the photorealistic quality of 3D avatars. However, creating 3D textures in the tradi- tional computer graphics pipeline is time-consuming and labor- intensive. Thus, it is important to utilize generation techniques to design diverse texture maps automatically. Texture (UV map) generation has been a focus in previous ap- proaches for tasks such as 3D face and human reconstruction. These methods leverage generators from Generative Adversar- ial Networks (GANs) to estimate textures either in an unsuper- vised [9, 45, 52, 59] or supervised [24, 25] manner. Subsequently, the texture estimation model is integrated into the avatar fitting stage. Nonetheless, these methods are limited in generating novel textures and need more support for controllable generation. Large-scale text-to-image diffusion models [36, 38], nowadays, have been proven very effective over cross-model generation tasks, which should mainly attributed to the scalable 2D image-text data pairs along with large-scale parallel computation. Yet we notice that the lack of large-scale 3D texture data makes training high-quality texture generative models quite challenging. Inspired by the pre- trained strategy of DreamBooth, SMPLitex [5] has employed a few texture maps (UV defined by SMPL [29]) to fine-tune a pretrained arXiv:2404.14568v1 [cs.CV] 22 Apr 2024 Weijie and Jichao and Chang, et al. text-to-image diffusion model. It has been observed that this ap- proach enables the synthesis of texture maps while supporting its foundation text-driven task. 
However, the inability of SMPLitex to support personalized texture generation poses a significant limitation on their approach, particularly in applications where user customization is crucial. Personalized texture generation enables the tailoring of textures to specific individual preferences, fostering a comprehensive experience in 3D applications, including avatars, VR, and gaming. Besides personalization, evaluating the quality of generated textures within the UV space remains an unresolved challenge, leaving more space for research. In this paper, we introduce the UVMap-ID method, a UV map generation model that supports ID-driven personalized generation tasks. Specifically, we fine-tune a pretrained text-to-image diffusion model using a small-scale training dataset. In contrast to 2D personalized methods [7, 46, 49, 56] that necessitate large-scale training data, our dataset, which is attribute-balanced (i.e., \"Race and Gender\"), comprises around 750 image-ID pairs: texture maps with annotated text prompts and the corresponding portrait faces. To enable ID-driven personalized generation, we extend Stable Diffusion with an additional face fusion module. Moreover, we introduce corresponding metrics to evaluate the quality of generated textures from multiple aspects, i.e., fidelity, structure preservation, ID preservation, and text-image alignment. Remarkably, our model achieves high-quality and diverse texture synthesis within just several hours of training, while also supporting controllable and personalized synthesis with a user-provided image ID. In summary, our contributions are as follows: \u2022 We are the first to propose a controllable and personalized UV map generative model capable of synthesizing diverse and personalized texture maps. 
\u2022 We propose an efficient fine-tuning strategy for training an ID-driven extension architecture for Stable Diffusion, utilizing only a small-scale training dataset. \u2022 We utilize our method to produce a new dataset, containing around 5k UVMap-ID image pairs, and will make it publicly available. Our small-scale attribute-balanced training dataset, the larger-scale dataset, and the metrics for textures play a bridging role in guiding subsequent work in this field.",
"main_content": "UV-Map Generative Model. This model aims to generate diverse textures based on the generative models, such as Generative Adversarial Networks [10], Diffusion Models [13, 43]. Existing works utilize this technique in the 3D face reconstruction with the 3D morphable model (3DMM) [3] or human reconstruction with the SMPL [29]. For face texture generation, GANFIT [9] first uses 10,000 high-resolution textures to train the GAN generator, then takes this GAN generator as the statistical parametric representation of the facial texture in the fitting progress. To avoid the training using the limited numbers and diversity of texture map, StyleUV [25] integrates the 2D image fitting and rendering stages into the adversarial networks. Additionally, some methods focus on contributing the 3D facial UV-texture datasets, such as Facescape [55], and FFHQ-UV [1]. For human texture generation, most of the works learn to recover the full texture from a single human image. The Re-Identification metric as supervised in this task is proposed [45]. To further improve the quality of texture generation, Zhao. et al [59] introduce a consistency learning to enforce the cross-view consistency of texture prediction during training. Texformer [52] introduces the transformer architecture to exploit global information of the input, effectively facilitating higher-quality texture generation. Different from these methods without using any ground-truth 3D textures, Verica. et al [24] non-rigidly registers the SMPL model to thousands of 3D scans, and encoders the appearances as texture maps. And theses 3D textures are used to train a texture completed model. However, these mentioned methods cannot support diverse and text-guided texture generation. The most related work to ours is SMPLitex [5]. 
Motivated by DreamBooth [37], SMPLitex utilizes a few texture maps to fine-tune a pretrained text-guided diffusion model to enable texture inpainting and text-guided texture generation tasks. Compared to SMPLitex, our method supports both text-guided and ID-driven personalized texture generation. Text-to-3D Avatar Generation. Text-guided 3D content generation has achieved great success with the development of 3D representation methods and generative models. Many methods utilize the frozen image-text joint embedding models from CLIP [33] to optimize the underlying 3D representation, such as NeRF [30], with some working on the generation of general 3D objects [18, 31, 40, 50, 54] or human avatars [14, 16]. The most famous work is Dream Fields [18], which first demonstrated the effectiveness of combining the CLIP model and the NeRF representation for 3D object creation, but 3D objects produced by this approach tend to lack realism and accuracy. DreamFusion [32] introduces the Score Distillation Sampling (SDS) loss, based on probability density distillation, which enables the use of a pretrained 2D diffusion model as a prior for the optimization of a parametric NeRF representation. By using the SDS loss instead of CLIP, DreamFusion generates high-quality, coherent 3D objects that align with the given text prompt. Recently, many similar methods with the SDS loss have emerged to improve text-to-3D results in various aspects, such as enhancing the realism of rendering with detailed geometry [6], solving the multi-view inconsistency problem [27, 42], or using the variational score distillation (VSD) [47] method instead of SDS to improve the fidelity and diversity of 3D content generation. However, high-quality human avatars remain a challenge due to the complexity of the human body\u2019s shape, pose, and appearance. To make the avatar animatable, DreamAvatar [4] and AvatarCraft [19] integrate the SMPL prior into the NeRF or SDF representation with a deformable field. 
To improve the avatar\u2019s quality and avoid a cartoon-like appearance, DreamHuman [23] uses a spherical harmonics lighting model instead of a diffuse reflectance model and additionally optimizes spherical harmonics coefficients; HumanNorm [17] introduces a normal diffusion model to enhance the diffusion model\u2019s understanding of 3D geometry and further improve texture and geometry quality. More recently, HumanGaussian [28] integrates a 3D Gaussian representation instead of NeRF into 3D human avatar generation to reduce training time. Compared with these text-to-3D works, we focus on achieving controllable texture generation and do not address geometry generation. Text-Driven Personalized Diffusion Models. Diffusion models [13, 43] are a class of generative models that iteratively transform noise into samples simulating the true data distribution. Diffusion models generally outperform other traditional methods, such as GANs, as their output quality has been notably improved across diverse domains. Diffusion models are widely used for text-to-image generation [34, 36, 38] and also stand out in supporting more cross-modal tasks [2, 35, 53]. One of the foundational works, Stable Diffusion [36], applies the diffusion process in latent space, reducing training computation while preserving quality. Other methods, such as Imagen [38] and DALL-E 2 [34], which generate samples directly in pixel space, have also proven effective. Finetuning-wise, DreamBooth [37] and LoRA [15] introduce subject-driven training approaches, enabling text control and offering a compelling feature for precise personalization. Textual Inversion [8] and VideoBooth [20] suggest an alternative solution via latent inversion before editing. 
Another class of methods [7, 46, 48, 49, 51, 56\u201358, 60] extends the model with additional networks to extract and incorporate conditional inputs that guide the generation. Representatively, IP-Adapter [56] introduces a decoupled U-Net that injects conditional hidden features into the original diffusion U-Net, achieving accurate control from the reference input. Some concurrent 2D methods, such as Instant-ID [46], Infinite-ID [49], and SSR-Encoder [58], have also attracted much attention. In this work, we share goals similar to IP-Adapter and Instant-ID, focusing on 3D human texture rather than 2D generation. 3 METHODS Given a reference portrait describing the facial appearance (Face ID) of the target individual, our model aims to generate a texture that aligns with the facial appearance of the target person and fits the structure of the UV map defined by SMPL. In this section, we first provide a brief introduction to Denoising Diffusion Probabilistic Models [13] in Section 3.1, laying out the foundational framework and network architecture for our method. Subsequently, detailed explanations of the design specifics are presented in Section 3.2. Then, we explain the pipeline we use to build the dataset in Section 3.3. Finally, we introduce some metrics for UV textures in Section 3.4. 3.1 Preliminary: Denoising Diffusion Probabilistic Models Denoising diffusion probabilistic models operate by simulating a forward process that adds noise to an image or its latent representation over a series of time steps, transforming it into Gaussian noise. Conversely, the reverse process seeks to recover the original image or latent representation by iterative denoising. This bidirectional process is key to diffusion models\u2019 ability to generate high-fidelity images. Our work leverages Stable Diffusion (SD), a pretrained generative model that can generate high-quality images from a text prompt. 
Specifically, given an image x, SD first uses a pretrained autoencoder to encode x into a latent: z = E(x). Then, noise is gradually added to z over a sequence of T steps, transitioning the original data distribution to a Gaussian noise distribution; the forward process is defined by a Markov chain of conditional Gaussian distributions: q(z_t | z_{t\u22121}) = N(z_t; \u221a(1 \u2212 \u03b2_t) z_{t\u22121}, \u03b2_t I), where \u03b2_t is the variance schedule. During training, the denoising U-Net \u03b5_\u03b8 of SD aims to learn to reconstruct the original latent z from the noise, modeled by p_\u03b8(z_{t\u22121} | z_t) = N(z_{t\u22121}; \u03bc_\u03b8(z_t, t), \u03c3\u00b2_\u03b8(z_t, t) I), and the learning objective is defined as follows: L(\u03b8) = E_{z_t, c, \u03b5, t}[ ||\u03b5 \u2212 \u03b5_\u03b8(z_t, c, t)||\u00b2 ], where c represents the text conditional embeddings. 3.2 Fine-Tuning Text-to-Image Models for ID-Driven UV Map Generation Fig. 2 provides the pipeline of our proposed approach. The initial input to the pipeline consists of random noise and a reference portrait. Our text-to-image model is configured based on the design of SD, employing the same framework and trained weights as SD. Motivated by DreamBooth [37], we propose to utilize a fine-tuning strategy with a prior preservation loss (Fig. 2 (Left)) applied to the text-to-image diffusion architecture integrated with a face fusion module (Fig. 2 (Right)). 
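The forward process above admits the standard DDPM closed form z_t = \u221a(\u0101_t) z_0 + \u221a(1 \u2212 \u0101_t) \u03b5 with \u0101_t the cumulative product of (1 \u2212 \u03b2_s), which can be sketched in a few lines (a toy numpy sketch with a generic linear \u03b2 schedule; this is standard DDPM machinery, not the authors' training code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule beta_t and cumulative products alpha_bar_t,
# as in standard DDPM (illustrative values, not the paper's settings).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def add_noise(z0, t, eps):
    """Closed-form forward process: z_t = sqrt(abar_t) z0 + sqrt(1 - abar_t) eps."""
    return np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps

z0 = rng.standard_normal(4)    # a toy "latent" z
eps = rng.standard_normal(4)   # the noise the U-Net is trained to predict
z_t = add_noise(z0, T - 1, eps)

# The objective ||eps - eps_theta(z_t, c, t)||^2 compares eps against the
# network prediction; with a perfect predictor the loss would be zero.
loss_perfect = np.mean((eps - eps) ** 2)
```

At the final step alpha_bar_T is nearly zero, so z_t is almost pure noise, which is exactly why sampling can start from a Gaussian draw.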
3.2.1 Face Fusion Module. To enable Stable Diffusion to accept additional image information (i.e., the portraits), previous methods mainly leverage the CLIP image encoder, either directly substituting it for the CLIP text encoder or using a decoupled cross-attention mechanism that separates the cross-attention layers for text features and image features [34, 56]. Nevertheless, the CLIP image encoder is constrained to operate on images of lower resolution, which particularly impacts its efficacy in encoding face images, as it fails to encapsulate comprehensive details. Moreover, CLIP\u2019s architecture, fundamentally designed to align semantic features between text and images, mainly focuses on high-level feature correspondence. This orientation towards semantic feature matching inadvertently dilutes finer, detailed features during the encoding process, posing a challenge for applications requiring precise detail retention. Hence, we propose to use the face embedding extracted by a face recognition model, together with linear projection layers, to provide SD with human face information. Also, to preserve the original model\u2019s ability to process text information while integrating image information, we adopt the decoupled cross-attention mechanism [56], ensuring a seamless blend of both modalities. 
Given the query feature Z, the image feature c_i, and the text feature c_t, the output Z' of the decoupled cross-attention layers is: Z' = softmax(Q K^T / sqrt(d_k)) V + softmax(Q (K')^T / sqrt(d_k)) V', where Q = Z W_q, K = c_t W_k, V = c_t W_v, K' = c_i W'_k, V' = c_i W'_v, and W_q, W_k, W_v, W'_k, and W'_v are learnable parameters of the projection layers. Similar fusion modules have been utilized in some concurrent 2D methods [46, 49]. 3.2.2 Prior Preservation Loss. We observed that when using \u201cUV texture map\u201d as the text prompt, SD often fails to generate any correct UV maps. This is likely because SD is trained on data scraped from the internet, where real UV texture maps are rarely found in the training resources. Weijie and Jichao and Chang, et al. Figure 2: The left side of the figure shows the overview of our proposed pipeline. Given a reference image as face ID, we utilize a pre-trained text-to-image diffusion model, whose input is a combination of a noised UV map and a text prompt containing a unique identifier and characteristics of the portrait, \"A [S] Texturemap of [P],\" where [S] is a unique identifier and [P] represents the race and gender. To maintain the quality of images generated by the pre-trained model and effectively process textual features, we adopt a prior preservation loss. The right side of the figure shows the detailed architecture of our model, where facial information is mapped to the same dimensions as text embeddings through a facial recognition model and face projection layers. Subsequently, we merge facial and textual information via decoupled cross-attention, which is then integrated into the pre-trained text-to-image model. Also, our goal is to generate images with a small training set (about 750 images in our dataset), each featuring different facial characteristics of individuals, and generating accurate faces has always been a weakness of SD. Additionally, our input incorporates extra face image information, and during fine-tuning we would like to ensure our model does not lose SD\u2019s original capability to correctly process textual information. To this end, we introduce the prior preservation loss proposed in DreamBooth [37], to ensure the model retains its generalization ability and does not overfit the few-shot examples provided during the personalization process. However, our objectives differ fundamentally from DreamBooth in two ways. Firstly, DreamBooth targets subject-driven generation, whereas our model aims at generating images of a specific format, the UV texture maps. Consequently, DreamBooth requires re-fine-tuning the entire SD for each subject, while our model, after training, can generate corresponding UV maps for any input face ID. 
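The decoupled cross-attention described above can be sketched as follows (a minimal NumPy sketch; the dimensions and random projection weights are illustrative assumptions, not the trained parameters):

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d, d_k = 8, 8
rng = np.random.default_rng(0)
# Learnable projections: W_q, W_k, W_v for the text branch, W_k2, W_v2 for the face branch.
W_q, W_k, W_v = (rng.normal(size=(d, d_k)) for _ in range(3))
W_k2, W_v2 = (rng.normal(size=(d, d_k)) for _ in range(2))

def decoupled_cross_attention(Z, c_t, c_i):
    # Z' = softmax(Q K^T / sqrt(d_k)) V + softmax(Q K'^T / sqrt(d_k)) V'
    Q = Z @ W_q
    K, V = c_t @ W_k, c_t @ W_v            # text branch
    K2, V2 = c_i @ W_k2, c_i @ W_v2        # face (image) branch
    text_out = softmax(Q @ K.T / np.sqrt(d_k)) @ V
    face_out = softmax(Q @ K2.T / np.sqrt(d_k)) @ V2
    return text_out + face_out

Z = rng.normal(size=(4, d))     # query features
c_t = rng.normal(size=(6, d))   # text embeddings
c_i = rng.normal(size=(1, d))   # projected face embedding
out = decoupled_cross_attention(Z, c_t, c_i)
```

The two attention branches share the same query but keep separate key/value projections, so the face branch can be added without disturbing the pretrained text pathway.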
This distinction arises because, in DreamBooth, one unique identifier represents a single unique subject, whereas our unique identifier [S] denotes one unique kind of image structure (the UV map defined by SMPL). Secondly, we added extra facial information [P] to our text prompts during training to further preserve the original capabilities of the text encoder, enabling it to effectively parse attributes such as race and gender. For detailed experiments, please refer to Section 4.4. Formally, the training loss of our model is defined as: L(theta) = E_{z_t, c, eps, t}[ || eps - eps_theta(z_t, c, t) ||^2 ] + E_{z_t, c', eps, t}[ || eps_pr - eps_theta(z_t, c', t) ||^2 ], where c' is a fixed conditional text prompt \u201ca texturemap\u201d and eps_pr is the data generated by the frozen diffusion model with c'. 3.3 Dataset Training Dataset In this part, we describe the process of constructing our dataset, which is centered around the generation of high-quality and diverse UV texture maps for digital human models. Our approach can be segmented into three stages: 1) Celebrity Selection: In the initial phase of our dataset creation, we aimed for a balanced and inclusive representation by employing OpenAI\u2019s ChatGPT to generate a list of 150 celebrities. Our selection was structured to include equal representation across three ethnic groups: African American, Asian, and White, with 50 celebrities from each group. To further enhance the diversity and applicability of our dataset, we ensured gender balance within each ethnic category, selecting 25 male and 25 female celebrities. 
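The two-term training loss of Section 3.2.2 can be sketched as follows (a minimal NumPy sketch; `toy_eps_model` and `frozen_eps_model` are illustrative stand-ins for the fine-tuned and frozen denoisers, not the paper's networks):

```python
import numpy as np

def toy_eps_model(z_t, cond, t):
    # Stand-in for the fine-tuned denoiser eps_theta.
    return 0.9 * z_t

def frozen_eps_model(z_t, cond, t):
    # Stand-in for the frozen pre-trained denoiser that produces the prior target eps_pr.
    return 0.8 * z_t

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def total_loss(z_t, c, z_t_prior, c_prior, eps, t):
    # Reconstruction term: || eps - eps_theta(z_t, c, t) ||^2
    rec = mse(eps, toy_eps_model(z_t, c, t))
    # Prior preservation term: || eps_pr - eps_theta(z_t_prior, c_prior, t) ||^2
    eps_pr = frozen_eps_model(z_t_prior, c_prior, t)
    prior = mse(eps_pr, toy_eps_model(z_t_prior, c_prior, t))
    return rec + prior

rng = np.random.default_rng(0)
z_t = rng.normal(size=(4, 64))
z_t_prior = rng.normal(size=(4, 64))
eps = rng.normal(size=(4, 64))
loss = total_loss(z_t, 'a [S] texturemap of [P]', z_t_prior, 'a texturemap', eps, t=100)
```

The second term anchors the model to the frozen prior under the generic prompt, which is what discourages overfitting to the roughly 750 training textures.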
We use celebrities because SMPLitex accepts only text input, and celebrity portraits are readily available. This approach allows us to link names, portraits, and corresponding UV texture maps effectively. 2) UV Texture Map Generation: We employed SMPLitex to generate UV texture maps for each of the selected celebrities. This process resulted in 50 UV texture maps per celebrity, totaling 7,500 initial texture maps. 3) Manual Selection: To ensure the highest quality and relevance for our dataset, we manually reviewed the generated UV texture maps and selected 5 maps per celebrity that best met our predefined criteria. These criteria included clarity, detail accuracy, and representation quality of ethnic features. This manual selection process narrowed our dataset to 750 UV texture maps, with 5 UV texture maps per ID. A New Dataset: CelebA-HQ-UV We utilize our method with personalized generation to produce a new dataset, which contains 5k UVMap-ID pairs. Specifically, we select 5000 high-resolution face images from CelebA-HQ [21] as reference image IDs for our method. UVMap-ID: A Controllable and Personalized UV Map Generative Model Figure 3: Personalized texture generation results using face IDs from the CelebA-HQ dataset. For every ID, our method produces 10 textures and selects 2 based on the evaluation of multiple aspects, i.e., the quality of textures, the preservation of UV structure, and the preservation of face ID. Fig. 3 shows some results using three face IDs from CelebA-HQ. We refer to this dataset as CelebA-HQ-UV and will make it publicly available. Note that we define a list of text prompts for these generations, which will be introduced in the supplementary material. 
3.4 Metrics As previously mentioned, assessing the quality of generated textures within the UV space defined by SMPL poses a significant challenge, especially within the scope of our personalized generation task. In this paper, we introduce four metrics to evaluate the quality of the generated textures from multiple aspects: the Inception Score [39] to evaluate fidelity and diversity, Semantic Structure Preservation (SSP) to evaluate structure preservation of the UV space defined by SMPL [29], Deep Face Recognition (DFR) to evaluate face ID preservation, and the CLIP-Text (CLIPT) score [20, 48] to evaluate text-image alignment. Inception Score (IS) on UV textures and rendered results The Inception Score (IS) and Fr\u00e9chet Inception Distance (FID) [12] are widely utilized metrics for evaluating the diversity and quality of 2D images generated by generative models. FID is a well-established measure that compares the inception similarity score between distributions of generated and real images. One key distinction between IS and FID is that IS is computed solely from generated samples, eliminating the need for real samples in its calculation. Due to the lack of a real-sample distribution, we employ IS rather than FID to directly evaluate the quality of the 5000 generated textures. We refer to IS on textures in UV space as IS (UV). Additionally, we render these textures into 2D space by applying them to the SMPL mesh. Subsequently, we utilize IS to evaluate the quality of the 5000 rendered human images in 2D space. We refer to this type of IS as IS (R). Semantic Structure Preservation (SSP) To assess the preservation of UV structures in generated textures, we introduce a novel metric termed Semantic Structure Preservation (SSP). Figure 4: UV structures, textures from SMPLitex, extracted semantic segmentation, and semantic ground truth, shown from left to right. 
Notably, we have observed instances where the generated textures from SMPLitex [5] may not faithfully retain these underlying structures, as illustrated in Fig. 4. The SSP metric is designed to quantify this preservation. We leverage off-the-shelf human parsing techniques [26] to extract semantic segmentation from the generated images and then compare it with the ground-truth segmentation (Fig. 4 (right)). We conduct this comparison across a dataset comprising 1000 images and compute the mean difference as the SSP score. Deep Face Recognition (DFR) To assess the preservation of identity (ID) within textures, a crucial aspect of personalized image generation tasks, we propose employing Deep Face Recognition (DFR) methods to quantify the similarity between generated textures and reference facial images. Specifically, we leverage the off-the-shelf tool [41] to perform face recognition between the textures and the image ID. We use 10 face IDs with 100 samples per ID and report the number of successful recognitions. We refer to this metric as the DFR score, which is reported as a measure of the preservation of identity within the generated textures. CLIP-Text (CLIPT) To measure the alignment of the generated textures with the given text prompts, we use the CLIP-Text (CLIPT) score, following 2D methods [20, 48]. This metric is calculated as the cosine similarity between the CLIP text embeddings of the given text prompts and the CLIP image embeddings of the generated textures. We compute the CLIPT score using 1000 text-prompt pairs. 4 EXPERIMENTS 4.1 Training Details Our experiments are based on the Realistic_Vision_V4 model, which is fine-tuned from Stable Diffusion v1.5 [36] and produces more photorealistic images. Additionally, we utilize the buffalo_l pre-trained face recognition model from SCRFD [11] and pre-trained projection layers from [56]. The experimental code is developed using the HuggingFace Diffusers library [44]. 
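The CLIPT score described above reduces to an average cosine similarity between embedding vectors; a minimal sketch (with random stand-in embeddings in place of real CLIP text/image encoders, which are assumptions here):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clipt_score(text_embs, image_embs):
    # Mean cosine similarity over text-prompt / generated-texture pairs.
    sims = [cosine_similarity(t, i) for t, i in zip(text_embs, image_embs)]
    return sum(sims) / len(sims)

rng = np.random.default_rng(0)
# Stand-ins for CLIP text embeddings of 1000 prompts and CLIP image
# embeddings of the corresponding generated textures (512-d, as in CLIP ViT-B).
text_embs = rng.normal(size=(1000, 512))
image_embs = rng.normal(size=(1000, 512))
score = clipt_score(text_embs, image_embs)
```

In a real evaluation the two arrays would come from a CLIP text encoder and image encoder respectively; only the averaging of pairwise cosine similarities is shown here.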
During training, we fine-tune the entire U-Net, the text encoder, and the face projection layers, and keep the VAE encoder and decoder of Stable Diffusion frozen. The UVMap-ID training is conducted on a single machine equipped with an A40 GPU for 1500 steps, with a batch size of 2. We employ the AdamW optimizer [22] with a fixed learning rate of 1e-6 and a weight decay of 0.01. Our dataset comprises images with a resolution of 512 \u00d7 512; hence we generate images at this resolution during training. In the inference phase, we use a 50-step DDIM sampler [43] and set the classifier-free guidance scale to 7.5. Figure 5: Our personalized generation results. The first column shows reference faces, which were obtained from the web and do not exist in our training set. Figure 6: Comparison with SMPLitex [5]. SMPLitex is not an image-ID-driven method. Thus, we provided these celebrities\u2019 names in the test prompts for SMPLitex, but not for ours. Taking \"Betty Sun\" as an example (upper-left corner), the test prompt of SMPLitex is \"a texturemap of Betty Sun wearing...\", and our test prompt is \"a texturemap of Asian woman wearing...\". Note that the image IDs do not exist in our training data. 4.2 Baselines We take the texture generation model SMPLitex [5] as the baseline. 
All results from SMPLitex are produced with their released code and pretrained model. SMPLitex does not support image-driven personalized generation. Thus, we provide the image ID\u2019s name in the text prompts for SMPLitex, but not for our method. Table 1: Quantitative results. Metrics: Inception Score on rendered images (IS (R), higher is better), Inception Score on UV maps (IS (UV), higher is better), Semantic Structure Preservation (SSP, lower is better), CLIP-Text (CLIPT, higher is better), and Deep Face Recognition (DFR, higher is better). SMPLitex [5]: IS (R) 1.46 \u00b1 0.020, IS (UV) 1.95 \u00b1 0.049, SSP 10.45, CLIPT 29.40, DFR 62. UVMap-ID: IS (R) 1.78 \u00b1 0.020, IS (UV) 1.89 \u00b1 0.027, SSP 8.46, CLIPT 29.12, DFR 792. Table 2: Ablation study for the \"Race and Gender\" label (DFR, higher is better): UVMap-ID w/o \"Race and Gender\": 436; UVMap-ID w/ \"Race and Gender\": 792. Table 3: Ablation studies of training data, where UVMap-ID (N) denotes the number N of textures per ID in the training stage. UVMap-ID (1): IS (R) 1.88 \u00b1 0.028, IS (UV) 2.03 \u00b1 0.039, SSP 10.59, CLIPT 29.09, DFR 734; UVMap-ID (2): IS (R) 1.78 \u00b1 0.020, IS (UV) 1.89 \u00b1 0.027, SSP 8.46, CLIPT 29.12, DFR 792; UVMap-ID (5): IS (R) 1.55 \u00b1 0.017, IS (UV) 1.55 \u00b1 0.084, SSP 8.74, CLIPT 29.27, DFR 798. 4.3 Comparisons Fig. 5 shows diverse personalized texture generation results from our method. Our reference face IDs (first-column images) are collected from a diverse range of sources on the web and thus encompass a wide variety of characteristics, including different ethnicities, genders, occupations, levels of fame, and even facial poses. As shown in the 2nd-6th columns of Fig. 5, our generated UV textures effectively preserve the identity features of these reference face IDs, demonstrating the effectiveness and robustness of our method for personalized generation. Moreover, our method also achieves accurate text-driven controllable generation. 
We conducted visual comparisons with SMPLitex [5], as depicted in Fig. 6. Notably, SMPLitex is not an image-driven method. Therefore, while we utilized some well-known celebrities as image IDs and provided their names in the text prompts for SMPLitex, we deliberately omitted this information for our method to ensure a fairer comparison. Remarkably, our results exhibit a higher degree of similarity in face ID preservation compared to SMPLitex, underscoring the superiority of our method in maintaining identity features during personalized texture generation. Moreover, our approach also demonstrates superior structural preservation compared to SMPLitex, as evidenced by the \"Jay Chou\" row (top right). Quantitative results using the four metrics are shown in Table 1. We observe that SMPLitex achieves better IS (UV) scores than our method. We attribute this to the fact that our approach is image-driven, which means that the provided reference ID constrains the diversity of generated images, a crucial aspect of IS. In contrast, our method achieves a higher IS (R) than SMPLitex. As mentioned, SMPLitex often struggles to preserve UV structures effectively, resulting in unrealistic renderings. This difference in structure preservation is validated by our superior SSP score. Moreover, our DFR score significantly outperforms the baseline, validating that our method achieves better similarity to the target ID in personalized texture generation tasks. Additionally, the high success rate of 837 out of 1000 demonstrates the robustness of our method to reference images. Furthermore, we observe that our CLIPT score is comparable to the baseline, indicating that the \"image prompt\" generated by our image encoder does not significantly affect the control capability of the text prompt. Figure 7: Qualitative ablation between the w/o and w/ \"Race and Gender\" labels. 
The first-row results show that our full method preserves the \"Gender\" attribute, and the second-row results show that our full method preserves the \"Race\" attribute. 4.4 Ablation Studies \"Race and Gender\" in prompts As shown in Fig. 7, we analyze the impact of including race and gender labels in prompts during training, assessing how this additional information affects generative model performance. As indicated in Table 2, incorporating race and gender labels significantly enhances the model\u2019s DFR score compared to the version without these labels (UVMap-ID w/o \"Race and Gender\"). This indicates that the facial recognition model we use focuses more on the structural information of the human face, while the label supplements missing information such as skin color. Training Data In this part, we explore the impact of varying the number of UV maps used per image ID during training. Our model, UVMap-ID, is evaluated using a consistent training strategy, except that each image ID in the training dataset is processed using 1, 2, or 5 UV maps. These setups are denoted as UVMap-ID (1), UVMap-ID (2), and UVMap-ID (5), respectively. Table 3 reports the performance metrics across these configurations. Based on the results shown in Table 3, we have chosen UVMap-ID (2) as our base model. This configuration utilizes two UV maps, which provide a dataset diverse enough to capture the critical variations in facial features without overloading the pre-trained model. UVMap-ID (2) strikes a balance, delivering remarkable realism in image generation while effectively maintaining the identity of the reference images. 5 CONCLUSIONS In this paper, we introduce UVMap-ID, the first method for ID-driven personalized texture generation. UVMap-ID adopts Stable Diffusion as its backbone and extends it with an additional face fusion module. Moreover, our method is highly efficient, requiring only a few hours of fine-tuning on a small-scale dataset. 
Additionally, we explore the evaluation of quality for UV textures and introduce corresponding metrics. Finally, given a user-provided face image, our method can automatically create high-quality UV textures that preserve the face ID while enabling text-driven control, a readily applicable capability for 3D avatar creation in computer graphics. Using our method, we create a new dataset, CelebA-HQ-UV, comprising texture and face ID pairs. This dataset will be shared with the community to facilitate further research. We plan to explore interactive editing of textures in the future."
},
{
"url": "http://arxiv.org/abs/2404.05212v2",
"title": "DiffCJK: Conditional Diffusion Model for High-Quality and Wide-coverage CJK Character Generation",
"abstract": "Chinese, Japanese, and Korean (CJK), with a vast number of native speakers,\nhave profound influence on society and culture. The typesetting of CJK\nlanguages carries a wide range of requirements due to the complexity of their\nscripts and unique literary traditions. A critical aspect of this typesetting\nprocess is that CJK fonts need to provide a set of consistent-looking glyphs\nfor approximately one hundred thousand characters. However, creating such a\nfont is inherently labor-intensive and expensive, which significantly hampers\nthe development of new CJK fonts for typesetting, historical, aesthetic, or\nartistic purposes. To bridge this gap, we are motivated by recent advancements\nin diffusion-based generative models and propose a novel diffusion method for\ngenerating glyphs in a targeted style from a single conditioned, standard glyph\nform. Our experiments show that our method is capable of generating fonts of\nboth printed and hand-written styles, the latter of which presents a greater\nchallenge. Moreover, our approach shows remarkable zero-shot generalization\ncapabilities for non-CJK but Chinese-inspired scripts. We also show our method\nfacilitates smooth style interpolation and generates bitmap images suitable for\nvectorization, which is crucial in the font creation process. In summary, our\nproposed method opens the door to high-quality, generative model-assisted font\ncreation for CJK characters, for both typesetting and artistic endeavors.",
"authors": "Yingtao Tian",
"published": "2024-04-08",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "The importance of East Asian languages that use Chinese characters, including the Chinese, Japanese and Korean (CJK) languages, is profound. For example, they are used by more than 1.56 billion speakers (Ethnologue Authors 2024), who account for more than 25% of global GDP (IMF 2023). Historically, a wide range of scripts have also been heavily influenced by Chinese characters, adding further historical and cultural value. In the modern era, with the proliferation of printing and information technology, writing scripts need to address the challenges of typesetting: for example, the requirements on the visual appearance of characters mean that a font must provide well-designed and consistent-looking glyphs across the glyph repertoire. However, the unique aspect of CJK typesetting is the sheer number of characters, which makes this requirement demand far more effort: CJK contains nearly one hundred thousand unique characters in the latest Unicode 15.1 (Unicode 2023). Figure 1: Our method generates highly stylized and legitimate CJK glyphs. For each character, our method refers to a standard font\u2019s bitmap (visualized in gray) and generates a diverse array of glyphs in various printed and calligraphy forms (more details in Figure 8). Our method is effective for both common (left) and extremely rare (right) CJK characters. The zoom-ins showcase examples of printed and calligraphy forms, highlighting the method\u2019s high quality and its utility for font designers and artists alike. Furthermore, there is also an emphasis on typeface categories with different styles (see footnote 1), as they carry practical, cultural, and artistic meaning (Huang 2020; Kim 2020a; Kim 2020b). Consequently, the font industry faces a dilemma: either making one style instance with as wide a coverage as possible, or making fonts with wider [Footnote 1: Here \u201cstyle\u201d is used in a general sense rather than its specific meaning in font development, as in \u201cstyle instance\u201d.] 
style support but with limited coverage. Despite these compromises, font creation remains a laborious and expensive task. Therefore, it remains a critical challenge to efficiently produce fonts that encompass a diverse range of multi-style yet legitimate CJK glyphs for the entire set of nearly one hundred thousand characters. To address these challenges, we propose a diffusion-based method (Figure 1) capable of generating characters in diverse styles conditioned on a single standard reference glyph. The reference glyph only needs to outline the character\u2019s shape, obviating the need for intricate design work. As a diffusion model, it consists of two processes: a diffusion process that gradually adds noise to destroy the data, and a reverse process that generates new samples from scratch by progressively removing noise. A deep neural network is trained to estimate the noise, conditioned on the reference character. Unlike common text-to-image models, which must memorize the shapes of characters and thus suffer from limited data for rare characters, our method works uniformly well for all CJK glyphs, from common to rare ones, as long as a reference is available. Furthermore, by injecting style information into the temporal embedding for diffusion, our method can not only generate characters in various styles but also interpolate between these styles. Experiments show that our method can generate legitimate CJK characters across a broad spectrum of styles. This includes typefaces for physical and digital typesetting, as well as artistic calligraphy styles. The latter poses a greater challenge due to more limited data and significant stylistic deviations from reference glyphs. Moreover, our method achieves zero-shot generalization to CJK-inspired scripts not encountered in training (such as the under-resourced Chu Nom and Tangut scripts) and can meaningfully interpolate between styles. 
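One simple way the style injection into the temporal embedding could look is sketched below (a minimal NumPy sketch; the sinusoidal timestep embedding, the learned style table, and the linear interpolation scheme are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(0)
n_styles, emb_dim = 4, 32
# Toy stand-in for learned per-style embedding vectors.
style_table = rng.normal(size=(n_styles, emb_dim))

def time_embedding(t, dim=32):
    # Standard sinusoidal timestep embedding commonly used by diffusion models.
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])

def conditioning(t, style_a, style_b, alpha=0.0):
    # Inject style into the temporal embedding; alpha in [0, 1] blends two
    # styles linearly, enabling smooth style interpolation at sampling time.
    style = (1.0 - alpha) * style_table[style_a] + alpha * style_table[style_b]
    return time_embedding(t) + style

emb_mid = conditioning(t=500, style_a=0, style_b=1, alpha=0.5)
```

With alpha=0 or alpha=1 the conditioning reduces to a single pure style, so interpolation comes for free from the shared embedding space.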
These capabilities enable font designers to produce fonts on a large scale, even with limited resources. Additionally, our analysis highlights the efficiency of our method and its potential for vectorization, indicating its practicality for adoption and adaptation in font design workflows.",
"main_content": "Chinese characters constitute a logographic writing system, in use from circa the 13th century BCE to the present, utilized across multiple languages within the Sinosphere, or the East Asian cultural sphere (Marginson 2011). Initially developed for Chinese, it has also been adopted for Japanese, Korean, and Vietnamese, and has significantly influenced lesser-known historical scripts like the Tangut and Khitan small scripts. As with any writing system, both physical (Needham 1974) and digital (Liu 2022) typesetting has a profound impact on East Asian culture. In such typesetting practices, alongside how characters are arranged, an equally important aspect is what characters look like (Chiba and others 2020; Tung and others 2023; Lim and others 2020), which necessitates glyphs that are legitimate, consistent, and available for the vast number of characters. This requirement is deeply rooted in pre-typesetting traditions, particularly the established script styles for handwriting. Figure 2: Example of hand-written Chinese script styles for \u201c\u99ac\u201d (horse) (Wikipedia contributors 2024): oracle bone script (\u7532\u9aa8\u6587), seal script (\u7bc6\u66f8), clerical script (\u96b8\u66f8), semi-cursive script (\u884c\u66f8), cursive script (\u8349\u66f8), and regular script (\u6977\u66f8). Figure 3: Example of CJK typefaces (above) and weights (below). Typefaces: (1) Ming a.k.a. Song, (2) Gothic typefaces, (3) Regular script, (4) Semi-cursive and (5) cursive script. Widths: Noto Serif CJK of different widths: ExtraLight, Light, Normal, SemiBold and Black (i.e., Bold). As shown in Figure 2, there are five major script styles \u2014 seal, clerical, semi-cursive, cursive and regular \u2014 traditionally used to write Chinese characters, in addition to the oracle bone script, the oldest known form of Chinese characters. These styles naturally transitioned into typesetting practice (Lisa Huang 2020), as will be elaborated below. In the context of typesetting, the term CJK characters is used to collectively describe the glyphs for the Chinese, Japanese, and Korean languages, all of which incorporate Chinese characters into their writing systems. 
Technically, CJK also encompasses derivatives of Chinese characters, such as the Kana scripts (Hiragana and Katakana) used in Japanese, although these are beyond the scope of this paper. The importance of glyphs in typesetting is multifaceted, with style (typeface) and weight (Stocks 2020a; Stocks 2020b) being two crucial aspects. In Latin typesetting, well-recognized styles include serif (\u201cbrown fox\u201d) and sans-serif (\u201cbrown fox\u201d), while weight variations are exemplified by bold (\u201cbrown fox\u201d) and italics (\u201cbrown fox\u201d). CJK typesetting is no exception, and even involves more delicacy in this matter: it requires a consistent appearance across a vast number of characters combined with unique cultural practices. Also, CJK typesetting places a significant emphasis on typeface categories, which carry practical, cultural, and artistic significance (Huang 2020; Kim 2020a; Kim 2020b). Notably, styles in CJK typesetting, while similar, do not entirely align with Latin typesetting practices. They are better illustrated by several key styles unique to CJK typography, shown in Figure 3: (1) Ming/Song, a printed-form style that has evolved from the regular script over centuries, is the most widely used and is akin to the serif style in Latin typography. (2) Gothic typefaces, printed-form styles that convey a sense of modernity and are akin to sans-serif styles in Latin typography. (3) Regular script, a printed-form style that mimics handwritten forms, is primarily used for educational purposes and also serves a role similar to italics in Latin typography, although italics per se do not exist in CJK typesetting. (4) Calligraphy forms, such as semi-cursive and cursive script, which are rarely used in standard typesetting but are favored for artistic expression. Regarding font width, there exists a broad spectrum from light to black (which is akin to bold in Latin typography). 
However, adjusting font weight in CJK characters involves more than merely altering stroke width: the exact boldness of each stroke must be carefully tuned to ensure visual consistency across characters with varying stroke counts, thus requiring more effort. For the sake of completeness, we also want to emphasize the regional differences of CJK characters (Whistler 2023). These regional variations in typesetting are handled by both font design and encoding. Another point of distinction is the use of simplified versus traditional characters, which are represented by different code points. However, these aspects, while important, fall outside the scope of this paper and we leave them for further study.

Computation and Creative Approach to CJK Glyphs

The advances of generative models have led to image generation models capable of producing high-quality images comparable to professional photography and artwork. Noteworthy examples of such models that are publicly available and state-of-the-art include Stable Diffusion XL (Podell et al. 2023) and MidJourney v6 (MidJourney Authors 2024). The former is open-source while the latter is offered as a commercial product. Given their powerful capabilities, it is natural to apply these tools to producing CJK glyphs. However, generating valid CJK characters turns out to remain a challenging task even for these advanced models. As illustrated in Figure 4, these state-of-the-art image generation models fail to create legitimate CJK characters despite their ability to produce valid scenes and fine-grained details. For text-to-image models, semi-supervised training is important for generating high-fidelity images (Zhou et al. 2023). However, we argue that the fundamental issue behind the inability of state-of-the-art models to produce legitimate CJK characters lies primarily in the very distribution of the characters.

Figure 4: Example of generating CJK characters using state-of-the-art text-to-image models. From left to right: Stable Diffusion XL (Podell et al. 2023), MidJourney v6 (MidJourney Authors 2024); zoomed-in views are on the lower row. While these models are powerful and expressive, the generated characters are illegitimate to native speakers.

Figure 5: Distribution of CJK characters in Classical Chinese, Modern Chinese and Modern Japanese. Besides frequent characters, most of the characters are underrepresented in the text data. This plot uses data aggregated from corpus statistics (Da 2010; NINJAL 2015).

Specifically, the challenges are twofold, involving both the number and the complexity of CJK characters. Firstly, regarding the number of characters, the latest version of Unicode, Unicode 15.1, encodes a total of 97,680 characters. Secondly, as Figure 5 shows, the frequency distribution of characters, which follows a power law, suggests that only a small subset of characters appears frequently enough to enable effective learning by any model. As a result, a vast number of CJK characters remains underrepresented, introducing difficulty for any model dealing with them. Another line of work in generating CJK characters involves transferring from a standardized glyph. Specifically, it has been proposed to generate the glyph in the desired style based on a standard glyph, usually using a reference font that covers CJK code points comprehensively.

Figure 6: The diffusion model consists of two processes: (1) the diffusion process that gradually adds noise to destroy the data, and (2) the reverse process that gradually removes the noise through a Markov process (sampling from $p_\\theta(x_{t-1}|x_t)$ at step $t-1$) for generating new samples. The estimation of the noise is parameterized by a deep learning model.

This approach becomes promising since only a small number
of fonts covers a complete set of CJK code points, due to the significant effort required to produce a consistent appearance for all CJK characters. Examples of such reference fonts that are free to use include Noto CJK (Noto Authors 2020) and Jigmo (Kamichi 2023). The availability of these comprehensive fonts enables the training of a conditional generative model on a limited set of paired glyphs, which can be applied to the entire set of CJK in Unicode. Many of these models are based on Generative Adversarial Networks (GAN) (Zhang, Zhang, and Cai 2018; Tian 2018; Gao et al. 2019; Yun-Chen Lo 2019; Zhu et al. 2020; Park et al. 2021; Liu and Lian 2023). Some of them leverage additional information about components and composition. While these efforts have seen promising results, they still face quality issues observable upon closer inspection, likely due to the inherent challenges of training GANs. Perhaps the work most closely related to ours, in that it uses a Denoising Diffusion Probabilistic Model (DDPM) (Ho, Jain, and Abbeel 2020), is (Gui et al. 2023), which introduces the Glyph Conditional DDPM (GC-DDPM) that is built on a UNet architecture (Ronneberger, Fischer, and Brox 2015) and generates glyphs from a reference glyph. The key distinction is that our method focuses on both typesetting-style and brush-based calligraphy generation for font designers and artists, aiming at publisher-level quality suitable for professional font design. In contrast, GC-DDPM aims to synthesize data to enhance the performance of downstream classification on stroke-based handwriting recognition tasks through data augmentation. Nonetheless, the design decisions made in GC-DDPM have provided valuable insights for our approach.

Diffusion Model

Diffusion models (Sohl-Dickstein et al. 2015; Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021; Song et al.
2020) represent a broad spectrum of deep generative models that have established the state-of-the-art in a wide range of challenging tasks. A very limited set of examples from the large body of diffusion works includes computer vision (Nichol and Dhariwal 2021; Song et al. 2020), image synthesis (Ruiz et al. 2023; MidJourney Authors 2024; Podell et al. 2023), natural language processing (Lovelace et al. 2024; Wu et al. 2024), and signal processing (Engel et al. 2020; Goel et al. 2022). Given the expansive growth and wide applications, this emerging field is best navigated through comprehensive surveys (Yang et al. 2023; Croitoru et al. 2023). The specifics of how we leverage diffusion models are elaborated upon in the subsequent section.

Figure 7: The UNet used in our proposed method. It is built from residual, attention, downsample and upsample layers, conditioned through temporal, data and reference embeddings.

Method

Our proposed method employs a diffusion model (Sohl-Dickstein et al. 2015), and we adopt the notation used in DDPM (Ho, Jain, and Abbeel 2020) here. As illustrated in Figure 6, the diffusion model consists of two processes. Denoting the data as $x_0 \\sim q(x_0)$, the diffusion process, also known as the forward process, gradually adds noise to destroy the data $x_0$, producing a sequence of progressively noisier samples $x_1, x_2, \\dots$ until the entirely noisy sample $x_T$. Given a variance schedule $\\beta_t$, the forward process is a Markov process $x_t \\sim q(x_t|x_{t-1}) = \\mathcal{N}(x_t; \\sqrt{1-\\beta_t}\\,x_{t-1}, \\beta_t I)$. On the other hand, the reverse process gradually removes the noise (\u201cdenoising\u201d) through a Markov process $x_{t-1} \\sim p_\\theta(x_{t-1}|x_t) = \\mathcal{N}(x_{t-1}; \\mu_\\theta(x_t, t), \\Sigma_\\theta(x_t, t))$, which approximates the posterior of the forward process. As the reverse process recovers the distribution of the data $x_0$, it can be used for generating samples.
Note that if $\\beta_t$ is small enough, $q(x_{t-1}|x_t)$ would be approximately Gaussian. Moreover, the marginal has the closed form $q(x_t|x_0) = \\mathcal{N}(x_t; \\sqrt{\\bar\\alpha_t}\\,x_0, (1-\\bar\\alpha_t) I)$, where $\\bar\\alpha_t = \\prod_{s=1}^{t} \\alpha_s$ and $\\alpha_t = 1-\\beta_t$. This means we can express the posterior in the following way: $q(x_{t-1}|x_t, x_0) = \\mathcal{N}(x_{t-1}; \\tilde\\mu_t(x_t, x_0), \\tilde\\beta_t I)$, where $\\tilde\\mu_t(x_t, x_0) = \\frac{\\sqrt{\\bar\\alpha_{t-1}}\\,\\beta_t}{1-\\bar\\alpha_t}\\,x_0 + \\frac{\\sqrt{\\alpha_t}\\,(1-\\bar\\alpha_{t-1})}{1-\\bar\\alpha_t}\\,x_t$ and $\\tilde\\beta_t = \\frac{1-\\bar\\alpha_{t-1}}{1-\\bar\\alpha_t}\\,\\beta_t$. The training in practice has a few notable realizations as suggested in the DDPM literature (Ho, Jain, and Abbeel 2020; Nichol and Dhariwal 2021). First, the training process can sample an arbitrary $t$ instead of going from $T$ to $1$. Also, a deep neural network is tasked with predicting the noise added to $x_0$ from the partially corrupted $x_t$, since we have $x_0 = \\frac{1}{\\sqrt{\\bar\\alpha_t}}\\left(x_t - \\sqrt{1-\\bar\\alpha_t}\\,\\epsilon(x_t)\\right)$. We follow the suggestion (Ho, Jain, and Abbeel 2020) that the network should predict $\\epsilon(x_t)$. Although it is suggested by (Nichol and Dhariwal 2021) that DDPM can be improved with a learned variance, our empirical findings suggest that the base DDPM model\u2019s performance is sufficiently robust, and gains from such modifications could be minimal. For the network architecture, we train a U-Net (Ronneberger, Fischer, and Brox 2015) to predict $\\epsilon(x_t)$, as shown in Figure 7. The overall network structure is a typical UNet similar to that of GC-DDPM (Gui et al. 2023). This UNet, taking $x_t$ as the input and predicting the noise $\\epsilon(x_t)$, consists of several blocks that first downsample and then upsample. These blocks are connected by skip-connections. Each block is composed of residual, attention, downsample and/or upsample layers. To allow control of the generation, we feed the reference image $r$ by concatenating it with $x_t$.
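The closed-form posterior above can be turned into a one-step denoising routine. The following is a minimal numpy sketch, not the paper's code: the linear beta schedule is an assumption (the paper does not state one here), and `predict_eps` is a stand-in for the trained UNet.

```python
import numpy as np

# Illustrative sketch of one reverse ('denoising') step built from the
# closed-form posterior q(x_{t-1} | x_t, x_0). All names are ours.
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed linear variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # bar(alpha)_t = prod_s alpha_s

def predict_eps(x_t, t):
    # Stand-in for the trained UNet epsilon-predictor.
    return np.zeros_like(x_t)

def reverse_step(x_t, t, rng):
    # Recover an x_0 estimate from the predicted noise, then sample from the
    # Gaussian posterior with mean mu_tilde and variance beta_tilde.
    eps = predict_eps(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
    ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
    mu = ((np.sqrt(ab_prev) * betas[t] / (1.0 - alpha_bars[t])) * x0_hat
          + (np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - alpha_bars[t])) * x_t)
    if t == 0:
        return mu
    beta_tilde = (1.0 - ab_prev) / (1.0 - alpha_bars[t]) * betas[t]
    return mu + np.sqrt(beta_tilde) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 32, 32))  # start from pure noise x_T
for t in reversed(range(T)):
    x = reverse_step(x, t, rng)
```

At $t = 0$ the variance $\tilde\beta_t$ vanishes, so the final step returns the posterior mean directly.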
Also, the timestep $t$, which indexes the noising of $x_0$, and $r$ are injected through embeddings. After training, we can sample images by iteratively reducing the noise predicted by the U-Net.

Experiments

Experiment Setup

Dataset. For training and inference, we use publicly available fonts since they provide a large set of glyphs with a consistent look. Concretely, we use a wide range of free fonts in both printed and calligraphy forms, detailed in Table 1. During training, we construct glyph pairs by looking at the shared glyphs between the two fonts.

Model and Training Details. Our network configuration is detailed as follows: the blocks in Figure 7 have 128, 128, 256, 256, 512, 512, 512, 512, 512, 256, 256, 128, 128 channels respectively in their residual layers. All embeddings utilized are uniformly set to 512 dimensions. The network accepts inputs with spatial dimensions of 128 by 128, which are halved in each downsample block, reaching a minimum dimension of 2 \u00d7 2 pixels at block D. For training, Noto Serif TC is employed as the reference font, with all other fonts serving as target fonts for the model to learn from. The training process runs for 3000 epochs and spans a total duration of one week on 16 A100 GPUs.

Table 1: Fonts we used for training and/or inference. We focus on two common types in CJK: typefaces for typesetting and calligraphy for artistic purposes. All fonts are open fonts that are free to use.

Style | # / Form (a) | Font
Serif (Ming/Song) | 43062 / P | Noto Serif TC
Gothic Typefaces | 43098 / P | Noto Sans TC
Regular Script (b) | 83534 / P | TW-Kai
Serif (Ming/Song) (b) | 83534 / P | TW-Sung
Clerical Script | 7349 / C | AoyagiReisho
Semi-cursive Script | 8865 / C | KouzanMouhitu
Semi-cursive Script | 7360 / C | KouzanGyousho
Cursive Script | 7741 / C | KouzanSousho
Serif (Ming/Song) (c) | 50217 / P | Noto Serif Tangut
Serif (Ming/Song) (d) | 22741 / P | NomNaTong

(a) P for Printed Form and C for Calligraphy Form. (b) CNS 11643 Standard. (c) Only for Tangut Script. (d) Contains CJK, including Chu Nom.
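The training realizations described earlier (sample an arbitrary timestep, corrupt $x_0$ via the closed form, regress the added noise) can be sketched in a few lines of numpy. This is illustrative only: `model` stands in for the reference-conditioned UNet, and the beta schedule is an assumption.

```python
import numpy as np

# Illustrative DDPM training step: sample arbitrary t (no T..1 sweep),
# corrupt x_0 using q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I),
# and regress the noise with a simple MSE. All names are ours.
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # assumed schedule
alpha_bars = np.cumprod(1.0 - betas)

def model(x_t, t, reference):
    # Stand-in for the UNet that takes x_t concatenated with the
    # reference glyph and predicts the added noise eps.
    return np.zeros_like(x_t)

def training_loss(x0, reference, rng):
    t = rng.integers(0, T)                       # arbitrary timestep
    eps = rng.standard_normal(x0.shape)          # true noise
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return np.mean((model(x_t, t, reference) - eps) ** 2)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1, 128, 128))          # target glyph bitmap
ref = rng.standard_normal((1, 128, 128))         # reference glyph bitmap
loss = training_loss(x0, ref, rng)
```

With the zero-output stand-in, the loss is simply the mean squared magnitude of the sampled noise, close to 1; a real predictor would drive it toward 0.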
Overview of Generation Capability

To demonstrate the overall generation capability of our proposed method, we present a matrix of generated results in Figure 8, featuring a wide range of fonts from Table 1. Our method applies to both common and rare characters, the latter not present in the training dataset. Specifically, we use Noto Serif TC, a Serif (Ming/Song) font, as the reference to generate characters in the desired fonts. These desired fonts include a wide range of styles such as (1) Noto Sans TC (Gothic Typefaces), (2) TW-Kai (Regular Script), (3) TW-Sung (Serif (Ming/Song)), along with calligraphy styles including (4) AoyagiReisho (Clerical Script), (5-6) KouzanMouhitu and KouzanGyousho (Semi-cursive Script), and (7) KouzanSousho (Cursive Script). The selection of common characters is based on statistics (Da 2010), while less common characters are randomly sampled from the Unicode block CJK Unified Ideographs (Unicode 2023), which contains the first batch of 20,992 CJK characters in Unicode. It is shown that our proposed method consistently generates high-quality characters across this diverse range of styles, which are coherent and recognizable to native speakers.

Figure 8: Our method generating CJK characters in a wide range of styles. The upper block shows very commonly used characters, while the lower block shows characters drawn randomly from a collection of less common ones. In each block, we show reference characters in gray and generated characters in black. Each row shows a different style our method can generate: respectively (1) Gothic typefaces, (2) Regular script, (3) Song/Ming in a different standard (CNS 11643), (4) Clerical script, (5-6) two different Semi-cursive scripts and (7) Cursive script. See Table 1 for font data.

Converting between Printed and Calligraphy Form

Here we explore the model\u2019s ability to generate printed form glyphs, focusing on the subtle distinctions found in fine-grained details. As shown in Figure 9, we demonstrate generating Gothic typefaces, Regular script and Song/Ming with different characters: the first two are the most common characters from statistics on Classical Chinese, the middle two are less common ones from the Unicode block CJK Unified Ideographs, and the last two are extremely rare ones sampled from the Unicode block CJK Unified Ideographs Extension B (Unicode 2023), which contains 42,720 extremely rare and historical characters. Our method is capable of accurately generating characters in a variety of styles, and works uniformly for a wide range of characters.

Figure 9: Generating printed form. The generated characters are in black, and the three rows show Gothic typefaces, Regular script, and Song/Ming in a different standard (CNS 11643) respectively. The first two characters are most common, the middle two less common and the final two extremely rare, as detailed in the main text.

Figure 10: Generating the same set of characters as in Figure 9 in (4) Clerical script, (5-6) two examples of Semi-cursive script and (7) Cursive script.

Furthermore, we examine our proposed method\u2019s capability to produce handwritten styles, a significantly more challenging task due to the considerable differences from typed fonts. In Figure 10 we showcase the generation of characters in Clerical script, Semi-cursive script and Cursive script. It is revealed that our method can adeptly produce characters in calligraphy form, which are markedly distinct from printed form. This success underscores the model\u2019s advanced understanding and flexibility, and makes it a potent tool for generating handwritten character styles for all CJK characters, a task not yet done by any human calligrapher.

Zero-shot Generation to Chinese Character-inspired Writing Systems

The cultural and historical influence of the Sinosphere has led to other Chinese Character-inspired writing systems beyond the core CJK (Chinese, Japanese and Korean).
Notably, one such system is Chu Nom (omniglot 2023a), consisting of complex and newly created characters that adhere to the Chinese Character system. Historically, Chu Nom was used alongside Chu Han (Chinese Characters) to write Vietnamese from the 13th to the 20th century. Thanks to significant efforts towards its revival (HNRCV Authors), we now have workable references for Chu Nom in the digital publishing era, including the NomNaTong font used in this paper. Another example is the Tangut script (omniglot 2023b), created by loosely modelling Chinese Characters and employed to write the now-extinct Tangut language from the 10th to the 15th century. A great challenge of such scripts is the extremely limited resources due to the lack of active users, as Vietnamese is nowadays written in a Latin-based alphabet and the Tangut language has no living speakers. The two fonts in Table 1 represent almost the entirety of free resources available for these scripts, and there is neither demand nor resources to create fonts in other styles for these writing systems. To bridge this gap, we apply our proposed method to the setting of zero-shot generation of new styles for these scripts. Concretely, our method generates Regular script based on Ming/Song while trained only with pairs of data in CJK.

Figure 11: Zero-shot generation of new styles for Chu Nom (upper block) and Tangut scripts (lower block). Gray characters are reference ones and black characters are generated by our method in a new style.

Remarkably, even though our proposed method has not seen these out-of-domain characters, it succeeds in producing high-quality results in a new style for both the Chu Nom and Tangut scripts.

Model Analysis

Comparison with GAN-based Model

Diffusion models have recently outperformed their GAN-based counterparts, which have long been the predominant approach in generative modeling, in many applications. Our work is a clear example of this paradigm shift.
As shown in Figure 13, our method generates characters with visually higher quality. Crucially, it also succeeds in capturing the more global structure of the desired style, a task at which prior models have failed.

Figure 13: Comparing our method with zi2zi (Tian 2018). Our method generated characters that are much closer to the target (compare the two \u201cDiff\u201d columns) and visually smoother (compare the two \u201cGenerated\u201d columns). Furthermore, the zoom-ins below show that our method successfully transfers to the more global structure of the target style, while zi2zi fails and instead copies the reference.

Style Interpolation

The design of the architecture in our method allows free interpolation between different styles through a weighted mixture of data embeddings. As we show in Figure 12, our method facilitates smooth transitions across different styles (printing or handwriting), and likewise for font weight from Ultra Thin to Bold, and the generated intermediate results are coherent and meaningful. This capability is particularly helpful for creating stylized fonts that blend various styles, offering a painless pipeline for all CJK characters.

Figure 12: Interpolation of the same character. Above: interpolating from Gothic typefaces to Regular script (upper block), then to Semi-cursive script (middle block), then to Cursive script (lower block). Lower: interpolating from Ultra Thin to Bold. In each block above, the upper row shows superficially mixing characters at the pixel level and the lower row is generated by our method.

Performance Study: Diffusion Steps vs. Quality

The diffusion model is trained and inferred with a discretization of time steps from the continuous timespace [0, 1] into T steps. Although we train the model with a discretization of T = 1000 steps, it is suggested (Lu et al. 2022; Xue et al. 2024) that effective inference can be achieved with significantly fewer steps. While we do not explicitly employ techniques to optimize for few-step sampling, exploring the minimum number of steps required for accurate generation remains valuable. As shown in Figure 14, five steps are sufficient and can be treated as a feasible approximation. This efficiency allows for generating a character in under one second on a T4 GPU, which is considered less powerful and more suited for inference tasks.

Figure 14: Inference using different numbers of steps for diffusion (2, 5, 10, 25, ..., 1000). Only in the most extreme case of two steps do we see a visible artifact. We also compare a wide range of steps by looking at the mean pixel-wise matching between them and the result from 1000 steps.

Vectorization of Generated Characters

We finally show the vectorization of generated characters in Figure 15 using the autotrace tool (Yamato et al. 2020). This process is necessary since our method produces bitmaps while a font designer would expect vectorized glyphs. The result is satisfactory. It is important to note that vectorization parameters are highly sensitive to specific fonts and must be carefully adjusted for each font.

Figure 15: Example of vectorization. On the left we show two columns: generated images and vectorized SVG files. On the right are the visualized SVG paths showing control commands, where solid grey lines are Bezier curves and dashed grey lines are pen moves. Painted using SVG Path Visualizer (Dutour 2020).

Conclusion

In this paper, we propose a diffusion-based method that generates glyphs in a wide range of target styles from a single reference glyph. Our experiments show that our model performs well across different styles, making it useful for both font design and artistic purposes.
Future work directions could include (1) deepening the integration into the font design pipeline, (2) extending to multiple-character typesetting and calligraphy, where the interaction between the characters could be modeled, and (3) more ambitiously, integrating into powerful text-to-image models to enable accurate label generation.

Acknowledgement

We thank Marco Raymond Cognetta and Chris Simpkins for their valuable feedback."
},
{
"url": "http://arxiv.org/abs/2404.08273v2",
"title": "Struggle with Adversarial Defense? Try Diffusion",
"abstract": "Adversarial attacks induce misclassification by introducing subtle\nperturbations. Recently, diffusion models are applied to the image classifiers\nto improve adversarial robustness through adversarial training or by purifying\nadversarial noise. However, diffusion-based adversarial training often\nencounters convergence challenges and high computational expenses.\nAdditionally, diffusion-based purification inevitably causes data shift and is\ndeemed susceptible to stronger adaptive attacks. To tackle these issues, we\npropose the Truth Maximization Diffusion Classifier (TMDC), a generative\nBayesian classifier that builds upon pre-trained diffusion models and the\nBayesian theorem. Unlike data-driven classifiers, TMDC, guided by Bayesian\nprinciples, utilizes the conditional likelihood from diffusion models to\ndetermine the class probabilities of input images, thereby insulating against\nthe influences of data shift and the limitations of adversarial training.\nMoreover, to enhance TMDC's resilience against more potent adversarial attacks,\nwe propose an optimization strategy for diffusion classifiers. This strategy\ninvolves post-training the diffusion model on perturbed datasets with\nground-truth labels as conditions, guiding the diffusion model to learn the\ndata distribution and maximizing the likelihood under the ground-truth labels.\nThe proposed method achieves state-of-the-art performance on the CIFAR10\ndataset against heavy white-box attacks and strong adaptive attacks.\nSpecifically, TMDC achieves robust accuracies of 82.81% against $l_{\\infty}$\nnorm-bounded perturbations and 86.05% against $l_{2}$ norm-bounded\nperturbations, respectively, with $\\epsilon=0.05$.",
"authors": "Yujie Li, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma",
"published": "2024-04-12",
"updated": "2024-04-18",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.CR"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "Since the inception of ImageNet [1] and its associated competitions, researchers have made significant strides in image classification tasks, particularly with deep neural networks achieving notable success in this domain. Previous endeavors have consistently deepened and broadened networks [2\u20135], employed residual structures [5, 6], and utilized transformer architectures [7\u20139]. These progressively refined models consistently establish new benchmarks across significant datasets, showcasing exceptional performance. [ACM MM, 2024, Melbourne, Australia. \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. https://doi.org/10.1145/nnnnnnn.nnnnnnn] However, these models are trained and evaluated on samples from natural datasets, rendering them susceptible to disruptions. Adversarial attacks adeptly introduce imperceptible perturbations into image data, leading to misclassification by neural networks and yielding wholly inaccurate outcomes. Consequently, adversarial attacks have emerged as a common evaluation method for assessing model robustness. Given the crucial role of image classification tasks in fields such as facial recognition [10, 11], medical health [12, 13], and remote sensing [14, 15], the defense against adversarial attacks emerges as a key security concern.
Presently, common defensive strategies include adversarial training and image denoising. Notably, the purification approach, which employs diffusion models for denoising, has exhibited promising outcomes. This technique entails utilizing a diffusion model for the generation of image samples through noise addition and subsequent denoising processes, intended for classification or adversarial training purposes. However, it is susceptible to high-intensity adaptive attacks, and the classification performance of the classifier on images post-purification remains suboptimal. We contend that a limiting factor constraining further augmentation of diffusion-based purification efficacy lies in the necessity for images processed by the diffusion model to undergo subsequent inference by discriminative classifiers; that is, in other words, the efficacy of the purification method is partly constrained by the classifier. The noise addition and denoising processes of the diffusion model may disrupt the data distribution of original images, which adheres to the data boundaries learned by the classifier, thereby impeding performance enhancement. Hence, one might inquire: why not utilize the diffusion model alone directly for image classification? Diffusion models represent a contemporary class of powerful image generation models, distinguished by their inference process comprising predominantly forward diffusion and backward denoising stages. In the forward process, the model systematically introduces Gaussian noise to the image, whereas in the backward process, it undertakes denoising of the perturbed data. Throughout the training phase, Gaussian noise parameters are parameterized utilizing the Evidence Lower Bound (ELBO) [16]. The diffusion model utilizes neural networks to predict the Gaussian noise added to the samples during the forward process and compute the loss against the ground truth.
Previous research has transformed Stable Diffusion, a conditional diffusion model, into a generative classifier known as the Diffusion Classifier [17], leveraging the Bayesian theorem and computing Monte Carlo estimates for the noise predictions of each class. Li et al. [17] scrutinized its zero-shot performance as a classifier, whereas our study, differently, delves into the adversarial robustness of the Diffusion Classifier. During the inference process of the Diffusion Classifier, each class label undergoes transformation into prompts that are fed into the model, directing it to infer parameterized noise predictions and compute losses against the ground truth. Subsequently, unbiased Monte Carlo estimates of the expected losses for each class are derived, and the final classification outcome is obtained through the Bayesian theorem. Conceptually, this inference process entails comparing the relative magnitudes of model inference losses under different prompts. Hence, theoretically, it can be posited that adversarial attacks, which involve perturbations constrained by norms added to original images, would not significantly impact the inference outcomes of the Diffusion Classifier. Consequently, we propose the assertion that the Diffusion Classifier exhibits adversarial robustness, a proposition substantiated by empirical evidence. Furthermore, we introduce the Truth Maximization optimization method. This approach involves training the model with adversarially perturbed input data and conditioning it on text prompts composed of ground-truth labels. The objective is to minimize the prediction loss of parameterized noise in the diffusion process, thereby optimizing model parameters, which enables the model to learn the ability to accurately model image data into the correct categories under adversarial perturbations.
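The decision rule described above (per-class Monte Carlo estimates of the noise-prediction loss, converted into a posterior via the Bayesian theorem) can be sketched as follows. This is a toy illustration, not the authors' code: `eps_loss` stands in for one run of the conditional diffusion model, and a uniform class prior is assumed.

```python
import numpy as np

# Illustrative Diffusion Classifier decision rule: estimate the expected
# noise-prediction loss under each class prompt, then set the posterior
# p(y|x) proportional to exp(-loss) (uniform prior). Names are ours.

def eps_loss(x, class_idx, t, rng):
    # Stand-in: pretend class 3 models this image best (lowest loss).
    return 1.0 + 0.2 * abs(class_idx - 3) + 0.01 * rng.standard_normal()

def classify(x, num_classes, num_samples, rng):
    losses = np.zeros(num_classes)
    for c in range(num_classes):
        # Unbiased Monte Carlo estimate of the expected loss for class c,
        # averaging over randomly sampled timesteps and noise draws.
        losses[c] = np.mean([eps_loss(x, c, rng.integers(0, 1000), rng)
                             for _ in range(num_samples)])
    logits = -losses                      # smaller loss -> larger posterior
    post = np.exp(logits - logits.max())  # stable softmax over classes
    post /= post.sum()
    return int(np.argmax(post)), post

rng = np.random.default_rng(0)
pred, post = classify(None, num_classes=10, num_samples=16, rng=rng)
# pred == 3 for this stand-in, since class 3 has the smallest expected loss
```

The key point matches the text: the prediction depends only on the relative magnitudes of the per-prompt losses, which is why norm-bounded input perturbations should not easily flip the outcome.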
The optimization scheme aims to maximize the posterior probability values corresponding to the correct class under Bayesian inference, thereby mitigating significant disruptions in the relative posterior probabilities under attack. The classifier trained using this methodology is denoted as the Truth Maximized Diffusion Classifier (TMDC). Our study focuses on investigating the adversarial robustness of the Diffusion Classifier, a generative classifier based on the diffusion model. We propose the Truth Maximization approach to bolster the Diffusion Classifier\u2019s robustness against adversarial attacks through training. We conducted comparative analyses between the Diffusion Classifier and TMDC against other commonly utilized neural network classifiers, assessing their resilience under strong adaptive combined attacks and classical white-box attacks. Experimental findings demonstrate the exceptional adversarial robustness of the Diffusion Classifier relative to alternative classifiers. Furthermore, the efficacy of the Truth Maximization optimization method is confirmed. The optimized classifier, TMDC, achieves remarkable testing accuracies of 82.81% ($l_{\\infty}$) and 86.05% ($l_{2}$) on the CIFAR-10 dataset under robust AutoAttack [18] settings with parameters set to $\\epsilon=0.05$ and version set to \u201cplus\u201d, thereby attaining the current state-of-the-art performance level. The code for our work is available on GitHub [19].",
"main_content": "Since the breakthrough success of AlexNet in 2012 [2], deep neural networks (DNNs) have become pivotal in the realm of computer vision research and application. Subsequent advancements, exemplified by models such as VGG [20], ResNet [6], ViT [9], and their numerous variants, have significantly advanced the state-of-the-art in image classification tasks across prominent datasets. However, despite their outstanding performance in conventional tasks, these models are highly vulnerable to adversarial attacks \u2013 techniques devised to mislead deep learning models by introducing imperceptible perturbations to natural data. To assess the robustness of these models, numerous adversarial attack methods have been proposed by previous researchers under both black-box and white-box paradigms [18, 21\u201325], with the aim of effectively compromising neural networks. Common strategies to bolster models against such attacks include adversarial training [23], which involves incorporating adversarial perturbations into the training data to improve the model\u2019s performance under adversarial conditions. Additionally, methods such as adversarial purification [26, 27] have recently gained widespread attention. This approach, focusing on data rather than the model, mitigates adversarial attacks by adding noise into adversarial samples and subsequently denoising them. Nonetheless, such processes may introduce gradient obfuscation issues [28].

2.2 Generative Classifiers

Diverging from discriminative methods that directly delineate data boundaries for image classification, generative approaches, akin to Naive Bayes, first learn the distribution characteristics of image data and then address classification tasks through maximum likelihood estimation modeling. Models such as Naive Bayes [29], Energy-Based Models (EBM) [30, 31], and the Diffusion Classifier [17] are constructed under the generative paradigm.
Taking Naive Bayes as an example, it models the input image $x$ and label $y$ to derive the data likelihood $p(x \mid y)$, thereby accomplishing classification through maximum likelihood estimation to derive $p(y \mid x)$:

$$p(y_i \mid x) = \frac{p(y_i)\, p(x \mid y_i)}{\sum_j p(y_j)\, p(x \mid y_j)} \quad (1)$$

The Joint Energy Model (JEM) [32], utilizing EBM, reinterprets the standard discriminative classifier $p(y \mid x)$ through the joint distribution $p(x, y)$, thereby computing $p(x)$ and $p(x \mid y)$ to resolve classification tasks. The Diffusion Classifier [17] simulates the data distribution during the noise-addition and denoising processes, modeling $p(y \mid x)$ for image classification by maximizing the Evidence Lower Bound (ELBO) of the log-likelihood [16]. Previous research has demonstrated the zero-shot classification ability of the Diffusion Classifier, while our work further showcases its robustness against adversarial attacks.

2.3 Fine-tuning of Stable Diffusion
As a powerful and widely acclaimed text-to-image generation model, the Stable Diffusion series [7, 33, 34] is often employed directly for tasks such as image classification and image generation. Moreover, fine-tuning the model parameters on specific image-text pairs for downstream tasks can yield enhanced performance. However, full-parameter fine-tuning of Stable Diffusion poses challenges such as computational resource constraints, time overhead, and potential catastrophic forgetting.
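Eq. 1 can be illustrated with a minimal sketch; the helper name and the toy numbers below are ours, not from the paper.

```python
def bayes_posterior(priors, likelihoods):
    """Posterior p(y_i | x) from class priors p(y_i) and class
    likelihoods p(x | y_i), following Eq. 1."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two classes with equal priors: the class with the larger likelihood wins.
post = bayes_posterior([0.5, 0.5], [0.2, 0.6])
```

With equal priors the prior factors cancel, which is exactly the simplification the Diffusion Classifier exploits later in Eq. 5.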
In the domain of large language models, the LoRA method [35–38] proposed for Transformer architectures [7] is suitable for application to Stable Diffusion. The LoRA method acknowledges that only a small subset of model parameters plays a significant role when targeting a specific task. Consequently, the number of training parameters can be notably diminished by substituting the high-dimensional parameter matrix with a low-dimensional decomposition. If the pre-trained parameter matrix has size $d \times d$, it is replaced with two matrices of size $d \times r$ and $r \times d$ ($d \gg r$), as illustrated in Figure 1.

Struggle with Adversarial Defense? Try Diffusion. ACM MM, 2024, Melbourne, Australia.

Figure 1: Simplified illustration of LoRA. Low-dimensional matrices approximate high-dimensional ones; the pre-trained weights are frozen, and only the LoRA tensors are trained. The memory required during training approaches that of the model's inference process. This configuration reduces both training time and memory overhead, while effectively mitigating catastrophic forgetting.

During LoRA fine-tuning, the pre-trained parameters are frozen while the LoRA module undergoes training. Upon completion of training, the LoRA parameters are seamlessly merged with the original parameters, thereby substantially reducing the number of parameters trained during fine-tuning without altering the original parameters. Fine-tuning Stable Diffusion with the LoRA method can drastically reduce training time and significantly alleviate memory requirements.
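As a back-of-the-envelope sketch of why the factorization shrinks the trainable parameter count, consider the counts for a full $d \times d$ update versus its rank-$r$ LoRA factors (the layer size $d = 1024$ and rank $r = 8$ below are hypothetical, not the paper's configuration):

```python
def lora_param_counts(d, r):
    """Parameters of a full d x d weight update vs. its rank-r LoRA
    factorization into a (d x r) and an (r x d) matrix (d >> r)."""
    full = d * d
    low_rank = d * r + r * d
    return full, low_rank

# Hypothetical layer size d=1024 with LoRA rank r=8.
full, low_rank = lora_param_counts(1024, 8)
```

Here the trainable count drops from roughly a million to a few thousand per layer, which is why training memory approaches inference memory.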
3 METHODS
We adopt the method outlined in the Diffusion Classifier [17] to compute class-conditional estimates of images using a pre-trained Stable Diffusion model [34, 39], thereby constructing an image classifier based on the diffusion model for the task of image classification under adversarial perturbations. We then propose an approach to enhance the adversarial robustness of the Diffusion Classifier. §3.1 provides an overview of the diffusion model, §3.2 outlines the approach of leveraging the diffusion model for image classification tasks, and §3.3 elaborates on improving its adversarial robustness.

3.1 Diffusion Models
Diffusion models [40] represent a class of discrete-time generative models based on Markov chains. The overall process entails a forward noising pass and a backward denoising pass. Given an input $x_0$, the model performs $T$ rounds of noise addition. Each round, denoted as $q(x_t \mid x_{t-1})$, follows a Gaussian distribution, ultimately yielding $x_T \sim N(0, I)$. During the denoising process, the model learns the noise added in each round to restore the image to $x_0$, optionally utilizing low-dimensional text embeddings $y$ for guidance. The denoising step can be represented as $p_\theta(x_{t-1} \mid x_t, y)$. The entire process can be represented as follows:

$$p_\theta(x_0, y) = p(x_T, y) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t, y) \quad (2)$$

Due to the presence of integrals, directly maximizing $p(x_0)$ poses significant challenges. Therefore, the objective is transformed into maximizing the ELBO of the log-likelihood [16].
$$\log p_\theta(x, y) \ge -\mathbb{E}_{\epsilon, t}\left[ w_t \left\| \epsilon_\theta(x_t, t, y) - \epsilon \right\|_2^2 \right] + C \quad (3)$$

We consider $x_{t_i} = \sqrt{\bar{\alpha}_{t_i}}\, x + \sqrt{1 - \bar{\alpha}_{t_i}}\, \epsilon_i$. The term $\mathbb{E}_{\epsilon, t}\left[ w_t \| \epsilon_\theta(x_t, t, y) - \epsilon \|_2^2 \right]$ is referred to as the diffusion loss in prior studies [41], and $\epsilon$ follows the standard normal distribution $N(0, I)$. Previous work has demonstrated that $C$ is typically a negligible value, which can be disregarded in practical computations [17, 42]; in practice, researchers [17, 40] often drop the weighting $w_t$ to enhance model performance. Thus, we set $w_t = 1$. In this transformation, we parameterize the Gaussian noise added at each time step, enabling the neural network to predict the noise at each step during the backward denoising process. The Stable Diffusion 2.0 model we adopt allows for the optional addition of a text prompt, whose low-dimensional embeddings, obtained after text encoding, can serve as conditional guidance for the denoising network.

3.2 Diffusion Classifier
In contemporary computer vision literature, prevalent neural network architectures such as Convolutional Neural Networks (CNNs) [2, 43] and Transformer-based architectures [9, 44] typically adopt discriminative approaches for visual classification tasks: they directly learn the boundaries between different categories of image data. Conversely, the diffusion model falls within the realm of generative models. When employed as a classifier, it naturally necessitates the utilization of Bayes' theorem.
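The forward noising step and the $w_t = 1$ diffusion loss can be sketched on plain Python lists; `add_noise` and `diffusion_loss` are our illustrative names for the two operations, not library calls from the paper.

```python
import math

def add_noise(x0, alpha_bar_t, eps):
    """Forward noising: x_t = sqrt(alpha_bar_t)*x0 + sqrt(1-alpha_bar_t)*eps."""
    a, b = math.sqrt(alpha_bar_t), math.sqrt(1.0 - alpha_bar_t)
    return [a * xi + b * ei for xi, ei in zip(x0, eps)]

def diffusion_loss(eps_pred, eps):
    """Squared error ||eps_pred - eps||^2, i.e. the diffusion loss with w_t = 1."""
    return sum((p - e) ** 2 for p, e in zip(eps_pred, eps))

x_t = add_noise([1.0, -0.5], 0.9, [0.3, 0.1])
perfect = diffusion_loss([0.3, 0.1], [0.3, 0.1])  # exact prediction -> zero loss
```

A perfect noise prediction drives the loss to zero, which is exactly what the classifier exploits: the true class label yields the smallest prediction error.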
Specifically, it involves calculating the posterior probability given labels $y$ and the modeling of the data $p(x \mid y)$:

$$p_\theta(y_i \mid x) = \frac{p(y_i)\, p_\theta(x \mid y_i)}{\sum_j p(y_j)\, p_\theta(x \mid y_j)} \quad (4)$$

In the classification process, posterior probabilities are computed separately for each class label. Therefore, $p(y_i)$ always equals $\frac{1}{C}$ (where $C$ represents the total number of classes), allowing $p(y_i)$ to be eliminated during calculation:

$$p_\theta(y_i \mid x) = \frac{p_\theta(x \mid y_i)}{\sum_j p_\theta(x \mid y_j)} \quad (5)$$

Considering the computational difficulty of $p_\theta(x \mid y_i)$, we substitute it with $\log p_\theta(x \mid y_i)$. Building upon the derivation of the Evidence Lower Bound, we combine Eq. 5 with Eq. 3 to deduce the formula for the posterior probability of each class:

$$p_\theta(y_i \mid x) = \frac{\exp\left\{ -\mathbb{E}_{t, \epsilon}\left[ \| \epsilon - \epsilon_\theta(x_t, y_i) \|^2 \right] \right\}}{\sum_j \exp\left\{ -\mathbb{E}_{t, \epsilon}\left[ \| \epsilon - \epsilon_\theta(x_t, y_j) \|^2 \right] \right\}} \quad (6)$$

ACM MM, 2024, Melbourne, Australia. Anonymous Authors.

Figure 2: Overview of the Inference Process of the Diffusion Classifier.
Perturbed images are fed into the diffusion model for both forward noising and backward denoising, with the guiding textual prompt also input to the model. The model computes the posterior probability for each class label using Bayes' theorem, and the maximum posterior probability corresponds to the inference result of the classifier.

The objective of the inference process can thus be transformed into selecting the class that minimizes the average error between the noise inferred by the diffusion model at each sampling point and the ground-truth value. Leveraging the diffusion model, we can compute $\epsilon$ for each $t_i$ (with the default setting $i \in [1, \dots, 1000]$). Consequently, we can derive unbiased Monte Carlo estimates of the expected value for each class, thus yielding the diffusion loss:

$$\operatorname{mean}_i \left\| \epsilon_i - \epsilon_\theta\!\left( \sqrt{\bar{\alpha}_{t_i}}\, x + \sqrt{1 - \bar{\alpha}_{t_i}}\, \epsilon_i,\; y \right) \right\|^2 \quad (7)$$

Combining this with the aforementioned derivations, as depicted in Figure 2, we construct the generative classification model from the diffusion model, building upon the work by Li et al. [17]. That work demonstrated the remarkable zero-shot performance of the Diffusion Classifier in open-domain classification scenarios without requiring training. In contrast, our study shifts the focus to its adversarial robustness, utilizing Stable Diffusion 2.0. We contend that it exhibits superior resilience against adversarial perturbations in images, compared to other neural networks, even without training.
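The decision rule of Eq. 6 and Eq. 7 can be illustrated with a toy sketch: a softmax over negative per-class mean errors gives the posterior, and the prediction is the class with the minimum error. The label names and error values below are hypothetical.

```python
import math

def diffusion_classify(mean_errors):
    """Eq. 6 as a decision rule: softmax over negative per-class mean
    diffusion errors (Monte Carlo estimates from Eq. 7) gives the posterior;
    the prediction is the class with minimum mean error."""
    weights = {y: math.exp(-e) for y, e in mean_errors.items()}
    total = sum(weights.values())
    posterior = {y: w / total for y, w in weights.items()}
    return min(mean_errors, key=mean_errors.get), posterior

# Hypothetical mean errors for three candidate labels.
pred, post = diffusion_classify({"cat": 0.10, "dog": 0.35, "plane": 0.90})
```

Note that the argmin of the mean error and the argmax of the posterior coincide, since the softmax is monotone in the negated error.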
3.3 Robust Truth Maximization
After conducting comparative experiments under various attacks, we have demonstrated the adversarial robustness of the Diffusion Classifier. Furthermore, we delve into strategies to enhance its robustness, aiming to contribute more to the research on the robustness of classification models. To enhance the classifier's accuracy, according to Eq. 6, the model should be trained to minimize its diffusion loss, $\mathbb{E}_{\epsilon, t}\left[ w_t \| \epsilon_\theta(x_t, t, y) - \epsilon \|_2^2 \right]$, when provided with the ground-truth class label as input. This entails shifting the model's backward denoising predictions, guided by the true labels, towards the ground-truth noise values. To enhance the robustness of the diffusion model against adversarial attacks, we draw inspiration from the traditional adversarial training employed for vision classifiers [23]. While generative models cannot directly model the data boundaries between different classes during adversarial-sample training, optimizing the model by inputting adversarial samples along with their ground-truth labels and minimizing the diffusion loss can improve the model's capability to model samples perturbed by adversarial attacks. Following Eq. 6 and Eq. 7, we define the training loss as:

$$Loss = \frac{1}{T} \sum_{t=0}^{T-1} \left[ \left\| \epsilon_\theta(t, y_{true}) - \epsilon(t) \right\|^2 \right] \quad (8)$$

Our work utilizes pre-trained Stable Diffusion 2.0 with approximately 354 million parameters. Performing full-parameter training would incur significant memory and time overheads, potentially compromising the pre-trained model's image modeling capabilities. Thus, we employ the LoRA fine-tuning technique to mitigate this issue.
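As a scalar toy version of the training loss in Eq. 8 (per-step noise tensors replaced by single numbers, and the function name is ours):

```python
def truth_max_loss(eps_pred, eps_true):
    """Eq. 8: mean over T time steps of the squared error between the noise
    predicted under the ground-truth label and the injected noise."""
    T = len(eps_true)
    return sum((p - e) ** 2 for p, e in zip(eps_pred, eps_true)) / T

# One mispredicted step out of four contributes 0.2^2 / 4 = 0.01.
loss = truth_max_loss([0.1, 0.2, 0.0, 0.4], [0.1, 0.0, 0.0, 0.4])
```

Driving this quantity down for adversarial inputs paired with their true labels is what "Truth Maximization" amounts to: it maximizes the (ELBO of the) likelihood of the perturbed sample under the correct class.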
By employing a decomposition method that approximates high-dimensional parameter matrices with low-dimensional matrices, we reduce the memory requirements for training to the level of model inference. The trained LoRA module is then seamlessly merged with the pre-trained parameters to maintain the original modeling capabilities of the pre-trained model. During the training process, we input augmented samples $x$ from the training set along with their correct labels $y$ into Stable Diffusion. A pre-trained scheduler is employed for noise injection; the model predicts the noise at each time step, and we calculate the loss and minimize it. We refer to the model obtained through this approach as the Truth Maximized Diffusion Classifier (TMDC). For a detailed outline of the classifier's training and inference process, please refer to Algorithm 1.

Algorithm 1 Truth Maximized Diffusion Classifier (TMDC)
Notation: $X$: dataset; $N$: data batch; $x$: image; $y$: ground-truth label; $\epsilon$: model prediction; $\tau$: learning rate; $T$: time steps; $W$: weights of the diffusion model; $L$: list of data classes (car, truck, horse, ..., plane).

Model Training
1: for $N \in X$ do
2:   $x, y \leftarrow N$
3:   for $t$ in $T$ do
4:     $\epsilon(t) \leftarrow scheduler(x, t)$
5:   end for
6:   for $t$ in $T$ do
7:     $\epsilon_\theta(t, y) \leftarrow model\_predict(x, y, t)$
8:   end for
9:   $Loss \leftarrow \frac{1}{T} \sum_{t=0}^{T-1} \left[ \| \epsilon_\theta(t, y_{true}) - \epsilon(t) \|^2 \right]$
10:  $g \leftarrow \nabla Loss$
11:  $W \leftarrow W - \tau g$
12: end for
13: return $W$

Model Inference
1: for $N \in X$ do
2:   for $y \in L$ do
3:     $LossList[y] \leftarrow list()$
4:     for $t$ in $T$ do
5:       $\epsilon(t) \leftarrow scheduler(x, t)$
6:       $\epsilon_\theta(t, y) \leftarrow model\_predict(x, y, t)$
7:       $LossList[y].append\left( \| \epsilon_\theta(t, y) - \epsilon(t) \|^2 \right)$
8:     end for
9:     $LossList[y] \leftarrow mean(LossList[y])$
10:  end for
11:  $result \leftarrow \arg\min_{y \in L} LossList[y]$
12: end for
13: return $result$

4 EXPERIMENTS
We conducted a series of rigorous experiments, employing various black-box and white-box attack methods to assess the adversarial robustness of both the Diffusion Classifier and TMDC. Furthermore, we compared their performance with popular neural networks in the field of computer vision. §4.1 elucidates the detailed experimental setup and training specifics of the models. §4.2 showcases the results of the robustness study of the Diffusion Classifier under several classical white-box attacks. §4.3 presents the model's performance under Auto Attack, a widely recognized combined black-box and white-box attack method. Lastly, §4.4 entails the ablation study of the TMDC method.

4.1 Experiment Settings
Dataset: Considering the characteristics of the dataset and the time overhead incurred by the attack algorithms and model training, we opt for CIFAR10 [45] for our experiments. To assess the adversarial robustness of the Diffusion Classifier and TMDC, inspired by the method of utilizing a subset of data for evaluation as proposed in DiffPure [26], and to eliminate testing randomness, we select 1024 data points from the 10,000-item CIFAR10 test set for evaluation. Moreover, for the training of TMDC, we optimize Stable Diffusion 2.0 on the CIFAR10 training set.
Practical Implementation Setup: A naive implementation of the inference procedure in Algorithm 1 computes over all time steps for every class in the category list, which imposes a heavy computational burden. Inspired by the upper confidence bound algorithm [45], computation can be saved by prematurely discarding class labels whose diffusion loss clearly fails to meet the classification requirement. For CIFAR10, we adhere to the setup proposed by Li et al. [17]: we initially compute losses for all labels over 50 time steps, discard the 5 labels with the highest losses, and proceed with computation over 500 time steps for the remaining labels, thereby obtaining the final classification results.

Training Setup: We trained the diffusion model on a single A100 (80GB) GPU with a batch size of 4. We employed the AdamW optimizer with a learning rate of 1e-6, beta parameters of (0.9, 0.999), weight decay of 1e-2, and epsilon of 1e-8. Optimization was performed for 3,000 steps on the CIFAR10 training set, using a constant-with-warmup learning rate scheduler with 100 warmup steps. For the final experimental evaluation, we selected the checkpoint after 200 steps of optimization, a configuration validated in the ablation study of §4.4.

4.2 White-box Attack Robustness
In this section, we employ two widely used white-box attack algorithms to introduce adversarial perturbations into the test data, thereby evaluating the adversarial robustness of the Diffusion Classifier (§4.2.1). Additionally, we subject TMDC to attacks of the same intensity, while the remaining models undergo adversarial training for comparison, aiming to assess the effectiveness of our model optimization against the widely used adversarial training of discriminative classifiers (§4.2.2).
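The staged label elimination described in the implementation setup above can be sketched as follows; `staged_elimination` and the toy `loss_fn` are illustrative stand-ins under our own naming, not the paper's code.

```python
def staged_elimination(loss_fn, labels, n_drop=5, coarse_steps=50, fine_steps=500):
    """Staged label elimination: score every label with a cheap coarse pass,
    drop the n_drop labels with the highest loss, then rescore the survivors
    with a larger time-step budget. loss_fn(label, steps) stands in for the
    mean diffusion loss estimated over `steps` time steps."""
    coarse = {y: loss_fn(y, coarse_steps) for y in labels}
    survivors = sorted(coarse, key=coarse.get)[: len(labels) - n_drop]
    fine = {y: loss_fn(y, fine_steps) for y in survivors}
    return min(fine, key=fine.get)

# Toy loss where label 3 is the true class (lowest loss at any budget).
pred = staged_elimination(lambda y, s: abs(y - 3) + 1.0 / s, list(range(10)))
```

The coarse pass spends 50 steps on each of 10 labels; the fine pass spends 500 steps on only 5 survivors, so the obviously wrong classes never consume the full budget.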
Adversarial Attacks: This experiment employs two white-box attack algorithms, FGSM [22] and PGD [23]. We use a ResNet50 model pre-trained on CIFAR10 as the attack generator to introduce perturbations into the test data. The FGSM algorithm, in contrast to the gradient descent used in neural network training, adds a perturbation of magnitude $\epsilon$ along the direction of the gradient, constrained by the $\ell_\infty$ norm, to maximize the loss function and induce misclassification. The PGD attack is an improved version of FGSM that performs multiple iterations of the single-step attack under the $\ell_\infty$ norm to achieve better attack effectiveness. In our experiments, we set $\epsilon$ to 0.05 for both FGSM and PGD, and the number of PGD iterations to 40.

4.2.1 Comparison of White-box Adversarial Robustness. We extracted 1024 samples from the CIFAR10 dataset and introduced adversarial perturbations using FGSM and PGD. We then utilized the fast staged label-elimination algorithm to let the classifier infer predicted labels, and calculated the model's accuracy under adversarial attack against the ground-truth labels, for comparison with other popular neural networks trained on CIFAR10. The experimental results are presented in Table 1. Under the FGSM attack, the accuracy of ResNet50 dropped from 90.51% to 39.77%, Vit_B/16 from 98.10% to 23.69%, and WideResNet50 from 98.05% to 22.40%, all experiencing decreases of over 50%; Vit_B/16 and WideResNet50 even dropped by over 75%. Conversely, the untrained Diffusion Classifier achieved an accuracy of 89.44% on clean data, dropping to 50.17% under the FGSM attack, a significantly smaller decrease than the other models.
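The two attacks described above can be sketched in a few lines on plain lists. This is a minimal illustration of the sign-of-gradient step and its projected iteration, assuming a toy `grad_fn`; it is not the attack implementation used in the experiments.

```python
def fgsm_step(x, grad, eps):
    """One l_inf FGSM step: move eps along the sign of the loss gradient."""
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]

def pgd(x0, grad_fn, eps=0.05, step=0.01, iters=40):
    """PGD: iterate small FGSM steps, projecting back into the l_inf ball
    of radius eps around the clean input x0 after every step."""
    x = list(x0)
    for _ in range(iters):
        x = fgsm_step(x, grad_fn(x), step)
        x = [min(max(xi, ci - eps), ci + eps) for xi, ci in zip(x, x0)]
    return x

# Toy gradient that always points upward: PGD saturates at the eps boundary.
x_adv = pgd([0.0, 0.0], lambda x: [1.0, 1.0])
```

With a constant upward gradient, the iterate hits the $\ell_\infty$ constraint after five steps and the projection keeps it pinned there, matching the intuition that PGD explores the whole $\epsilon$-ball rather than taking one coarse step.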
Under the PGD attack algorithm with $\epsilon$ set to 0.05 and 40 iterations, the accuracy of all other models dropped to 0.0%, whereas that of the Diffusion Classifier only decreased to 42.30%. These experimental results demonstrate the outstanding robustness of the Diffusion Classifier against white-box adversarial attacks when compared to other neural networks, even in an untrained state.

Table 1: Comparison of White-box Adversarial Robustness. ResNet50, Vit_B/16, and WideResNet50 were all trained on the CIFAR10 dataset, and then subjected to robustness testing on test data with adversarial perturbations. In contrast, the Diffusion Classifier was directly tested on the attacked test set.

baselines                     Clean    FGSM($\epsilon$=0.05)   PGD($\epsilon$=0.05, iter=40)
ResNet50 [6]                  90.51%   39.77%                  0.0%
Vit_B/16 [9]                  98.10%   23.69%                  0.0%
WideResNet50 [5]              98.05%   22.40%                  0.0%
Diffusion Classifier (OURS)   89.44%   50.17%                  42.30%

Table 2: Comparison between Truth Maximization and Adversarial Training. ResNet50, Vit_B/16, and WideResNet50 all underwent multiple rounds of adversarial training on data augmented with the PGD algorithm; training was halted once the model's classification performance stabilized, after which robustness testing was conducted using the test set. In contrast, the Diffusion Classifier was subjected to robustness testing after optimization through Truth Maximization on the same augmented data.

Baselines      PGD($\epsilon$=0.05, iter=40)
ResNet50       45.39%
Vit_B/16       39.72%
WideResNet50   45.77%
TMDC (OURS)    70.02%

4.2.2 Comparison between Truth Maximization and Adversarial Training. We trained Stable Diffusion using data augmented with PGD adversarial perturbations, resulting in the model TMDC. Meanwhile, the other models underwent adversarial training using the PGD algorithm [23].
Then we conducted a comparative study of the robustness of each model on the test set under PGD attacks. The experimental results are presented in Table 2. After undergoing adversarial training, the accuracy of ResNet50 under PGD attack increased from its original 0.0% to 45.39%, while Vit_B/16 rose to 39.72% and WideResNet50 to 45.77%. This demonstrates that adversarial training can effectively enhance the adversarial robustness of widely used discriminative classifiers. Meanwhile, TMDC achieved an accuracy of 70.02% under the same adversarial attacks, significantly outperforming the commonly used adversarial training methods in enhancing model robustness. Thus, just as discriminative classifiers can conveniently improve robustness through adversarial training, the Diffusion Classifier, as a generative classifier, has an effective counterpart in the Truth Maximization optimization method.

4.3 Auto Attack Robustness
In this section, we employ the Auto Attack method to evaluate the adversarial robustness of both the Diffusion Classifier and TMDC under combined attacks. For rigorous conclusions, we compare them with discriminative classifiers and additionally introduce the generative classifier JEM for comparative experiments. Furthermore, we incorporate DiffPure, another widely recognized approach for combating adversarial attacks, into our experiments.

Adversarial Attack: Auto Attack [18] is a combined adversarial attack method that encompasses both black-box and white-box attacks. It improves upon PGD by employing the APGD algorithm, which automatically adjusts the step size: it moves rapidly with a larger step size and then gradually reduces the step size to maximize the objective function locally. If step-size halving is detected, it restarts at the local maximum, thereby mounting more effective attacks against neural networks.
APGD comes in different versions depending on the target loss function, and Auto Attack combines various versions of APGD with the Square attack (black-box) and the FAB attack (white-box), forming a combination of black-box and white-box attacks. In this experiment, we set the version of Auto Attack to “plus” (a combination of all algorithm types) and use both the $\ell_2$ and $\ell_\infty$ norms to constrain the perturbation. Stable Diffusion 2.0 is optimized using the Truth Maximization method on data augmented with Auto Attack. Subsequently, experiments are conducted on the test set augmented with the same attacks to assess its adversarial robustness against Auto Attack. The remaining comparative approaches are subjected to attacks using the same algorithm, with all groups utilizing adversarial samples generated by a ResNet50 model pre-trained on CIFAR10.

The experimental results are presented in Table 3. Under the $\ell_\infty$ norm-constrained Auto Attack, the accuracy of WideResNet50 and Vit_B/16 on the test set plummeted to 0.0%, while that of ResNet50 dropped to 0.5%. However, after purifying the perturbations through DiffPure, the accuracy of ResNet50 reached 57.94%. Moreover, JEM achieved an accuracy of 10.13% under Auto Attack. In contrast, the Diffusion Classifier exhibited excellent robustness against this combination of black-box and white-box attacks, achieving an accuracy of 79.52% without any training. After optimization using the Truth Maximization method, TMDC's accuracy further increased to 82.81%. Meanwhile, under the $\ell_2$ norm-constrained Auto Attack, the accuracy of WideResNet50 on the test set was 23.98%, while Vit_B/16 achieved 31.42% and ResNet50 reached 37.52%. After purifying the perturbations through DiffPure, the accuracy of ResNet50 rose to 75.34%. JEM attained an accuracy of 26.56% under this norm of Auto Attack. The untrained Diffusion Classifier achieved an accuracy of 81.18%, while TMDC reached 86.05%.

Table 3: Comparison of Auto Attack Robustness. Under both $\ell_\infty$ and $\ell_2$ norm constraints, Auto Attack was conducted with epsilon set to 0.05 and seed set to 2024. In the experiments involving DiffPure, the samples processed through DiffPure were re-fed into the ResNet50 pre-trained on CIFAR10 for testing; this ResNet50 shares the same weight values as the models in the comparative experiment.

baselines                     Auto Attack ($\ell_\infty$ norm)   Auto Attack ($\ell_2$ norm)
DiffPure [26]                 57.94%                             75.34%
JEM [32]                      10.13%                             26.56%
WideResNet50 [5]              0.0%                               23.98%
Vit_B/16 [9]                  0.0%                               31.42%
ResNet50 [6]                  0.50%                              37.52%
Diffusion Classifier (OURS)   79.52%                             81.18%
TMDC (OURS)                   82.81%                             86.05%

Under both $\ell_\infty$ and $\ell_2$ norm-constrained Auto Attack scenarios, the classifiers constructed from Stable Diffusion 2.0 demonstrated superior adversarial robustness compared to the other comparative models. Furthermore, compared to the strategy of purifying the data with DiffPure and re-feeding it into ResNet50, the Diffusion Classifier also exhibited higher classification performance.

4.4 Ablation Study
In order to validate the effectiveness of the Truth Maximization optimization approach employed for the Diffusion Classifier, as well as the correctness of the checkpoint selection settings during training, we conducted an ablation study. §4.4.1 presents our investigation into Truth Maximization, while §4.4.2 outlines our experiments concerning checkpoint selection.

4.4.1 Ablation on Truth Maximization.
In this experiment, we employ PGD ($iter = 40$), Auto Attack ($\ell_\infty$), and Auto Attack ($\ell_2$) as three adversarial attack methods. For each attack, we randomly select 5 sets of different seeds to sample data from the test set. We evaluate the accuracy of the Diffusion Classifier and TMDC in performing classification under these attacks, and report the averaged test results. The experimental outcomes are illustrated in Figure 3.

Figure 3: Comparison between Diffusion Classifier and TMDC. The PGD attack is conducted with $\epsilon$ set to 0.05 and 40 iterations, in accordance with §4.2. For Auto Attack, the version is uniformly designated as “plus”, with $\epsilon$ set to 0.05 and the seed initialized with five sets of distinct random numbers.

Under all three types of adversarial attacks, the Truth Maximization approach consistently yields effective optimization of the Diffusion Classifier. Specifically, under the PGD attack, models optimized through Truth Maximization demonstrate a substantial enhancement in average testing accuracy, rising from 42.32% to 70.08%, a relative increase of 65.59%; the model's adversarial robustness under this attack type thus improves significantly. Moreover, under the two norm-constrained Auto Attack scenarios, the accuracy rises from 79.11% ($\ell_\infty$) and 81.19% ($\ell_2$) to 82.79% and 86.13%, respectively, showcasing notable optimization under combined attacks. Further corroborated by the experimental findings in §4.2.2, TMDC achieves a higher accuracy of 70.02% under PGD attack compared to other classification models using adversarial training. These experimental outcomes collectively underscore the efficacy of the Truth Maximization methodology in enhancing the adversarial robustness of the Diffusion Classifier.
Furthermore, in contrast to adversarial training, applying Truth Maximization to diffusion models exhibits superior performance. ACM MM, 2024, Melbourne, Australia Anonymous Authors
Figure 4: Study on Checkpoint Selection. For Auto Attack, the version is uniformly set to “plus”, with ε = 0.05, and the seed is fixed at 2024. Throughout the Truth Maximization training process, a “constant with warmup” learning rate scheduler is employed, with the learning rate set to 1e-6 and the warm-up steps configured to be 100. Both sets of experiments undergo optimization for 3000 steps.
4.4.2 Ablation on Checkpoint Selection. In this experiment, we conducted trials under the two norm-constrained Auto Attack scenarios. The test data for both sets of experiments were selected with the same random seed, and all experiments were optimized with the Truth Maximization methodology for 3000 steps. During this optimization, checkpoints were saved every 100 steps for the first 500 steps, and every 1000 steps thereafter. The optimizer and learning rate scheduler settings remained consistent with those outlined in §4.1, ensuring the validity and coherence of the experimental setup. The experimental results are shown in Figure 4. Truth Maximization enhances the classification capability of the Diffusion Classifier by minimizing the diffusion loss, used as the objective function, when training the model on perturbed images paired with the ground-truth labels of the training set. This strengthens the diffusion model's ability to model the enhanced images conditioned on the correct labels; however, it does not directly improve the model's capacity to model the boundaries between different classes. Therefore, we must consider the impact of the number of optimization steps on the classifier's ultimate performance.
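The classification rule underlying the Diffusion Classifier discussed above — pick the label whose conditional denoising loss, estimated by Monte Carlo averaging over noise and timesteps, is smallest — can be sketched as follows. This is a minimal toy sketch, not the authors' implementation: `loss_fn` is a hypothetical stand-in for the Stable Diffusion noise-prediction error on a test image under a class-conditioned prompt.

```python
import numpy as np

def diffusion_classify(loss_fn, labels, n_samples=64, seed=0):
    """Approximate argmin over labels c of E_{t, eps}[ denoising loss | c ]
    with a Monte Carlo average of per-label conditional losses."""
    rng = np.random.default_rng(seed)
    avg_loss = {
        c: float(np.mean([loss_fn(c, rng) for _ in range(n_samples)]))
        for c in labels
    }
    # Predicted class = the label whose ELBO estimate (average loss) is lowest.
    return min(avg_loss, key=avg_loss.get), avg_loss

# Toy stand-in: the true class "cat" yields a systematically lower loss.
def toy_loss(label, rng):
    base = 0.1 if label == "cat" else 0.5
    return base + 0.01 * rng.standard_normal()

pred, losses = diffusion_classify(toy_loss, ["cat", "dog", "ship"])
```

In the real classifier the per-sample loss is the UNet's noise-prediction MSE at a sampled timestep; the toy `toy_loss` merely mimics its statistics.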
As depicted in Figure 4, when checkpoints are saved every 100 steps, the model's accuracy on the test set peaks around the 200th-step checkpoint, reaching 86.05% and 82.81% respectively, and gradually decreases thereafter. Furthermore, in the experimental group under l∞-norm constraints, the model's accuracy at the 3000th step is even lower than before optimization. After 200 steps of training, the model overfits the training data, weakening classification performance because the diffusion-loss boundaries guided by different class labels become blurred on the test data. Consequently, after Truth Maximization training, we select the model from the 200th-step checkpoint for subsequent testing to obtain a relatively fair measure of classifier performance.
5 DISCUSSION
Collaboration with Purification: Generating adversarial samples for image classification or adversarial training with a diffusion model is subject to uncertainty stemming from shifts in the image data distribution, rendering it vulnerable to high-intensity adaptive attacks. This vulnerability is partially attributable to the performance limits of the classifiers applied after the purified images are generated. However, the comparative experiments demonstrate that purification-based methods consistently outperform the other baseline approaches. Therefore, anchored in the purification paradigm, developing a diffusion-model-based classifier that leverages the statistical uncertainty of the data and uses the class posterior probabilities for classification holds promise for bolstering adversarial resilience. Decoupling from Training: Despite achieving excellent adversarial robustness, our proposed TMDC method remains constrained by the necessity of training on adversarial samples, requiring a dedicated training set for the diffusion model and thereby incurring costs in computational resources and time.
To mitigate these challenges, decoupling from training, segmenting the inference process of the diffusion model into multiple stages, and optimizing the sampling strategy offer fertile ground for exploration. Such an approach would not only enhance the model's classification performance under adversarial attacks but also improve inference efficiency, thereby conserving computational resources and time.
6 CONCLUSION
In light of the widespread vulnerability of commonly used visual neural network classifiers to adversarial attacks, we conducted thorough assessments and found that the Diffusion Classifier, derived from a robust generative model, demonstrates excellent adversarial robustness. Using the diffusion model as a conditional density estimator, we modeled image data guided by text prompts through a combination of the Evidence Lower Bound (ELBO) and unbiased Monte Carlo estimation, and leveraged Bayes' theorem to construct the classifier. Additionally, we propose a model optimization approach termed Truth Maximization, which, through training guided by ground-truth labels, further enhances the adversarial robustness of the pre-trained Stable Diffusion-based generative classifier. Models trained with this approach are denoted Truth Maximization Diffusion Classifier (TMDC). Through empirical evaluation against classical white-box attacks and widely employed strong combined adaptive attacks such as Auto Attack, we demonstrated the exceptional adversarial robustness of the Diffusion Classifier even in the absence of explicit training. Moreover, the optimized TMDC model achieved state-of-the-art performance against strong white-box attacks and combined adaptive attacks on the CIFAR-10 dataset."
},
{
"url": "http://arxiv.org/abs/2404.15449v1",
"title": "ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning",
"abstract": "The rapid development of diffusion models has triggered diverse applications.\nIdentity-preserving text-to-image generation (ID-T2I) particularly has received\nsignificant attention due to its wide range of application scenarios like AI\nportrait and advertising. While existing ID-T2I methods have demonstrated\nimpressive results, several key challenges remain: (1) It is hard to maintain\nthe identity characteristics of reference portraits accurately, (2) The\ngenerated images lack aesthetic appeal especially while enforcing identity\nretention, and (3) There is a limitation that cannot be compatible with\nLoRA-based and Adapter-based methods simultaneously. To address these issues,\nwe present \\textbf{ID-Aligner}, a general feedback learning framework to\nenhance ID-T2I performance. To resolve identity features lost, we introduce\nidentity consistency reward fine-tuning to utilize the feedback from face\ndetection and recognition models to improve generated identity preservation.\nFurthermore, we propose identity aesthetic reward fine-tuning leveraging\nrewards from human-annotated preference data and automatically constructed\nfeedback on character structure generation to provide aesthetic tuning signals.\nThanks to its universal feedback fine-tuning framework, our method can be\nreadily applied to both LoRA and Adapter models, achieving consistent\nperformance gains. Extensive experiments on SD1.5 and SDXL diffusion models\nvalidate the effectiveness of our approach. \\textbf{Project Page:\n\\url{https://idaligner.github.io/}}",
"authors": "Weifeng Chen, Jiacheng Zhang, Jie Wu, Hefeng Wu, Xuefeng Xiao, Liang Lin",
"published": "2024-04-23",
"updated": "2024-04-23",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "In recent years, the field of image synthesis has experienced a remarkable revolution with the emergence of diffusion models. These powerful generative diffusion models, exemplified by significant milestones such as DALLE-2 [23] and Imagen [26], have completely reshaped the landscape of text-to-image (T2I) generation. Moreover, the development of these models has also given rise to many related application tasks, such as image editing [13], controllable image generation [21, 42], and so on. Among these, identity-preserving text-to-image generation has received widespread attention due to its broad application scenarios, such as e-commerce advertising, AI portraits, image animation, and virtual try-on. It aims to generate new images of the identity depicted in the reference images under the guidance of a textual prompt. There are numerous advanced research works on this task. Early works resort to low-rank adaptation (LoRA) [7] to adapt the pre-trained text-to-image diffusion model to a few given reference portrait images and achieve recontextualization of the particular identity. Recently, IP-Adapter [39] achieved impressive personalized portrait generation by inserting an adapter model into the attention module of the diffusion model and fine-tuning it on a high-quality large-scale dataset of facial images. However, despite these achievements, these methods still fall short in several aspects: (i) They cannot achieve accurate identity preservation. Existing methods typically employ a mean squared error (MSE) loss during training, which is unable to explicitly learn image generation that faithfully captures the characteristics of the reference portrait, as shown in Fig.6. (ii) The generated image tends to lack appeal, especially when enforcing identity consistency. For example, the state-of-the-art method InstantID [32] introduces an extra IdentityNet to retain the information of the reference portrait.
While achieving high fidelity, such a strict constraint is also prone to generating rigid images or characters with distorted or unnatural limbs and poses, as depicted in Fig.4. (iii) Existing methods rely on either LoRA [7] or an Adapter [39] to achieve ID-T2I generation, and a general method compatible with both paradigms is lacking. In this work, drawing inspiration from recent advancements in feedback learning for diffusion models [1, 35, 37], we present ID-Aligner, a framework that boosts identity image generation performance with specially designed reward models via feedback learning. Specifically, we introduce identity consistency reward tuning to boost identity preservation. It employs a face detection model along with a face recognition model as the reward model to measure identity consistency and provide specialized feedback on it, which enables superior identity consistency during the recontextualization of the portrait in the reference images. In addition, to enhance the aesthetic quality of identity generation, we further introduce identity aesthetic reward tuning, which exploits a tailored reward model trained with human-annotated preference feedback data and automatically constructed character-structure feedback data to steer the model toward aesthetically appealing generation. Our method is very flexible and can be applied not only to the Adapter-based model but also to the LoRA-based model, achieving a consistent performance boost in both identity preservation and aesthetic quality. We also observe a significant acceleration effect with the LoRA-based model, facilitating its wide application. Extensive experiments demonstrate the superiority of our method over existing methods such as IP-Adapter [39], PhotoMaker [11], and InstantID [32]. 
Our contributions are summarized as follows:
• We present ID-Aligner, a general feedback learning framework to improve the performance of identity-preserving text-to-image generation in both identity consistency and aesthetic quality. To the best of our knowledge, this is the first work to address this task through feedback learning.
• We introduce a universal method that can be applied to both the LoRA-based model and the Adapter-based model. Theoretically, our approach can boost all existing training-based identity-preserving text-to-image generation methods.
• Extensive experiments have been conducted with various existing methods such as IP-Adapter, PhotoMaker, and InstantID, validating the effectiveness of our method in improving identity consistency and aesthetic quality.",
"main_content": "Text-to-Image Diffusion Models. Recently, diffusion models [6, 30] have showcased remarkable capabilities in the realm of text-to-image (T2I) generation. Groundbreaking works like Imagen [26], GLIDE [15], and DALL-E2 [23] have emerged, revolutionizing text-driven image synthesis and reshaping the landscape of the T2I task. Notably, the LDM [24] model, also known as the stable diffusion model, has transformed the diffusion process from pixel space to latent space, significantly improving the efficiency of training and inference. Building upon this innovation, the Stable Diffusion XL (SDXL) [20] model has further enhanced training strategies and achieved unprecedented image generation quality through parameter scaling. The development of these models has also triggered various applications, including image editing [2, 5, 13, 31], controllable image generation [9, 14, 42], etc. Identity-Preserving Image Generation. ID-preserving image generation [3, 11, 29, 32, 38, 39] has emerged as a significant application of text-to-image generation, capturing widespread attention due to its diverse range of application scenarios. The primary objective of this application is to generate novel images of a particular identity, given one or several reference images, guided by textual prompts. Unlike the conventional text-to-image task, it is crucial not only to ensure the performance of prompt comprehension and generation fidelity but also to maintain consistency of the ID information between the newly generated images and the reference images. ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning ACM MM, 2024, Melbourne, Australia
Figure 2 ((a) ID-Aligner for the LoRA model; (b) ID-Aligner for the Adapter model): The overview of the proposed ID-Aligner. Our method exploits face detection and a face encoder to achieve identity preservation via feedback learning. We further incorporate an aesthetic reward model to improve the visual appeal of the generation results. Our method is a general framework that can be applied to both LoRA and Adapter methods.
Few-shot methods [7, 12, 25, 29, 33] attempted to fine-tune the diffusion model on several reference images to learn the ID features. However, this approach requires specific fine-tuning for each character, which limits its flexibility. PhotoMaker [11] achieves ID preservation by fine-tuning part of the Transformer layers in the image encoder and merging the class and image embeddings. IP-Adapter-FaceID [39] uses the face ID embedding from a face recognition model instead of the CLIP [22] image embedding to maintain ID consistency. Similarly, InstantID [32] uses a FaceEncoder to extract a semantic face embedding and injects the ID information via decoupled cross-attention. 
In addition, an IdentityNet is designed to introduce additional spatial control. In contrast to these approaches, our method relies on feedback learning, eliminating the need for intricate network structures. It offers exceptional versatility and effectiveness, seamlessly adapting to various existing methods while significantly enhancing ID preservation. Human Feedback for Diffusion Models. Inspired by the success of reinforcement learning with human feedback (RLHF) in the field of Large Language Models (LLMs) [16–18], researchers [1, 34, 35] have tried to introduce feedback-based learning into the field of text-to-image generation. Among these, DDPO [1] employs reinforcement learning to align diffusion models with the supervision provided by additional reward models. Different from DDPO, HPS [34, 35] exploits a reward model trained on collected preference data to filter the preferred data and then achieves feedback learning in a supervised fine-tuning manner. Recently, ImageReward [37] proposed the ReFL framework for preference fine-tuning, which scores denoised images within a predetermined range of denoising steps using a pre-trained reward model, then backpropagates and updates the diffusion model parameters. Recently, UniFL [40] proposed a unified framework to enhance diffusion models via feedback learning. Inspired by these, in this paper we propose a reward feedback learning algorithm that focuses on optimizing ID-T2I models.
3 METHOD
We introduce ID-Aligner, a pioneering approach that utilizes feedback learning to enhance the performance of identity-preserving (ID) generation. The outline of our method is shown in Fig. 2. We resolve ID-preserving generation via a reward feedback learning paradigm that enhances the consistency with a reference face image and the aesthetics of the generated images. 
3.1 Text-to-Image Diffusion Model
Text-to-image diffusion models generate high-quality images from textual prompts by gradually denoising Gaussian noise into the desired data samples. During pre-training, a sampled image $x$ is first processed by a pre-trained VAE [4, 10] encoder to derive its latent representation $z$. Subsequently, random noise is injected into the latent representation through a forward diffusion process, following a predefined schedule $\{\beta_t\}^T$. This process can be formulated as $z_t = \sqrt{\bar\alpha_t}\, z + \sqrt{1-\bar\alpha_t}\,\epsilon$, where $\epsilon \sim \mathcal{N}(0, 1)$ is random noise with the same dimension as $z$, $\bar\alpha_t = \prod_{s=1}^{t} \alpha_s$ and $\alpha_t = 1 - \beta_t$. To achieve the denoising process, a UNet $\epsilon_\theta$ is trained to predict the noise added in the forward diffusion process, conditioned on the noised latent and the text prompt $c$. Formally, the optimization objective of the UNet is:
$L(\theta) = E_{z,\epsilon,c,t}\left[ \| \epsilon - \epsilon_\theta(\sqrt{\bar\alpha_t}\, z + \sqrt{1-\bar\alpha_t}\,\epsilon, c, t) \|_2^2 \right]$. (1)
3.2 Identity Reward
Identity Consistency Reward: Given the reference image $x_0^{ref}$ and the generated image $x'_0$, our objective is to assess the ID similarity of the particular portrait. To achieve this, we first employ the face detection model FaceDet to locate the faces in both images. Based on the outputs of the face detection model, we crop the corresponding face regions and feed them into the encoder of a face recognition model FaceEnc.
This allows us to obtain the encoded face embeddings for the reference face $E_{ref}$ and the generated face $E_{gen}$, i.e.,
$E_{ref} = \mathrm{FaceEnc}(\mathrm{FaceDet}(x_0^{ref}))$, (2)
$E_{gen} = \mathrm{FaceEnc}(\mathrm{FaceDet}(x'_0))$. (3)
Subsequently, we calculate the cosine similarity between these two face embeddings, which serves as the measure of ID retention during the generation process. We then use this similarity as the reward signal for the feedback tuning process:
$\mathcal{R}_{id\_sim}(x'_0, x_0^{ref}) = \mathrm{cose\_sim}(E_{gen}, E_{ref})$. (4)
Identity Aesthetic Reward: In addition to the identity consistency reward, we introduce an identity aesthetic reward model focusing on appeal and quality. It combines human preference on visual appeal with a reasonable character structure. First, we train a reward model on a self-collected, human-annotated preference dataset so that it scores images in line with human preference on appeal, as shown on the right of Fig.3. We employ the pre-trained model provided by ImageReward [37] and fine-tune it with the following loss:
$L_\theta = -E_{(c, x_i, x_j)\sim\mathcal{D}}\left[ \log( \sigma( f_\theta(x_i, c) - f_\theta(x_j, c) ) ) \right]$. (5)
Figure 3: The illustration of the aesthetic feedback data construction. We take an “AI + Expert” approach to generate the feedback data. Left: the automatic data construction for the feedback on character structure generation, where we resort to ControlNet [42] to generate the structure-distorted negative samples. Right: human-annotated preference data over images.
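For a single (preferred, rejected) pair, the preference objective in Eq. (5) reduces to a negative log-sigmoid of the score margin. A minimal sketch with toy scalar scores (not the ImageReward training code):

```python
import math

def preference_loss(score_preferred, score_rejected):
    """Eq. (5) for one comparison pair: -log sigmoid(f(x_i, c) - f(x_j, c)).
    Driving this loss down widens the reward margin between the preferred
    image x_i and the rejected image x_j under the same prompt c."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the two scores tie, the loss is log 2; it shrinks monotonically as the reward model ranks the preferred image further above the rejected one.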
This loss function is based on comparison pairs of images, where each pair contains two images ($x_i$ and $x_j$) and a prompt $c$, and $f_\theta(x, c)$ represents the reward score for image $x$ given prompt $c$. We therefore term $f_\theta$ the appeal reward $\mathcal{R}_{appeal}$. In addition, we design a structure reward model that can distinguish distorted limbs/bodies from natural ones. To train a model that can assess whether the structure of an image is reasonable, we collect a set of text-image pairs containing positive and negative samples. Specifically, we use images from LAION [28] filtered with a human detector. We then use a pose estimation model to generate a pose, which can be treated as an undistorted human structure. We then randomly twist the pose and utilize ControlNet [42] to generate distorted bodies as negative samples, as shown on the left of Fig.3. Once the positive and negative pairs are available, we similarly train the structure reward model with the same loss as Eq. 5 and term it $\mathcal{R}_{struct}$. The identity aesthetic reward model is then defined as
$\mathcal{R}_{id\_aes}(x, c) = \mathcal{R}_{appeal}(x, c) + \mathcal{R}_{struct}(x, c)$. (6)
3.3 ID-Preserving Feedback Learning
In the feedback learning phase, we begin with an input prompt $c$ and a randomly initialized latent variable $x_T$. The latent variable is then progressively denoised until reaching a randomly selected timestep $t$.
At this point, the denoised image $x'_0$ is directly predicted from $x_t$. The reward model obtained in the previous phase is applied to this denoised image, generating the expected preference score. This preference score enables fine-tuning the diffusion model to align more closely with our ID-Reward, which reflects identity consistency and aesthetic preferences:
$L_{id\_sim} = E_{c\sim p(c)} E_{x'_0\sim p(x'_0|c)}\left[ 1 - \mathcal{R}_{id\_sim}(x'_0, x_0^{ref}) \right]$, (7)
$L_{id\_aes} = E_{c\sim p(c)} E_{x'_0\sim p(x'_0|c)}\left[ -\mathcal{R}_{id\_aes}(x'_0, c) \right]$. (8)
Algorithm 1 ID-Preserving Reward Feedback Learning for the Adapter model
1: Dataset: identity-preservation text-image dataset D = {(txt_1, ref_face_1), ..., (txt_n, ref_face_n)}
2: Input: LDM with pre-trained adapter parameters w_0, face detection model FaceDet, encoder of a face recognition model FaceEnc.
3: Initialization: number of noise scheduler time steps T, add-noise timestep T_a, denoising time step t.
4: for data point (txt_i, ref_face_i) ∈ D do
5:   x_T ← RandNoise // Sample Gaussian noise
6:   t ← Rand(T_1, T_2) // Pick a random denoising time step t ∈ [T_1, T_2]
7:   for j = T, ..., t+1 do
8:     no grad: x_{j-1} ← LDM_{w_i}{x_j | (txt_i, ref_face_i)}
9:   end for
10:  with grad: x_{t-1} ← LDM_{w_i}{x_t | (txt_i, ref_face_i)}
11:  x'_0 ← x_{t-1} // Predict the original latent via the noise scheduler
12:  img'_i ← VaeDec(x'_0) // From latent to image
13:  a'_i ← FaceDet(img'_i) // Detect the face area in the denoised image
14:  emb'_i, emb_i ← FaceEnc(a'_i), FaceEnc(ref_face_i) // Extract the embeddings of the generated face and the reference face
15:  L_{id_reward} ← L_{id_sim}(emb'_i, emb_i) + L_{id_aes}(img'_i) // ID reward loss
16:  w_{i+1} ← w_i // Update adapter w_i
17: end for
Finally, we use the weighted sum of these two reward objectives to fine-tune the diffusion model for ID-preserving image generation:
$L_{id\_reward} = \alpha_1 L_{id\_sim} + \alpha_2 L_{id\_aes}$, (9)
where $\alpha_1$ and $\alpha_2$ are the balancing coefficients.
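Putting Eqs. (4) and (7)-(9) together, the scalar fed back to the model is a weighted sum of a cosine-similarity identity term and an aesthetic term. A minimal sketch of that arithmetic, where the embeddings are assumed to come from the FaceDet/FaceEnc pipeline and `aes_reward` from the aesthetic reward model; the default weights follow the $\alpha_1 = 0.2$, $\alpha_2 = 0.001$ reported in the experiments:

```python
import numpy as np

def id_sim_reward(emb_gen, emb_ref):
    """Eq. (4): cosine similarity between generated/reference face embeddings."""
    num = float(np.dot(emb_gen, emb_ref))
    den = float(np.linalg.norm(emb_gen) * np.linalg.norm(emb_ref))
    return num / den

def id_reward_loss(emb_gen, emb_ref, aes_reward, a1=0.2, a2=0.001):
    """Eqs. (7)-(9) for one sample: L = a1 * (1 - R_id_sim) + a2 * (-R_id_aes)."""
    return a1 * (1.0 - id_sim_reward(emb_gen, emb_ref)) + a2 * (-aes_reward)
```

A perfectly matched face embedding drives the similarity term to zero, so the remaining gradient pressure comes only from the aesthetic reward.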
Our ID-Aligner is a universal method that can be applied to both the LoRA-based model and the Adapter-based model for ID-preserving generation, as described below. ID-Aligner for the Adapter Model. IP-Adapter is a pluggable model for the diffusion model that enables a face image to serve as identity control. We optimize this model with reward feedback learning, as shown in Fig.2(a). We follow the same spirit as ReFL [37] and utilize a tailored reward model to provide a special feedback signal on identity consistency. Specifically, given a reference image of a particular portrait and a textual control prompt $(x_0^{ref}, p)$, we first iteratively denoise a randomly initialized latent without gradient until a random time step $T_d \in [DT_1, DT_2]$, yielding $x_{T_d}$. Then, a further denoising step is executed with gradient to obtain $x_{T_d-1}$, from which the predicted denoised image $x'_0$ is directly obtained. Afterward, a reward model scores $x'_0$ and steers the model in the particular direction according to the reward model's guidance. Here, we use the weighted sum of the similarity reward $L_{id\_sim}$ and the aesthetic reward $L_{id\_aes}$ to obtain the loss $L_{id\_reward}$ in Eq. 9 and optimize the model. The complete process is summarized in Algorithm 1. ID-Aligner for the LoRA Model. LoRA is an efficient way to achieve identity-preserving generation.
Given one or several reference images of the particular portrait, it quickly adapts the pre-trained LDM to the specified identity by fine-tuning only some pluggable extra low-rank parameter matrices of the network. However, few-shot learning with a diffusion model to generate a new person highly depends on the provided dataset, which may require faces from different aspects or environments to avoid over-fitting. In this paper, we propose a more efficient way of ID LoRA training by applying the aforementioned ID reward. As shown in Fig.2(b), we train the LoRA with the weighted sum of the denoising loss $L_{id\_lora}$ in Eq.1 and the ID-reward loss $L_{id\_reward}$ in Eq.9. The $L_{id\_lora}$ enables the model to learn the face structure, while $L_{id\_sim}$ guides the model to learn the identity information. The extra $L_{id\_aes}$ is applied to improve the overall aesthetics of the images. The complete process is summarized in Algorithm 2, which differs slightly from the Adapter version in its loss design.
Algorithm 2 ID-Preserving Reward Feedback Learning for the LoRA model
1: Dataset: several personalized text-image pairs D = {(txt_1, img_1), ..., (txt_n, img_n)}
2: Input: LDM with LoRA parameters w_0, face detection model FaceDet, encoder of a face recognition model FaceEnc.
3: Initialization: number of noise scheduler time steps T, add-noise timestep T_a, denoising time step t.
4: emb_ref ← Average(FaceEnc(FaceDet(img_i))), i ∈ D // Extract ID embeddings of the personalized images
5: for data point (txt_i, img_i) ∈ D do
6:   x_T ← RandNoise // Sample Gaussian noise
7:   x_l ← AddNoise(x_0) // Add noise into the latent x_0 according to Eq.1
     // Denoising
8:   with grad: x_{l-1} ← LDM_{w_i}{x_l | (txt_i)}
     // ID-Reward loop
9:   t ← Rand(T_1, T_2) // Pick a random denoising time step t ∈ [T_1, T_2]
10:  for j = T, ..., t+1 do
11:    no grad: x_{j-1} ← LDM_{w_i}{x_j | (txt_i)}
12:  end for
13:  with grad: x_{t-1} ← LDM_{w_i}{x_t | (txt_i)}
14:  x'_0 ← x_{t-1} // Predict the original latent via the noise scheduler
15:  img'_i ← VaeDec(x'_0) // From latent to image
16:  a'_i ← FaceDet(img'_i) // Detect the face area in the denoised image
17:  emb'_i ← FaceEnc(a'_i) // Extract the embedding of the generated face
18:  L_{id_reward} ← L_{id_sim}(emb'_i, emb_ref) + L_{id_aes}(img'_i) + L_{mse}(x_{l-1}, x_0) // ID reward loss + denoising MSE loss
19:  w_{i+1} ← w_i // Update LoRA w_i
20: end for
4 EXPERIMENTS
Training Dataset: We carefully curated a portrait dataset specifically for ID-preserving generation training.
Specifically, we employed the MTCNN face detector [41] to filter images from the LAION dataset [28]. This process finally yielded over 200,000 images containing faces, which were used for both LoRA and Adapter training. For adapter-style training, we cropped the face from each image and used it as the reference identity.
ACM MM, 2024, Melbourne, Australia. Chen and Zhang, et al.
Figure 4: Visual comparison of different adapter-based identity-conditional generation methods based on SD15 and SDXL ((a) SD15: FastComposer, IP-Adapter, Ours; (b) SDXL: IP-Adapter, PhotoMaker, InstantID, Ours).
To enhance the model's generalization capability, we further collect a high-quality prompt set from JourneyDB [19] for identity-conditioned generation. To make sure the prompts are compatible with the ID-preserving generation task, we manually filter for prompts containing identity-referring words summarized by ChatGPT [16], such as 'the girl' and 'the man', which yielded a final set of prompts describing humans.
Training & Inference Details: For the Adapter model, we take stable-diffusion-v1-5 [24] and SDXL [20] as the base text-to-image generation models, and take the widely recognized IP-Adapter [8] as the baseline model. We initialize the adapter weights with the pre-trained weights of IP-Adapter-faceid_plusv2 [8]. During training, only the adapter parameters are updated, ensuring compatibility with other models of the same structure. The model is trained on 512x512 (1024x1024 for SDXL) resolution images with a batch size of 32. The learning rate is 10^-6, and the model is trained for 10,000 iterations in total. Following the practice of [37], the guidance scale is set to 1.0 during feedback learning. α_1 is set to 0.2 and α_2 to 0.001.
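The weighted feedback objective described above can be sketched as follows. This is a minimal numpy sketch, not the released implementation: the embeddings are toy vectors, the aesthetic term is a stand-in scalar score, and only the α weights follow the settings reported in the text.

```python
import numpy as np

def id_sim_loss(emb_gen: np.ndarray, emb_ref: np.ndarray) -> float:
    """Identity-similarity loss: 1 - cosine similarity of face embeddings."""
    cos = float(np.dot(emb_gen, emb_ref) /
                (np.linalg.norm(emb_gen) * np.linalg.norm(emb_ref)))
    return 1.0 - cos

def feedback_loss(mse: float, emb_gen: np.ndarray, emb_ref: np.ndarray,
                  aes_score: float, alpha1: float = 0.2,
                  alpha2: float = 0.001) -> float:
    """Denoising MSE plus the ID reward terms, weighted by alpha1/alpha2.

    `aes_score` stands in for an aesthetic reward model's output in [0, 1];
    (1 - aes_score) is treated as the aesthetic loss term.
    """
    return mse + alpha1 * id_sim_loss(emb_gen, emb_ref) + alpha2 * (1.0 - aes_score)
```

With identical generated and reference embeddings the identity term vanishes and only the denoising MSE (plus the small aesthetic penalty) remains, which is the regime the reward loop pushes the model toward.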
As for the LoRA model, we collect 5 images for each identity. We use bucketed adaptive resolution during LoRA training with a batch size of 1. The learning rate is 5x10^-5 for the LoRA layers of the UNet and 1x10^-4 for the LoRA layers of the text encoder. The LoRA training is based on stable-diffusion-v1-5 [24] and SDXL [20] and runs for 2,000 iterations in total. For both LoRA and Adapter training, we use MTCNN [41] as the face detection model and FaceNet [27] as the face recognition model for face embedding extraction. During inference, the DDIM scheduler [30] is employed, sampling 20 steps for generation. The guidance scale is set to 7.0, and the fusion strength of the adapter module is fixed at 1.0.
Evaluation settings: We evaluate the identity-preserving generation ability of our method with prompts from the validation set of FastComposer [36]. These prompts encompass four distinct types: action behavior, style words, clothing accessories, and environment. These diverse prompts facilitate a comprehensive evaluation of the model's capacity to retain the identity throughout the generation process. For the LoRA model, we collect 5 images for each of 6 characters with black, white, and yellow skin tones. A separate LoRA is trained for each character, and the performance is evaluated individually. In the case of the adapter model, we carefully gather a collection of about 20 portraits from various sources, including existing face datasets and the internet. This collection represents a rich spectrum of identities, spanning genders (men and women), ages (adults and children), and diverse skin tones (black, white, and yellow). These images serve as conditional reference images during the evaluation process.
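The reference identity embedding used for the similarity reward is obtained by averaging the face embeddings of the images collected for an identity. A minimal numpy sketch of this averaging step, with the detector/encoder calls replaced by precomputed toy embedding vectors:

```python
import numpy as np

def average_reference_embedding(face_embeddings: list) -> np.ndarray:
    """Average per-image face embeddings into one reference embedding.

    Each entry stands in for FaceEnc(FaceDet(img_i)). The mean is
    L2-normalized so that cosine similarity against it reduces to a
    dot product with any unit-norm generated-face embedding.
    """
    mean = np.mean(np.stack(face_embeddings), axis=0)
    return mean / np.linalg.norm(mean)

# Two toy embeddings of the same identity under different conditions.
ref = average_reference_embedding([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

Normalizing once here means the per-step reward only needs a dot product, which keeps the feedback loop cheap.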
Following [11], we report the face similarity score (similarity between the generated face and the reference face), the DINO score (similarity between the perceptual representations of the generated image and the reference image), CLIP-I (semantic similarity between the generated image and the reference images), and CLIP-T (semantic similarity between the text prompt and the generated images) to evaluate the performance of our method.
ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning. ACM MM, 2024, Melbourne, Australia.
Table 1: Quantitative comparison with the state-of-the-art methods. The best results are highlighted in bold, while the second-best results are underlined.
Architecture | Model | Face Sim.↑ | DINO↑ | CLIP-I↑ | LAION-Aes↑ | CLIP-T↑
SD1.5 | FastComposer | 0.486 | 0.498 | 0.616 | 5.44 | 24.0
SD1.5 | IP-Adapter | 0.739 | 0.586 | 0.684 | 5.54 | 22.0
SD1.5 | Ours | 0.800 | 0.606 | 0.727 | 5.59 | 20.6
SDXL | IP-Adapter | 0.512 | 0.460 | 0.541 | 5.85 | 24.5
SDXL | InstantID | 0.783 | 0.445 | 0.606 | 5.58 | 22.8
SDXL | PhotoMaker | 0.520 | 0.497 | 0.641 | 5.54 | 23.6
SDXL | Ours | 0.619 | 0.499 | 0.602 | 5.88 | 23.7
Figure 5: Visual results of the LoRA ID-Aligner method based on SDXL (prompts: "a * wearing pink glasses", "a * in police outfit", "a * baking cookies", "a * in the snow"; columns: LoRA vs. LoRA + ID-Reward).
4.1 Experimental results
4.1.1 Qualitative Comparison. We conduct qualitative experiments of ID-Aligner for the IP-Adapter and LoRA models.
Adapter Model: We compare our model's performance with baseline methods and other state-of-the-art adapter-based models. As illustrated in Figure 4, we conduct experiments on both the SD15 and SDXL models. In Figure 4(a), which showcases the results for the SD15-based model, our method demonstrates superior identity preservation and aesthetic quality. For instance, in the second example, "a * walking the dog," both FastComposer and IP-Adapter fail to generate a reasonable image of a baby, resulting in lower image quality.
The third example highlights our method's ability to better preserve the identity, aligning closely with the reference face. Regarding the SDXL-based model in Figure 4(b), InstantID exhibits the best capability for identity-preserving generation. However, it has lower flexibility due to its face ControlNet, as the generated images rely heavily on the input control map. For example, in the case of "a * holding a glass of wine," InstantID only generates an avatar photo, while other methods can produce half-body images without the constraint of the face structure control map. We show competitive face similarity with it. Meanwhile, our method achieves better aesthetics than any other method; the clarity and visual appeal are superior, as seen, for example, in the colors of the second case and the concrete structure of the third case.
LoRA Model: Fig. 5 showcases the results of incorporating our method into the LoRA model. It is evident that our method significantly boosts identity consistency (the male character case) and visual appeal (the female character case) compared with the naive LoRA method.
4.1.2 Quantitative Comparison. Tab. 1 presents a quantitative comparison of our proposed method with several state-of-the-art techniques for identity-preserving image generation, evaluated across various metrics. The methods are categorized by underlying architecture, with results reported separately for the SD1.5 and SDXL models. For the SD1.5 model, our method outperforms FastComposer and IP-Adapter in terms of Face Similarity (Face Sim.), DINO, and CLIP-I scores, indicating superior identity preservation consistency.
Figure 6: The effectiveness ablation of the proposed identity consistency reward (ID-Cons) and the identity aesthetic reward (ID-Aes), with prompts "a * with a blue house in the background" and "a * giving a lecture" (columns: IP-Adapter, IP-Adapter + ID-Cons, IP-Adapter + ID-Cons + ID-Aes).
Specifically, our approach achieves a Face Sim.
score of 0.800, surpassing IP-Adapter's 0.739 and FastComposer's 0.486, suggesting better face identity preservation. Additionally, our higher DINO (0.606) and CLIP-I (0.727) scores demonstrate improved overall subject consistency. Our method also yields the highest LAION-Aesthetics (LAION-Aes) score of 5.59, indicating enhanced aesthetic quality compared to the baselines. Regarding the SDXL model, InstantID exhibits the highest Face Sim. score of 0.783, outperforming our method (0.619) and the other baselines in terms of face identity preservation. However, our approach achieves competitive performance on the DINO (0.499) and CLIP-I (0.602) metrics, suggesting comparable overall identity consistency. Notably, our method obtains the highest LAION-Aes score of 5.88 among all SDXL-based techniques, demonstrating its ability to generate aesthetically pleasing images while maintaining identity consistency. We also note a slight performance drop in the semantic alignment between the prompt and the generated image after the optimization. This is because the model is forced to focus on identity adaptation and will inevitably overlook the textual prompt to some extent. This phenomenon is also observed in many existing identity-preservation generation works [11, 25, 32].
4.1.3 Ablation Study. We conduct an ablation study to analyze the effectiveness of each component in our method.
Table 2: Generalization study of our method on different base T2I models: Dreamshaper (SD1.5) and RealVisXL (SDXL).
Model | Face Sim.↑ | DINO↑ | CLIP-I↑
IP-Adapter-Dreamshaper | 0.598 | 0.583 | 0.591
IP-Adapter-Dreamshaper + ID-Reward | 0.662 (+10.7%) | 0.588 (+0.8%) | 0.616 (+4.2%)
IP-Adapter-RealVisXL | 0.519 | 0.488 | 0.575
IP-Adapter-RealVisXL + ID-Reward | 0.635 (+22.3%) | 0.509 (+4.3%) | 0.623 (+8.3%)
Figure 7: Illustration of the accelerated identity adaptation for the LoRA model. Left: LoRA trained on SD1.5.
Right: LoRA trained on SDXL.
Identity Reward: We conduct an ablation study to evaluate the impact of the proposed identity consistency reward and aesthetic reward. As illustrated in Fig. 6, applying the identity consistency reward boosts the identity similarity significantly. For example, both cases generated by the baseline model suffer severe identity loss, with a notably different appearance from the reference portrait. However, after optimizing the model using the identity consistency reward, the generated character exhibits a much more similar appearance. This improvement can be attributed to the specialized identity reward provided by the face detection and face embedding extraction models, which guides the model to preserve the desired identity features as much as possible. Furthermore, incorporating the identity aesthetic reward further enhances the visual appeal of the generated image, particularly the structural aspects of the characters. For example, in the first row, despite the preservation of identity features achieved through the identity consistency reward, the generated hands of the character still exhibit distortion. This issue is effectively resolved by the identity aesthetic reward, which benefits from the tailor-curated feedback data. These ablation results underscore the crucial role played by our proposed identity consistency and aesthetic rewards in achieving high-quality, identity-preserving image generation.
Fast Identity Adaptation: The LoRA approach is a test-time fine-tuning method that needs to train a separate LoRA model for each portrait. This poses a significant challenge in practice, as sufficient training time is required to ensure adequate identity adaptation. Thanks to the targeted feedback on identity consistency, our method accelerates the identity adaptation of LoRA training significantly, as demonstrated in Fig. 7.
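The targeted feedback driving this acceleration comes from the reward loop of Algorithm 2: a gradient-free rollout from step T down to a randomly drawn step t, followed by a single gradient-carrying step. A minimal sketch of that schedule, with `denoise_step` as a stand-in for one LDM denoising step (the real loop would call the LoRA-equipped model and backpropagate the reward through the final step only):

```python
import random

def reward_guided_denoise(x_T, denoise_step, T, t1, t2, rng=random):
    """Denoise from step T down to a random step t in [t1, t2].

    Steps T..t+1 run without gradients; only the final step at t carries
    gradients, keeping the memory cost of reward backpropagation constant.
    `denoise_step(x, j, grad)` stands in for one LDM denoising step.
    """
    t = rng.randint(t1, t2)  # pick a random denoise time step t in [t1, t2]
    x = x_T
    for j in range(T, t, -1):  # j = T, ..., t+1
        x = denoise_step(x, j, grad=False)  # no-grad rollout
    x = denoise_step(x, t, grad=True)  # single gradient-carrying step
    return x, t
```

The returned latent would then be decoded by the VAE and scored by the face-ID and aesthetic rewards before the backward pass.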
This effect is particularly prominent when adapting to SDXL, as conventional LoRA adaptation for SDXL is inherently slow due to the larger number of parameters to update. In contrast, ID-Aligner considerably reduces the fine-tuning time needed to achieve the same level of face similarity.
Figure 8: User preferences on text fidelity, image quality, and face similarity for different methods. We visualize the proportion of total votes that each method received.
Generalization Study: To demonstrate the generalization ability of our approach, we utilized the widely recognized Dreamshaper¹ and RealVisXL² as open-sourced alternatives to SD15 and SDXL, and validated our method with these text-to-image models. According to the results in Tab. 2, our method delivers a consistent performance boost on these alternative base models. Specifically, our method brings 10.7% and 4.2% performance boosts with Dreamshaper in terms of face similarity and image similarity measured by CLIP, indicating better identity preservation. Moreover, our method obtains a more significant performance improvement with the more powerful SDXL-architecture model. For instance, our method surpasses the original RealVisXL model by 22.3% in identity preservation and by 8.3% in CLIP-I. This demonstrates the superior generalization ability of our method across different text-to-image models.
4.1.4 User Study. To gain a comprehensive understanding, we conducted a user study to compare our method with IP-Adapter-plusv2 [39], PhotoMaker [11], and InstantID [32]. We presented 50 generated text-image pairs and a reference face image to each user. For each set, users were asked to vote for the best one or two choices among the four methods, based on three criteria: (1) text fidelity: which image best matches the given prompt; (2) image quality: which image looks the most visually appealing; and (3) face similarity: which image most closely resembles the reference face.
Users could choose two options if it was difficult to select a clear winner. We collected a total of 500 votes from 10 users. As shown in Fig. 8, the results align with the quantitative study in Tab. 1. InstantID achieved the highest face similarity score, while our method secured the second-best face similarity score. Our method obtained the highest aesthetic score and the second-highest text-image consistency score. Overall, our method performed well across all indicators and exhibited a relatively balanced performance compared to the other methods.
¹ https://huggingface.co/Lykon/DreamShaper
² https://huggingface.co/SG161222/RealVisXL_V3.0
5 CONCLUSION
In this paper, we introduced ID-Aligner, an algorithm crafted to optimize image generation models for identity fidelity and aesthetics through reward feedback learning. We introduce two key rewards: an identity consistency reward and an identity aesthetic reward, which can be seamlessly integrated with adapter-based and LoRA-based text-to-image models, consistently improving identity consistency and producing aesthetically pleasing results. Experimental results validate the effectiveness of ID-Aligner, demonstrating its superior performance.
}
]
}