diff --git "a/intro_28K/test_introduction_long_2405.04534v1.json" "b/intro_28K/test_introduction_long_2405.04534v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.04534v1.json" @@ -0,0 +1,100 @@ +{ + "url": "http://arxiv.org/abs/2405.04534v1", + "title": "Tactile-Augmented Radiance Fields", + "abstract": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF", + "authors": "Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "As humans, our ability to perceive the world relies crucially on cross-modal associations between sight and touch [19, 50]. Tactile sensing provides a detailed understanding of material properties and microgeometry, such as the intricate patterns of bumps on rough surfaces and the complex motions that soft objects make when they deform. This type of understanding, which largely eludes today\u2019s computer vision models, is a critical component of applications that require reasoning about physical contact, such as robotic locomotion [3, 24, 31, 34, 37, 38] and manipulation [6, 7, 11, 42, 60], and methods that simulate the behavior of materials [4, 13, 40, 41]. In comparison to many other modalities, collecting tactile data is an expensive and tedious process, since it requires direct physical interaction with the environment. A recent line of work has addressed this problem by having humans or robots probe the environment with touch sensors (see Table 1). Early efforts have focused on capturing the properties of only a few objects, either in simulation [16, 17, 52] or in lab-controlled settings [6, 7, 18, 28, 35, 52, 63], which may not fully convey the diversity of tactile signals in natural environments. Other works have gone beyond a lab setting and have collected touch from real scenes [5, 56].
Table 1. Dataset comparison. We present the number of real visual-tactile pairs and whether such pairs are visually aligned, i.e., whether the visual image includes an occlusion-free view of the touched surface. \u2217YCB-Slide has real-world touch probes but synthetic images rendered with CAD models of YCB objects on a white background [9].
Dataset | Samples | Aligned | Scenario | Source
More Than a Feeling [7] | 6.5k | \u2715 | Tabletop | Robot
Feeling of Success [6] | 9.3k | \u2715 | Tabletop | Robot
VisGel [35] | 12k | \u2715 | Tabletop | Robot
SSVTP [28] | 4.6k | \u2713 | Tabletop | Robot
ObjectFolder 1.0 [16] | \u2013 | \u2713 | Object | Synthetic
ObjectFolder 2.0 [17] | \u2013 | \u2713 | Object | Synthetic
ObjectFolder Real [18] | 3.7k | \u2715 | Object | Robot
Burka et al. [5] | 1.1k | \u2715 | Sub-scene | Human
Touch and Go [56] | 13.9k | \u2715 | Sub-scene | Human
YCB-Slide\u2217 [52] | \u2013 | \u2713 | Object | Human
Touching a NeRF [63] | 1.2k | \u2713 | Object | Robot
TaRF (Ours) | 19.3k | \u2713 | Full scene | Human

However, existing datasets lack aligned visual and tactile information, since the touch sensor and the person (or robot) that holds it often occlude large portions of the visual scene (Fig. 2). These datasets also contain only a sparse set of touch signals for each scene, and it is not clear how the sampled touch signals relate to each other in 3D. In this work, we present a simple and low-cost procedure to capture quasi-dense, scene-level, and spatially aligned visual and touch data (Fig. 1). We call the resulting scene representation a tactile-augmented radiance field (TaRF). We remove the need for robotic collection by leveraging a 3D scene representation (a NeRF [39]) to synthesize a view of the surface being touched, which results in spatially aligned visual-tactile data (Fig. 2). We collect this data by mounting a touch sensor on a camera using commonly available materials (Fig. 3). To calibrate the pair of sensors, we take advantage of the fact that popular vision-based touch sensors [25, 26, 32, 48] are built on ordinary cameras. The relative pose between the vision and tactile sensors can thus be estimated using traditional methods from multi-view geometry, such as camera resectioning [20]. We use this procedure to collect a large real-world dataset of aligned visual-tactile data. With this dataset, we train a diffusion model [45, 51] to estimate touch at locations not directly probed by a sensor. In contrast to the recent work of Zhong et al. [63], which also estimates touch from 3D NeRF geometry, we create scene-scale reconstructions, we do not require robotic proprioception, and we use diffusion models [51]. This enables us to obtain tactile data at a much larger scale, and with considerably more diversity. Unlike previous visual-tactile diffusion work [57], we condition the model on spatially aligned visual and depth information, enhancing the generated samples\u2019 quality and their usefulness in downstream applications. After training, the diffusion model can be used to predict tactile information for novel positions in the scene.
[Figure 2. Visual-tactile examples (panels: OF 2.0 [17], VisGel [35], OF Real [18], SSVTP [28], TG [56], TaRF (Ours)). In contrast to the visual-tactile data captured in previous work, our approach allows us to sample unobstructed images that are spatially aligned with the touch signal, from arbitrary 3D viewpoints using a NeRF.]
Analogous to quasi-dense stereo methods [15, 33], the diffusion model effectively propagates sparse touch samples, obtained by probing, to other visually and structurally similar regions of the scene. We evaluate our visual-tactile model\u2019s ability to accurately perform cross-modal translation using a variety of quality metrics. We also apply it to several downstream tasks, including localizing a touch within a scene and understanding material properties of the touched area. Our experiments suggest:
\u2022 Touch signals can be localized in 3D space by exploiting multi-view geometry constraints between sight and touch.
\u2022 Estimated touch measurements from novel views are not only qualitatively accurate, but also beneficial on downstream tasks.
\u2022 Cross-modal prediction models can accurately estimate touch from sight for natural scenes.
\u2022 Visually-acquired 3D scene geometry improves cross-modal prediction.", "main_content": "Visual-tactile datasets. Previous work has either used simulators [16, 17] or robotic arms [6, 8, 18, 35, 63] for data generation. Our work is closely related to that of Zhong et al. [63], which uses a NeRF and captured touch data to generate a tactile field for several small objects. They use the proprioception of an expensive robot to spatially align vision and touch. In contrast, we leverage the properties of the tactile sensor and novel view synthesis to use commonly available materials (a smartphone and a selfie stick) to align vision and touch. This enables the collection of a larger, scene-level, and more diverse dataset, on which we train a higher-capacity diffusion model (rather than a conditional GAN). Like several previous works [5, 56], we also collect scene-level data. In contrast to them, we spatially align the signals by registering them in a unified 3D representation, thereby increasing the prediction power of the visual-tactile generative model.

Capturing multimodal 3D scenes. Our work is related to methods that capture 3D visual reconstructions of spaces using RGB-D data [12, 49, 55, 59] and multimodal datasets of paired 3D vision and language [1, 2, 10]. Our work is also related to recent methods that localize objects in NeRFs using joint embeddings between images and language [29] or by semantic segmentation [62]. In contrast to language supervision, touch is tied to a precise position in a scene.

3D touch sensing. A variety of works have studied the close relationship between geometry and touch, motivating our use of geometry in imputing touch. Johnson et al. [25, 26] proposed vision-based touch sensing, and showed that highly accurate depth can be estimated from the touch sensor using photometric stereo. Other work has estimated object-scale 3D from touch [54]. By contrast, we combine sparse estimates of touch with quasi-dense tactile signals estimated using generative models.

Cross-modal prediction of touch from sight. Recent work has trained generative models that predict touch from images. Li et al. [35] used a GAN to predict touch for images of a robotic arm, while Gao et al. [18] applied them to objects collected on a turntable. Yang et al. [57] used latent diffusion to predict touch from videos of humans touching objects. Our goal is different from these works: we want to predict touch signals that are spatially aligned with a visual signal, to exploit scene-specific information, and to use geometry.
Thus, we use a different architecture and conditioning signal, and fit our model to examples from the same scenes at training and test time. Other work has learned joint embeddings between vision and touch [28, 36, 56, 58, 61].

3. Method
We collect visual and tactile examples from a scene and register them together with a 3D visual reconstruction to build a TaRF. Specifically, we capture a NeRF F\u03b8 : (x, r) \u21a6 (c, \u03c3) that maps a 3D point x = (x, y, z) and viewing direction r to its corresponding RGB color c and density \u03c3 [39]. We associate with the visual representation a touch model F\u03d5 : v_t \u21a6 \u03c4 that generates the tactile signal that one would obtain by touching at the center of the image v_t. In the following, we explain how to estimate F\u03b8 and F\u03d5 and put them into the same shared 3D space.

3.1. Capturing vision and touch signals
Obtaining a visual 3D reconstruction. We build the visual NeRF, F\u03b8, closely following previous work [12, 55]. A human data collector moves through a scene and records a video, covering as much of the space as possible. We then estimate camera poses using structure from motion [47] and create a NeRF using off-the-shelf packages [53]. Additional details are provided in the supplement.

Capturing and registering touch. We simultaneously collect tactile and visual signals by mounting a touch sensor on a camera (Fig. 3), obtaining synchronized touch signals {\u03c4_i}_{i=1}^N and video frames v. We then estimate the pose of the video frames using off-the-shelf structure from motion methods [47], obtaining poses {p^v_i}_{i=1}^N. Finally, we use the calibration of the mount to obtain the poses {p^t_i}_{i=1}^N of the tactile measurements with respect to the scene\u2019s global reference frame. As a collection device, we mount an iPhone 14 Pro to one end of a camera rod, and a DIGIT [32] touch sensor to the other end. Note that the devices can be replaced with any RGB-D camera and vision-based tactile sensor.

[Figure 3. Capturing setup. (a) We record paired vision and touch signals using a camera attached to a touch sensor. (b) We estimate the relative pose between the touch sensor and the camera using correspondences between sight and touch.]

Capturing setup calibration. To find the relative pose between the camera and the touch sensor (Fig. 3), we exploit the fact that arbitrary viewpoints can be synthesized from F\u03b8, and that ubiquitous vision-based touch sensors are based on perspective cameras. In these sensors, an elastomer gel is placed on the lens of a commodity camera, which is illuminated by colored lights. When the gel is pressed into an object, it deforms, and the camera records an image of the deformation; this image is used as the tactile signal. This design allows us to estimate the pose of the tactile sensor through multi-view constraints from visual-tactile correspondences: pixels in visual images and tactile images that depict the same physical point. We start the calibration process by synthesizing novel views from F\u03b8. The views are generated at the camera locations {p^v_i}_{i=1}^N, but rotated 90\u00b0 about the x-axis, because the camera is approximately orthogonal to the touch sensor (see Fig. 3). Then, we manually annotate corresponding pixels between the touch measurements and the generated frames (Fig. 3). To simplify and standardize this process, we place a braille board in each scene and probe it with the touch sensor. This generates a distinctive touch signal that is easy to localize [23].
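Once the mount is calibrated, registering touch measurements into the scene frame reduces to composing rigid transforms. A minimal sketch, assuming 4x4 camera-to-world matrices for the SfM poses {p^v_i} and a fixed camera-to-sensor extrinsic T_cam_touch recovered by the calibration described below; the function name and conventions are illustrative, not the paper's code:

```python
import numpy as np

def touch_poses_from_camera(camera_poses_c2w, T_cam_touch):
    """Compose SfM camera poses with the fixed mount extrinsic.

    camera_poses_c2w: list of 4x4 camera-to-world matrices {p^v_i}.
    T_cam_touch: 4x4 rigid transform from the camera frame to the touch
        sensor frame (the mount calibration, constant for the whole capture).
    Returns the touch sensor poses {p^t_i} in the scene's global frame.
    """
    return [p_v @ T_cam_touch for p_v in camera_poses_c2w]
```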
We formulate the problem of estimating the six-degree-of-freedom relative pose (R, t) between the touch sensor and the generated frames as a resectioning problem [20]. We use the estimated 3D structure from the NeRF F\u03b8 to obtain 3D points {X_i}_{i=1}^M for each of the annotated correspondences. Each point has a pixel position u_i \u2208 R^2 in the touch measurement. We find (R, t) by minimizing the reprojection error:

\\min_{\\mathbf{R},\\,\\mathbf{t}} \\frac{1}{M} \\sum_{i=1}^{M} \\lVert \\pi(\\mathbf{K}[\\mathbf{R}\\,|\\,\\mathbf{t}], \\mathbf{X}_i) - \\mathbf{u}_i \\rVert_1, \\quad (1)

where \u03c0 projects a 3D point using a given projection matrix, K are the known intrinsics of the tactile sensor\u2019s camera, and the point X_i is in the coordinate system of the generated vision frames. We perform the optimization on 6\u201315 annotated correspondences from the braille board. For robustness, we compute correspondences from multiple frames. We represent the rotation matrix using quaternions and optimize using nonlinear least squares. Once we have (R, t) with respect to the generated frames, we can derive the relative pose between the camera and the touch sensor.
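A compact sketch of the resectioning optimization in Eq. (1) with SciPy, parameterizing the rotation as a quaternion as the paper describes; using least_squares with a soft-L1 robust loss is an assumption standing in for the exact L1 objective:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def resection(K, X, u, x0=None):
    """Estimate (R, t) of the touch sensor's camera from 2D-3D matches.

    K: 3x3 intrinsics of the tactile sensor's camera.
    X: (M, 3) NeRF 3D points for the annotated correspondences.
    u: (M, 2) pixel positions of the correspondences in the touch image.
    """
    def residuals(params):
        q, t = params[:4], params[4:]
        R = Rotation.from_quat(q / np.linalg.norm(q)).as_matrix()
        x = (K @ (R @ X.T + t[:, None])).T          # project K [R | t] X_i
        proj = x[:, :2] / x[:, 2:3]
        return (proj - u).ravel()

    if x0 is None:                                   # start from the identity pose
        x0 = np.concatenate([[0.0, 0.0, 0.0, 1.0], np.zeros(3)])
    sol = least_squares(residuals, x0, loss="soft_l1")  # robust proxy for the L1 norm
    q = sol.x[:4] / np.linalg.norm(sol.x[:4])
    return Rotation.from_quat(q).as_matrix(), sol.x[4:]
```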
3.2. Imputing the missing touch
We use a generative model to estimate the touch signal (represented as an image from a vision-based touch sensor) for other locations within the scene. Specifically, we train a diffusion model p\u03d5(\u03c4 | v, d, b), where v and d are images and depth maps extracted from F\u03b8 (see Fig. 4). We also pass as input to the diffusion model a background image captured by the touch sensor when it is not in contact with anything, denoted as b. Although not essential, we have observed that this additional input empirically improves the model\u2019s performance (e.g., in Fig. 1 the background provides the location of defects in the gel, which appear as black dots). We train the model p\u03d5 on our entire vision-touch dataset (Sec. 4).

The training of p\u03d5 is divided into two stages. In the first, we pre-train a cross-modal visual-tactile encoder with self-supervised contrastive learning on our dataset. This stage, initially proposed by [23, 57], is equivalent to the self-supervised encoder pre-training that is common for image generation models [45]. We use a ResNet-50 [21] as the backbone for this contrastive model. In the second stage, we use the contrastive model to generate the input for a conditional latent diffusion model, which is built upon Stable Diffusion [45]. A frozen pretrained VQ-GAN [14] is used to obtain the latent representation with a spatial dimension of 64 \u00d7 64. We start training the diffusion model from scratch and pre-train it on the task of unconditional tactile image generation on the YCB-Slide dataset [52]. After this stage, we train the conditional generative model p\u03d5 on our spatially aligned visual-tactile dataset, further fine-tuning the contrastive model end-to-end with the generation task. At inference time, given a novel location in the 3D scene, we first render the visual signals \u02c6v and \u02c6d from the NeRF, and then estimate the touch signal \u02c6\u03c4 of the position using the diffusion model.

[Figure 4. Touch estimation. We estimate the tactile signal for a given touch sensor pose (R, t). To do this, we synthesize a viewpoint from the NeRF, along with a depth map. We use conditional latent diffusion to predict the tactile signal from these inputs.]

4. A 3D Visual-Tactile Dataset
In the following, we describe the data collection process and the statistics of our dataset.

4.1. Data Collection Procedure
The data collection procedure is divided into two stages. First, we collect multiple views from the scene, capturing enough frames around the areas we plan to touch. During this stage, we collect approximately 500 frames. Next, we collect synchronized visual and touch data, maximizing the variety of geometry and texture being touched. We then estimate the camera locations of the vision frames collected in the previous two stages using off-the-shelf mapping tools [47]. After estimating the camera poses for the vision frames, the touch measurements\u2019 poses can be derived using the mount calibration matrix. More details about the pose estimation procedure can be found in the supplement. Finally, we associate each touch measurement with a color image by translating the sensor pose upwards by 0.4 meters and querying the NeRF with the resulting pose. The field of view we use when querying the NeRF is 50\u00b0. This provides us with approximately 1,500 temporally aligned vision-touch image pairs per scene. Note that this collection procedure is scalable since it does not require specific expertise or equipment, and it generates abundant scene-level samples.

4.2. Dataset Statistics
We collect our data in 13 ordinary scenes including two offices, a workroom, a conference room, a corridor, a tabletop, a corridor, a lounge, a room with various clothes, and four outdoor scenes with interesting materials. Typically, we collect 1k to 2k tactile probes in each scene, resulting in a total of 19.3k image pairs in the dataset. Some representative samples from the collected dataset are shown in Fig. 5. Our data includes a large variety of geometry (edges, surfaces, corners, etc.) and texture (plastic, clothes, snow, wood, etc.) of different materials in the scene. During the capturing process, the collector tried to thoroughly probe various objects and to cover the interesting areas that have distinguishable geometry and texture with different sensor poses. To the best of our knowledge, our dataset is the first to capture full, scene-scale, spatially aligned vision-touch image pairs. We provide more details about the dataset in the supplement.

[Figure 5. Representative examples from the captured dataset. Our dataset is obtained from nine everyday scenes, such as offices, classrooms, and kitchens. We show three such scenes in the figure above, together with samples of spatially aligned visual and tactile data. In each scene, 1k to 2k tactile probes were collected, resulting in a total of 19.3k image pairs. The data encompasses diverse geometries (edges, surfaces, corners, etc.) and textures (plastic, clothes, snow, wood, etc.) of various materials. The collector systematically probed different objects, covering areas with distinct geometry and texture using different sensor poses.]
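A sketch of the view-association step from Sec. 4.1: offset a touch pose and render an RGB-D conditioning image from the NeRF for the diffusion model p\u03d5(\u03c4 | v, d, b). The offset axis and the render_rgbd / diffusion_model names are hypothetical stand-ins for the actual mount geometry and Nerfstudio rendering calls:

```python
import numpy as np

def aligned_view_pose(touch_pose_c2w, offset_m=0.4):
    """Back the viewpoint 0.4 m away from the touched surface.

    The translation is applied along the pose's local viewing axis;
    the choice of local z as that axis is an assumption.
    """
    T = touch_pose_c2w.copy()
    T[:3, 3] -= T[:3, :3] @ np.array([0.0, 0.0, offset_m])
    return T

# Hypothetical usage (not the paper's API):
# rgb, depth = render_rgbd(nerf, aligned_view_pose(p_t), fov_deg=50.0)
# tau_hat = diffusion_model.sample(rgb, depth, background_image)
```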
5. Experiments
Leveraging the spatially aligned image and touch pairs from our dataset, we first conduct experiments on dense touch estimation. We then show the effectiveness of both the aligned data pairs and the synthesized touch signals on two downstream tasks: tactile localization and material classification.

5.1. Implementation Details
NeRF. We use the Nerfacto method from Nerfstudio [53]. For each scene, we utilize approximately 2,000 images as the training set, which thoroughly cover the scene from various viewpoints. We train the network with a base learning rate of 1 \u00d7 10\u22122 using the Adam [30] optimizer for 200,000 steps on a single NVIDIA RTX 2080 Ti GPU to achieve optimal performance.

Visual-tactile contrastive model. Following prior works [27, 57], we leverage contrastive learning to train a ResNet-50 [21] as the visual encoder. The visual and tactile encoders share the same architecture but have different weights. We encode visual and tactile data into latent vectors in the resulting shared representation space. We set the dimension of the latent vectors to 32. Similar to CLIP [43], the model is trained with an InfoNCE loss computed from the pairwise dot products of the latent vectors. We train the model for 20 epochs with the Adam [30] optimizer, a learning rate of 10\u22124, and a batch size of 256 on 4 NVIDIA RTX 2080 Ti GPUs.

Visual-tactile generative model. Our implementation of the diffusion model closely follows Stable Diffusion [46], with the difference that we use a ResNet-50 to generate the visual encoding from RGB-D images for conditioning. Specifically, we also add the RGB-D images rendered from the tactile sensors\u2019 poses into the conditioning, which we refer to in Sec. 5.2 as multiscale conditioning. The model is optimized for 30 epochs with the Adam [30] optimizer and a base learning rate of 10\u22125, scaled by the number of GPUs \u00d7 batch size. We train the model with a batch size of 48 on 4 NVIDIA A40 GPUs. At inference time, the model runs 200 denoising steps with a guidance scale of 7.5. Following prior cross-modal synthesis work [44], we use re-ranking to improve prediction quality: we obtain 16 samples from the diffusion model for every instance and re-rank the samples with our pretrained contrastive model. The sample with the highest similarity is the final prediction.

5.2. Dense Touch Estimation
Experimental setup. We now evaluate the diffusion model\u2019s ability to generate touch images. To reduce overlap between the training and test sets, we first split the frames into sequences temporally (following previous work [56]). We split them into sequences of 50 touch samples, then divide these sequences into train/validation/test with a ratio of 8/1/1. We evaluate the generated samples on Fr\u00e9chet Inception Distance (FID), a standard evaluation metric for cross-modal generation [56]. We also include Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), though we note that these metrics are highly sensitive to the spatial position of the generated content, and can be optimized by models that minimize simple pixelwise losses [22]. We also include the CVTP metric proposed by prior work [57], which measures the similarity between the visual and tactile embeddings of a contrastive model, analogous to the CLIP [43] score. We compare against two baselines: VisGel, the approach of Li et al. [35], which trains a GAN for touch generation, and L1, a model with the same architecture as VisGel but trained to minimize an L1 loss in pixel space.

[Figure 6. Qualitative touch estimation results (examples include brick, rock, chair, sofa, desk, wall surface, and carpet). Each model is conditioned on the RGB image and depth map rendered from the NeRF (left). The white box indicates the tactile sensor\u2019s approximate field of view (which is much smaller than the full conditional image). The G.T. column shows the ground truth touch images measured from a DIGIT sensor. L1 and VisGel often generate blurry textures and inaccurate geometry. By contrast, our model better captures the features of the tactile image, e.g., the rock\u2019s microgeometry and the complex textures and shapes of furniture. The last row shows two failure cases of our model. In both examples, our model generates a touch image that is geometrically misaligned with the ground truth. All of the examples shown here are at least 10cm away from any training sample.]
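A minimal sketch of the symmetric InfoNCE objective used to train the visual-tactile contrastive encoders, with 32-dimensional latents as in Sec. 5.1; the L2 normalization and the temperature value are assumptions, since the paper only specifies pairwise dot products:

```python
import torch
import torch.nn.functional as F

def info_nce(z_visual, z_tactile, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired visual/tactile latents.

    z_visual, z_tactile: (B, 32) encoder outputs; matching rows are
    positive pairs, every other row in the batch is a negative.
    """
    z_v = F.normalize(z_visual, dim=-1)
    z_t = F.normalize(z_tactile, dim=-1)
    logits = z_v @ z_t.T / temperature            # (B, B) pairwise similarities
    targets = torch.arange(z_v.shape[0], device=z_v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```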
Results. As shown in Table 2, our approach performs much better on the high-level metrics, with up to 4\u00d7 lower FID and 80\u00d7 higher CVTP. This indicates that our proposed diffusion model captures the distribution and characteristics of the real tactile data more effectively. On the low-level metrics (PSNR and SSIM), all methods are comparable. In particular, the L1 model slightly outperforms the other methods since the loss it is trained on is highly correlated with low-level, pixel-wise metrics.

Table 2. Quantitative results on touch estimation for novel views. While comparable with the baselines on low-level metrics, our approach captures the characteristics of the real tactile data more effectively, resulting in a lower FID score.
Model | PSNR \u2191 | SSIM \u2191 | FID \u2193 | CVTP \u2191
L1 | 24.34 | 0.82 | 97.05 | 0.01
VisGel [35] | 23.66 | 0.81 | 130.22 | 0.03
Ours | 22.84 | 0.72 | 28.97 | 0.80

Fig. 6 qualitatively compares samples from the different models. Indeed, our generated samples exhibit enhanced details in the micro-geometry of fabrics and richer textures, including snow, wood, and carpeting. However, all methods fail on fine details that are barely visible in the image, such as tree bark.

Ablation study. We evaluate the importance of the main components of our proposed touch generation approach (Table 3). Removing the conditioning on the RGB image results in the most prominent performance drop. This is expected since the RGB image uniquely determines the fine-grained details of a tactile image. Removing the depth image or contrastive pretraining has a small effect on CVTP but results in a drop on FID. Contrastive re-ranking largely improves CVTP, indicating the necessity of obtaining multiple samples from the diffusion model. We also find that multiscale conditioning provides a small benefit on FID and CVTP.

Table 3. Ablation study. Since the fine-grained details of touch images can be determined from an RGB image, removing conditioning on the latter results in the largest performance drop. Re-ranking has a notable impact on CVTP, indicating the necessity of obtaining multiple samples from the diffusion model.
Model variation | PSNR \u2191 | SSIM \u2191 | FID \u2193 | CVTP \u2191
Full | 22.84 | 0.72 | 28.97 | 0.80
No RGB conditioning | 22.13 | 0.70 | 34.31 | 0.76
No depth conditioning | 22.57 | 0.71 | 33.16 | 0.80
No contrastive pretraining | 22.82 | 0.71 | 32.98 | 0.79
No re-ranking | 22.92 | 0.72 | 29.46 | 0.61
No multiscale | 23.19 | 0.72 | 30.89 | 0.77
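A sketch of the contrastive re-ranking step from Sec. 5.1 that the ablation above isolates: draw 16 samples from the diffusion model and keep the one whose tactile embedding best matches the visual conditioning. Encoder and variable names are illustrative:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rerank(samples, visual_input, visual_encoder, tactile_encoder):
    """Pick the diffusion sample most similar to the visual conditioning.

    samples: (16, 3, H, W) candidate tactile images for one query.
    visual_input: (1, C, H, W) the RGB-D conditioning image.
    """
    z_t = F.normalize(tactile_encoder(samples), dim=-1)      # (16, 32)
    z_v = F.normalize(visual_encoder(visual_input), dim=-1)  # (1, 32)
    similarity = (z_t @ z_v.T).squeeze(-1)                   # (16,)
    return samples[similarity.argmax()]
```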
5.3. Downstream Task I: Tactile Localization
To help understand the quality of the captured TaRFs, we evaluate the performance of the contrastive model (used for conditioning our diffusion model) on the task of tactile localization. Given a tactile signal, our goal is to find the corresponding regions in a 2D image or in a 3D scene that are associated with it, i.e., we ask the question: what part of this image/scene feels like this? We perform the following evaluations on the test set of our dataset. Note that we run no task-specific training.

[Figure 7. Tactile localization heatmaps. Given a tactile query image, the heatmap shows the image patches with a higher affinity to this tactile signal, as measured by a contrastive model trained on our dataset. We use a sliding window and compare each extracted patch with the touch signal. In each case, the center patch is the true position. Our model successfully captures the correlation between the two signals. This enables it to localize a variety of touch signals, including fine-grained geometry, e.g., a cable or a keyboard, various types of corners and edges, and large uniform regions, such as clothing. This ability enables our diffusion model to effectively propagate sparse touch samples to other visually and structurally similar regions of the scene.]

2D Localization. To determine which parts of an image are associated with a given tactile measurement, we follow the same setup as SSVTP [28]. We first split the image into patches and compute their embeddings. Then, we generate the tactile embedding of the input touch image. Finally, we compute the pairwise similarities between the tactile and visual embeddings, which we plot as a heatmap. As we can see in Fig. 7, our contrastive encoder can successfully capture the correlations between the visual and tactile data. For instance, the tactile embeddings of edges are associated with edges of similar shape in the visual image. Note that the majority of tactile embeddings are highly ambiguous: all edges with a similar geometry feel the same.

3D Localization. In 3D, the association of an image with tactile measurements becomes less ambiguous. Indeed, since tactile-visual samples are rotation-dependent, objects with similar shapes but different orientations will generate different tactile measurements. Lifting the task to 3D still does not remove all ambiguities (for example, each side of a rectangular table cannot be precisely localized). Nonetheless, we believe it to be a good fit for a quantitative evaluation, since it is rare for two ambiguous parts of the scene to be touched with exactly the same orientation.

We use the following experimental setup for 3D localization. Given a tactile image as a query, we compute its distance in embedding space to all visual test images from the same scene. Note that all test images are associated with a 3D location. We define as ground-truth correspondences all test images at a distance of at most r from the 3D location of the test sample. We vary r to account for local ambiguities. As is typical in the retrieval literature, we benchmark the performance with the mean Average Precision (mAP) metric. We consider three baselines: (1) chance, which randomly selects corresponding samples; (2) real, which uses the contrastive model trained on our dataset; and (3) real + estimated, which trains the contrastive model on both dataset samples and a set of synthetic samples generated via the scenes\u2019 NeRFs and our touch generation model. Specifically, we render a new image and a corresponding touch signal by interpolating the positions of two consecutive frames in the training dataset. This results in a training dataset for the contrastive model that is twice as large.
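A sketch of this retrieval protocol, assuming L2-normalized embeddings and a 3D position per test image; the average-precision formulation is the standard retrieval one, and details such as whether the query's own view is excluded are assumptions:

```python
import numpy as np

def average_precision(scores, labels):
    """Standard retrieval AP: rank by score, average precision at each hit."""
    order = np.argsort(-scores)
    hits = labels[order].astype(float)
    if hits.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / hits.sum())

def tactile_localization_map(tactile_embs, visual_embs, positions, r):
    """mAP over tactile queries; a visual image is a ground-truth match
    if it lies within radius r of the query's 3D location."""
    aps = []
    for z_t, p in zip(tactile_embs, positions):
        scores = visual_embs @ z_t                           # cosine similarities
        labels = np.linalg.norm(positions - p, axis=1) <= r  # within-radius matches
        aps.append(average_precision(scores, labels))
    return float(np.mean(aps))
```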
Table 4. Quantitative results on 3D tactile localization. We evaluate using mean Average Precision (mAP) as a metric. Training the contrastive model on our dataset of visually aligned real samples together with estimated samples from new locations in the scene results in the highest performance.
Dataset | r = 0.001 m | r = 0.005 m | r = 0.01 m | r = 0.05 m | r = 0.1 m
Chance | 3.55 | 6.82 | 10.25 | 18.26 | 21.33
Real | 12.10 | 22.93 | 32.10 | 50.30 | 57.15
Real + Est. | 14.92 | 26.69 | 36.17 | 53.62 | 60.61

The results, presented in Table 4, demonstrate the performance benefit of employing both real and synthetic tactile pairs. Combining synthetic tactile images with the original pairs achieves the highest performance at all distance thresholds. Overall, this indicates that touch measurements from novel views are not only qualitatively accurate, but also beneficial for this downstream task.

5.4. Downstream Task II: Material Classification
We investigate the efficacy of our visual-tactile dataset for understanding material properties, focusing on the task of material classification. We follow the formulation of Yang et al. [56], which consists of three subtasks: (i) material classification, requiring the distinction of materials among 20 possible classes; (ii) softness classification, a binary problem dividing materials into either hard or soft; and (iii) smoothness classification, which requires classifying materials as either rough or smooth. We follow the same experimental procedure as [56]: we pretrain a contrastive model on a dataset and perform linear probing on the subtasks\u2019 training sets. Our experiments only vary the pretraining dataset, leaving all architectural choices and hyperparameters the same. We compare against four baselines: a random classifier (chance), the ObjectFolder 2.0 dataset [17], the VisGel dataset [35], and the Touch and Go dataset [56]. Note that the touch sensor used in the test data (GelSight) differs from the one used in our dataset (DIGIT). Therefore, we use for pretraining a combination of our dataset and Touch and Go. To ensure a fair comparison, we also compare to the combination of each dataset and Touch and Go.

Table 5. Material classification. We show the downstream material recognition accuracy of models pre-trained on different datasets. The final rows show the performance when combining different datasets with Touch and Go [56]. \u2217The task-specific training and testing datasets for this task are collected with a GelSight sensor. We note that our data comes from a different distribution, since it is collected with a DIGIT sensor [32].
Dataset | Material | Hard/Soft | Rough/Smooth
Chance | 18.6 | 66.1 | 56.3
ObjectFolder 2.0 [17] | 36.2 | 72.0 | 69.0
VisGel [35] | 39.1 | 69.4 | 70.4
Touch and Go [56] | 54.7 | 77.3 | 79.4
+ ObjectFolder 2.0 [17] | 54.6 | 87.3 | 84.8
+ VisGel [35] | 53.1 | 86.7 | 83.6
+ Ours\u2217 (Real) | 57.6 | 88.4 | 81.7
+ Ours\u2217 (Real + Estimated) | 59.0 | 88.7 | 86.1

The findings from this evaluation, shown in Table 5, suggest that our data improves the effectiveness of the contrastive pretraining objective, even though our data comes from a different distribution. Moreover, we find that adding estimated touch probes for pretraining results in higher performance on all three tasks, especially smoothness classification. This indicates that not only does our dataset cover a wide range of materials, but also that our diffusion model captures the distinguishable and useful patterns of different materials.
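A sketch of the linear-probing protocol used in these subtasks: features from the frozen pretrained encoder are fed to a linear classifier. Using scikit-learn's logistic regression is an assumption; the paper follows [56] and does not spell out the probe's exact optimizer here:

```python
from sklearn.linear_model import LogisticRegression

def linear_probe(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear classifier on frozen encoder features, report accuracy.

    train_feats/test_feats: (N, D) embeddings from the frozen pretrained
    encoder; labels are material / hard-soft / rough-smooth classes.
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)
```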
6. Conclusion
In this work, we present the TaRF, a scene representation that brings vision and touch into a shared 3D space. This representation enables the generation of touch probes for novel scene locations. To build this representation, we collect the largest dataset of spatially aligned vision and touch probes. We study the utility of both the representation and the dataset in a series of qualitative and quantitative experiments and on two downstream tasks: 3D touch localization and material recognition. Overall, our work makes the first step towards giving current scene representation techniques an understanding of not only how things look, but also how they feel. This capability could be critical in several applications, ranging from robotics to the creation of virtual worlds that look and feel like the real world.

Limitations. Since the touch sensor is based on a highly zoomed-in camera, small (centimeter-scale) errors in SfM or visual-tactile registration can lead to misalignments of several pixels between the views of the NeRF and the touch samples, which can be seen in our TaRFs. Another limitation of the proposed representation is the assumption that the scene\u2019s coarse-scale structure does not change when it is touched, an assumption that may be violated for some inelastic surfaces.

Acknowledgements. We thank Jeongsoo Park, Ayush Shrivastava, Daniel Geng, Ziyang Chen, Zihao Wei, Zixuan Pan, Chao Feng, Chris Rockwell, Gaurav Kaul and the reviewers for the valuable discussion and feedback. This work was supported by an NSF CAREER Award #2339071, a Sony Research Award, the DARPA Machine Common Sense program, and ONR MURI award N00014-21-1-2801.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.15677v2", + "title": "CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models", + "abstract": "Recent advances in text-to-image models have opened new frontiers in\nhuman-centric generation. However, these models cannot be directly employed to\ngenerate images with consistent newly coined identities. In this work, we\npropose CharacterFactory, a framework that allows sampling new characters with\nconsistent identities in the latent space of GANs for diffusion models. More\nspecifically, we consider the word embeddings of celeb names as ground truths\nfor the identity-consistent generation task and train a GAN model to learn the\nmapping from a latent space to the celeb embedding space. In addition, we\ndesign a context-consistent loss to ensure that the generated identity\nembeddings can produce identity-consistent images in various contexts.\nRemarkably, the whole model only takes 10 minutes for training, and can sample\ninfinite characters end-to-end during inference. Extensive experiments\ndemonstrate excellent performance of the proposed CharacterFactory on character\ncreation in terms of identity consistency and editability. Furthermore, the\ngenerated characters can be seamlessly combined with the off-the-shelf\nimage/video/3D diffusion models. We believe that the proposed CharacterFactory\nis an important step for identity-consistent character generation. Project page\nis available at: https://qinghew.github.io/CharacterFactory/.", + "authors": "Qinghe Wang, Baolu Li, Xiaomin Li, Bing Cao, Liqian Ma, Huchuan Lu, Xu Jia", + "published": "2024-04-24", + "updated": "2024-04-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In the evolving realm of text-to-image generation, diffusion models have emerged as indispensable tools for content creation [5, 26, 44].
However, the inherent stochastic nature of these generation models makes them unable to directly generate consistent subjects in different contexts, as shown in Figure 1. Such consistency enables many applications: illustrating books and stories, creating brand ambassadors, movie making, developing presentations, art design, identity-consistent data construction, and more. Subject-driven methods work by either representing a user-specific image as a new word [6, 18, 35] or learning image feature injection [34, 38, 42] for consistent image generation. Their training paradigms typically include per-subject optimization and encoder pretraining on large-scale datasets. The former usually requires lengthy optimization for each subject and tends to overfit the appearance in the input image [11, 27]. The latter consumes significant computational costs and struggles to stably capture the identity and its details [18, 34]. However, these methods attempt to produce images with the same identity as the reference images, instead of creating a new character in various contexts. A feasible workaround is to first use a text-to-image model to create a new character\u2019s image, and then adopt subject-driven methods to produce images with a consistent identity. Such a two-stage workflow could push the pretrained generation model away from its training distribution, leading to degraded generation quality and poor compatibility with other extension models. Therefore, there is a pressing need for a new end-to-end framework that enables consistent character generation. Here we are particularly interested in consistent image generation for humans. Since text-to-image models are pretrained on large-scale image-text data, which contain massive text prompts with celeb names, the models can generate identity-consistent images using celeb names. These names are ideal examples for this task. Previous work [35] has revealed that the word embeddings of celeb names constitute a human-centric prior space with editability, so we sample new characters in this space. In this work, we propose CharacterFactory, a framework for new character creation that mainly consists of an Identity-Embedding GAN (IDE-GAN) and a context-consistent loss. Specifically, a GAN model composed of MLPs is used to map from a latent space to the celeb embedding space in an adversarial learning manner, with word embeddings of celeb names as real data and generated ones as fake. Furthermore, to enable the generated embeddings to work like the native word embeddings of CLIP [24], we constrain these embeddings to exhibit consistency when combined with diverse contexts. Following this paradigm, the generated embeddings can be naturally inserted into the CLIP text encoder, and hence seamlessly integrated with image/video/3D diffusion models. In addition, since IDE-GAN has only MLPs as trainable parameters and accesses only the pretrained CLIP during training, it takes only 10 minutes to train; infinite new identity embeddings can then be sampled to produce identity-consistent images for new characters during inference. The main contributions of this work are summarized as follows: 1) We for the first time propose an end-to-end identity-consistent generation framework named CharacterFactory, which is empowered by a vector-wise GAN model in CLIP embedding space.
2) We design a context-consistent loss to ensure that the generated pseudo identity embeddings manifest contextual consistency. This plug-and-play regularization can contribute to other related tasks. 3) Extensive experiments demonstrate the superior identity consistency and editability of our method. In addition, we show the satisfactory interpolation property and strong generalization ability with off-the-shelf image/video/3D modules.", "main_content": "Recent advances in diffusion models [13, 31] have shown unprecedented capabilities for text-to-image generation [21, 25, 26], and new possibilities are still emerging [4, 36]. The amazing generation performance derives from high-quality large-scale image-text pairs [29, 30], flourishing foundational models [5, 23], and stronger controllability designs [45, 46]. Their fundamental principles are based on Denoising Diffusion Probabilistic Models (DDPMs) [13], which include a forward noising process and a reverse denoising process. The forward process adds Gaussian noise progressively to an input image, and the reverse process is modeled with a UNet trained to predict the noise. Supervised by the denoising loss, random noise can be denoised into a realistic image by iterating the reverse diffusion process. However, due to the stochastic nature of this generation process, existing text-to-image diffusion models are not able to directly implement consistent character generation.

2.2 Consistent Character Generation
Existing works on consistent character generation mainly focus on personalization for the target subject [11, 27]. Textual Inversion [11] represents the target subject as a new word embedding via optimization while freezing the diffusion model. DreamBooth [27] finetunes all weights of the diffusion model to fit only the target subject. IP-Adapter [42] designs a decoupled cross-attention mechanism for text features and image features. Celeb-Basis [43] and StableIdentity [35] use prior information from celeb names to make optimization easier and improve editability. PhotoMaker trains MLPs and the LoRA residuals of the attention layers to inject identity information [18]. But these methods attempt to produce identity-consistent images based on reference images, instead of creating a new character.
In addition, The Chosen One [1] clusters the generated images to obtain similar outputs, and learns a customized model on a highly similar cluster by iteratively optimizing personalized LoRA weights and word embeddings, which is a time-consuming process. ConsiStory [32] introduces a shared attention block mechanism and correspondence-based feature injection between a batch of images, but relying only on patch features lacks semantic understanding of the subject and makes the inference process complicated. Despite creating new characters, these methods still suffer from complicated pipelines and poor editability.

[Figure 2: Overview of the proposed CharacterFactory. (a) We take the word embeddings of celeb names as ground truths for identity-consistent generation and train a GAN model constructed by MLPs to learn the mapping from z to the celeb embedding space. In addition, a context-consistent loss is designed to ensure that the generated pseudo identity can exhibit consistency in various contexts. s*_1, s*_2 are placeholders for v*_1, v*_2. (b) Without diffusion models involved in training, IDE-GAN can end-to-end generate embeddings that can be seamlessly inserted into diffusion models to achieve identity-consistent generation.]

2.3 Integrating Diffusion Models and GANs
Generative Adversarial Networks (GANs) [12, 16] model the mapping between data distributions by adversarially training a generator and a discriminator. Although GAN-based methods have been outperformed by powerful diffusion models for image generation, they perform well on small-scale datasets [8], benefiting from the flexibility of GANs. Some methods focus on combining the two to improve the optimization objective for diffusion models with GANs [37, 40, 41]. In this work, we for the first time construct a GAN model in CLIP embedding space to sample consistent identities for diffusion models.

3 METHOD
To enable text-to-image models to directly generate images with the same identity, we present a new end-to-end framework, named CharacterFactory, which produces pseudo identity embeddings that can be inserted into any context to achieve identity-consistent character generation, as shown in Figure 2.
In this section, the background of Stable Diffusion is first briefly introduced in Section 3.1. Later, the technical details of the proposed CharacterFactory are elaborated in Sections 3.2 and 3.3. Finally, our full objective is presented in Section 3.4.

3.1 Preliminary
In this work, we employ the pretrained Stable Diffusion [26] (denoted as SD) as the base text-to-image model. SD consists of three components: a CLIP text encoder e_text [24], a Variational Autoencoder (VAE) (E, D) [9], and a denoising U-Net \u03b5_\u03b8. With text conditioning, \u03b5_\u03b8 can denoise sampled Gaussian noise into realistic images conforming to the given text prompts p. In particular, the tokenizer of e_text sequentially divides and encodes p into l integer tokens. Subsequently, by looking up the tokenizer\u2019s dictionary, the embedding layer of e_text retrieves a group of corresponding word embeddings g = [v_1, ..., v_l], v_i \u2208 R^d. Then, the text transformer \u03c4_text of e_text further represents g as contextual embeddings \\bar{g} = [\\bar{v}_1, ..., \\bar{v}_l], \\bar{v}_i \u2208 R^d, with the cross-attention mechanism. And \u03b5_\u03b8 renders the content conveyed in the text prompts by cross-attention between \\bar{g} and the diffusion features.

3.2 IDE-GAN
Since Stable Diffusion is trained with numerous celeb photos and corresponding captions containing celeb names, these names can be inserted into various contexts to generate identity-aligned images. We believe that the word embeddings of these celeb names can be considered ground truths for identity-consistent generation. Therefore, we train an Identity-Embedding GAN (IDE-GAN) model to learn a mapping from a latent space to the celeb embedding space, G : z \u2192 v, with the expectation that it can generate pseudo identity embeddings that master identity-consistent editability, like celeb embeddings. Specifically, we employ 326 celeb names [35], each consisting only of a first name and a last name, and encode them into the corresponding word embeddings C \u2208 R^{326\u00d72\u00d7d} for training. In addition, we observe that adding a small noise to the celeb embeddings can still generate images with the corresponding identity. Therefore, we empirically introduce random noise \u03b7 \u223c N(0, I) scaled by 5e\u22123 as a data augmentation.

[Figure 3: Effect of L_adv and L_con. The images in each column are generated by a randomly sampled z and two prompts (\u201cis smiling\u201d, \u201cwith red hair\u201d) according to the pipeline in Figure 2(b), comparing CharacterFactory (L_adv + L_con), Only L_con, and Only L_adv under varying random z. The placeholders s*_1, s*_2 of prompts such as \u201cs*_1 s*_2 is smiling\u201d are omitted in this work for brevity (Zoom in for the best view).]
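A sketch of how the celeb ground-truth embeddings C \u2208 R^{326\u00d72\u00d7d} could be gathered from SD's CLIP text encoder with Hugging Face transformers. The checkpoint and the keep-only-two-token-names filter follow the paper's setup, but the exact code and the example names are assumptions:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

repo = "stabilityai/stable-diffusion-2-1-base"  # SD v2.1-base, as in the paper
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
embedding_table = text_encoder.get_input_embeddings()  # vocab_size x d lookup

def celeb_embeddings(names):
    """Look up word embeddings for names that tokenize to exactly two tokens
    (first name + last name, one token each; this filter is an assumption)."""
    rows = []
    for name in names:
        ids = tokenizer(name, add_special_tokens=False).input_ids
        if len(ids) != 2:
            continue
        with torch.no_grad():
            rows.append(embedding_table(torch.tensor(ids)))  # (2, d)
    return torch.stack(rows)  # C: (num_names, 2, d)

C = celeb_embeddings(["Tom Cruise", "Taylor Swift"])  # example names from Fig. 2
```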
As shown in Figure 2(a), given a latent code z \u223c N(0, I), the generator G is trained to produce embeddings [v*_1, v*_2] that cannot be distinguished from \u201creal\u201d data (i.e., celeb embeddings) by an adversarially trained discriminator D. To alleviate the training difficulty of G, we use AdaIN to help the MLPs\u2019 output embeddings [v'_1, v'_2] land more naturally in the celeb embedding space [35]:

v^*_i = \\sigma(C_i) \\left( \\frac{v'_i - \\mu(v'_i)}{\\sigma(v'_i)} \\right) + \\mu(C_i), for i = 1, 2,   (1)

where \u03bc(v'_i) and \u03c3(v'_i) are scalars, while \u03bc(C_i) \u2208 R^d and \u03c3(C_i) \u2208 R^d are vectors, because each dimension of C_i has a different distribution. And D is trained to detect the generated embeddings as \u201cfake\u201d. This adversarial training is supervised by:

L_adv = E_{[v_1, v_2] \\sim C}[\\log D([v_1, v_2] + \u03b7)] + E[\\log(1 - D(G(z)))],   (2)

where G tries to minimize this objective and D tries to maximize it. As shown in column 1 of Figure 3, the embeddings [v*_1, v*_2] generated from z can be inserted into different contextual prompts to produce human images that conform to the given text descriptions. This indicates that [v*_1, v*_2] have obtained editability, enough information for human character generation, and the flexibility to work with other words for editing; however, the \u201cOnly L_adv\u201d setting cannot guarantee identity consistency in various contexts.
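A sketch of Eq. (1) and of Eq. (2) in its standard binary-cross-entropy (non-saturating) form; tensor shapes follow the paper's two-embeddings-per-identity layout, while the epsilon term and the detach pattern are ordinary GAN-training assumptions:

```python
import torch
import torch.nn as nn

def adain_to_celeb_space(v_prime, celeb_mean, celeb_std):
    """Eq. (1): renormalize raw generator outputs with celeb statistics.

    v_prime: (B, 2, d) MLP outputs [v'_1, v'_2].
    celeb_mean, celeb_std: (2, d) per-dimension statistics of C.
    """
    mu = v_prime.mean(dim=-1, keepdim=True)      # scalar stats per embedding
    sigma = v_prime.std(dim=-1, keepdim=True)
    return celeb_std * (v_prime - mu) / (sigma + 1e-8) + celeb_mean

bce = nn.BCEWithLogitsLoss()

def d_loss(D, real_pair, fake_pair, eta_scale=5e-3):
    """Discriminator side of Eq. (2), with the paper's noise augmentation."""
    real = real_pair + eta_scale * torch.randn_like(real_pair)
    logits_real, logits_fake = D(real), D(fake_pair.detach())
    return bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))

def g_adv_loss(D, fake_pair):
    """Non-saturating generator counterpart of Eq. (2)."""
    logits = D(fake_pair)
    return bce(logits, torch.ones_like(logits))
```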
3.3 Context-Consistent Loss
To enable the generated embeddings [v*_1, v*_2] to be naturally inserted into the pretrained Stable Diffusion, they are encouraged to work as similarly as possible to normal word embeddings. CLIP, which is trained to align images and texts, maps the word corresponding to a certain subject in various contexts to similar representations. Hence, we design the context-consistent loss to encourage the generated word embeddings to have the same property. Specifically, we sample 1,000 text prompts with ChatGPT [22] for various contexts (covering expressions, decorations, actions, attributes, and backgrounds), like \u201cUnder the tree, s*_1 s*_2 has a picnic\u201d, and demand that the position of \u201cs*_1 s*_2\u201d in the context be as diverse as possible. During training, we sample N prompts from the collected prompt set, and use the tokenizer and embedding layer to encode them into N groups of word embeddings. The generated embeddings [v*_1, v*_2] are inserted at the position of \u201cs*_1 s*_2\u201d. Then, the text transformer \u03c4_text further represents them as N groups of contextual embeddings, where we expect to minimize the average pairwise distance among {[\\bar{v}^*_1, \\bar{v}^*_2]_i}_{i=1}^N:

L_con = \\frac{1}{\\binom{N}{2}} \\sum_{j=1}^{N-1} \\sum_{k=j+1}^{N} \\lVert [\\bar{v}^*_1, \\bar{v}^*_2]_j - [\\bar{v}^*_1, \\bar{v}^*_2]_k \\rVert_2^2,   (3)

where N is 8 by default. In this way, the pseudo word embeddings [v*_1, v*_2] generated by IDE-GAN can exhibit consistency in various contexts. A naive idea is to train the MLPs with only L_con, which shows promising consistency, as shown in columns 2 and 3 of Figure 3. However, L_con only focuses on consistency instead of diversity, and mode collapse occurs in spite of different z. When L_con and L_adv work together, the proposed CharacterFactory can sample diverse context-consistent identities, as shown in columns 4 and 5 of Figure 3. Notably, this regularization loss is plug-and-play and can help other subject-driven generation methods learn context-consistent subject word embeddings.

3.4 Full Objective
Our full objective can be expressed as:

G^* = \\arg \\min_G \\max_D \\; \\lambda_1 L_adv(G, D) + \\lambda_2 L_con(G, \\tau_text),   (4)

where \u03bb_1 and \u03bb_2 are trade-off parameters. The discriminator D\u2019s job remains unchanged, and the generator G is tasked not only to learn the properties of celeb embeddings to deceive D, but also to manifest contextual consistency in the output space of the text transformer \u03c4_text. Here, we emphasize two noteworthy points:
\u2022 GAN for word embeddings. We introduce a GAN in the CLIP embedding space for the first time and leverage the subsequent network to design the context-consistent loss, which can perceive the generated pseudo identity embeddings in diverse contexts. This design is similar in spirit to previous works on image generation [2, 15, 47], which have demonstrated that mixing the GAN objective with a more traditional loss such as the L2 distance is beneficial.
\u2022 No need for diffusion-based training. Obviously, the denoising UNet and the diffusion loss, which are commonly used to train diffusion-based methods, are not involved in our training process. Remarkably, the proposed IDE-GAN can seamlessly integrate with diffusion models to achieve identity-consistent generation at inference, as shown in Figure 2(b).
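A sketch of Eq. (3) and of the combined generator objective in Eq. (4) with \u03bb_1 = \u03bb_2 = 1; encode_prompts_with_pseudo_identity is a hypothetical helper that runs the frozen CLIP text transformer on N prompts with [v*_1, v*_2] substituted at the placeholder positions and returns the contextual embeddings at those two positions:

```python
import torch

def context_consistent_loss(ctx):
    """Eq. (3): mean squared pairwise distance between the N contextual
    embeddings of the same pseudo identity in different prompts.

    ctx: (N, 2, d) contextual embeddings [bar v*_1, bar v*_2] for N prompts.
    """
    N = ctx.shape[0]
    flat = ctx.reshape(N, -1)
    sq_dists = torch.cdist(flat, flat, p=2).pow(2)   # (N, N) squared L2 distances
    iu = torch.triu_indices(N, N, offset=1)
    return sq_dists[iu[0], iu[1]].mean()             # average over N-choose-2 pairs

# Combined generator objective, Eq. (4):
# ctx = encode_prompts_with_pseudo_identity(prompts, v_star)  # hypothetical helper
# loss_G = g_adv_loss(D, v_star) + context_consistent_loss(ctx)
```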
4 EXPERIMENTS

4.1 Experimental Setting

Implementation Details. We employ Stable Diffusion v2.1-base as our base model. The MLPs of the generator $G$ and the discriminator $D$ have 2 and 3 layers, respectively. The dimension of $z$ is set to 64 empirically. The batch size and learning rate are set to 1 and $5\mathrm{e}{-5}$. We employ an Adam optimizer [17] with momentum parameters $\beta_1 = 0.5$ and $\beta_2 = 0.999$ to optimize our IDE-GAN. The trade-off parameters $\lambda_1$ and $\lambda_2$ are both 1 by default. CharacterFactory is trained in only 10 minutes for 10,000 steps on a single NVIDIA A100. The classifier-free guidance [14] scale is 8.5 for inference by default. More implementation details can be found in the supplementary material.

[Figure 4: Qualitative comparisons with two-stage workflows using five baselines (denoted with †) for creating consistent characters; example prompts include "a photo of ... wearing headphones", "a photo of ... wearing a Christmas hat", and "... wearing a spacesuit". The upper left corner of each two-stage baseline shows the image generated by Stable Diffusion as the input of the second stage. Two-stage workflows struggle to maintain the identity of the generated image and degrade the image quality. In comparison, the proposed CharacterFactory can generate high-quality, identity-consistent character images with diverse layouts while conforming to the given text prompts (Zoom in for the best view).]

Baselines. Since the most related methods, The Chosen One [1] and ConsiStory [32], which are also designed for consistent text-to-image generation, have not released their code yet, we compare against the content provided in their papers. In addition, as introduced in Section 1, two-stage workflows with subject-driven methods can also create new characters. Therefore, we first use the prompt "a photo of a person, facing to the camera" to drive Stable Diffusion to generate images of new characters as the input of the second stage, and then use these subject-driven methods to produce character images with diverse prompts for comparison. These input images are used for subject information injection and are not involved in the quantitative comparisons. The baselines include the optimization-based methods Textual Inversion [11], DreamBooth [27], and Celeb-Basis [43], and the encoder-based methods IP-Adapter [42] and PhotoMaker [18]. We prioritize the official models released by these methods, and use the Stable Diffusion 2.1 versions of Textual Inversion and DreamBooth for a fair comparison.

Evaluation. The input of our method comes from random noise, so this work does not compare subject preservation in the quantitative comparison. To conduct a comprehensive evaluation, we use 40 text prompts that cover decorations, actions, expressions, attributes and backgrounds [18]. Overall, we use 70 identities and 40 text prompts to generate 2,800 images for each competing method.

Table 1: Quantitative comparisons with two-stage workflows using five baselines (denoted with †). ↑ indicates higher is better, and ↓ indicates lower is better. The best results are shown in bold. We define the speed as the time it takes to create a new consistent character on a single NVIDIA A100 GPU. CharacterFactory obtains superior performance on identity consistency, editability, trusted face diversity, image quality and speed, consistent with the qualitative comparisons.
Methods | Subject Cons.↑ | Identity Cons.↑ | Editability↑ | Face Div.↑ | Trusted Div.↑ | Image Quality↓ | Speed (s)↓
Textual Inversion† [11] | 0.647 | 0.295 | 0.274 | 0.392 | 0.078 | 47.94 | 3200
DreamBooth† [27] | 0.681 | 0.443 | 0.287 | 0.339 | 0.073 | 62.66 | 1500
IP-Adapter† [42] | 0.853 | 0.447 | 0.227 | 0.192 | 0.096 | 95.25 | 7
Celeb-Basis† [43] | 0.667 | 0.369 | 0.273 | 0.378 | 0.101 | 56.43 | 480
PhotoMaker† [18] | 0.694 | 0.451 | 0.301 | 0.331 | 0.138 | 53.37 | 10
CharacterFactory | 0.764 | 0.498 | 0.332 | 0.333 | 0.140 | 22.58 | 3

[Figure 5: Qualitative comparisons with the generation results reported in the papers of the two most related methods, The Chosen One [1] and ConsiStory [32]; prompts include "drinking a beer", "giving a talk in a conference", "a watercolor painting of", "in a studio", "in a meadow", and "eating a piece of cake". CharacterFactory achieves comparable performance with the same prompts (Zoom in for the best view).]

Metrics: We calculate the CLIP visual similarity (CLIP-I) between the generated results of "a photo of $s_1^*$ $s_2^*$" and those of the other text prompts to evaluate Subject Consistency. We calculate face similarity [7] and perceptual similarity (i.e., LPIPS) [48] between the detected face regions under the same settings to measure Identity Consistency and Face Diversity [18, 39]. However, inconsistent faces may obtain high face diversity, leading to unreliable results. Therefore, we also introduce Trusted Face Diversity [35], computed from the product of the pairwise face-similarity and face-diversity scores between each pair of images, to evaluate whether the generated faces of the same identity are both consistent and diverse (one plausible implementation is sketched after the qualitative comparison below). We calculate the text-image similarity (CLIP-T) to measure Editability. In addition, we randomly sample 70 celeb names to generate images with the introduced 40 text prompts as pseudo ground truths, and calculate the Fréchet Inception Distance (FID) [20] between the images generated by the competing methods and the pseudo ground truths to measure Image Quality.

4.2 Comparison with Two-Stage Workflows

Qualitative Comparison. As mentioned in Section 4.1, we randomly generate 70 front-view character images to inject identity information into the two-stage workflows built on subject-driven baselines (denoted with †), as shown in Figure 4. PhotoMaker† [18] and Celeb-Basis† [43] are human-centric methods: the former pretrains a face encoder and LoRA residuals on large-scale datasets, and the latter optimizes word embeddings to represent the target identity, but both suffer from degraded image quality under this setting. IP-Adapter† [42] learns text-image decoupled cross-attention, but fails to render the "Christmas hat" and "spacesuit". DreamBooth† [27] finetunes the whole model to adapt to the input image and tends to generate images similar to it; it lacks generation diversity and also fails to produce the "Christmas hat". Due to the stochasticity of Textual Inversion† [11]'s optimization process, its identity consistency and image quality are relatively weak. Overall, although the two-stage workflows show decent performance in some respects, they all rely on the input images and struggle to preserve the input identity. In contrast, the proposed CharacterFactory samples pseudo identities end-to-end and generates identity-consistent, prompt-aligned results with high quality.
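For concreteness, the snippet below shows one plausible implementation of the pairwise identity-consistency, face-diversity, and trusted-diversity scores described in the Evaluation paragraph, under our reading of the text. `id_embs` is an assumed matrix of face-recognition embeddings and `lpips_mat` an assumed matrix of pairwise LPIPS scores for the detected face crops, so the exact numbers need not match the authors' evaluation scripts.

```python
import itertools
import torch
import torch.nn.functional as F

def pairwise_face_metrics(id_embs, lpips_mat):
    """id_embs: [M, d] face embeddings of M images of one identity;
    lpips_mat: [M, M] pairwise LPIPS between the face crops.
    Returns (identity_consistency, face_diversity, trusted_diversity)."""
    cons, div, trusted = [], [], []
    for j, k in itertools.combinations(range(id_embs.shape[0]), 2):
        sim = F.cosine_similarity(id_embs[j], id_embs[k], dim=0)  # consistency
        d = lpips_mat[j, k]                                       # diversity
        cons.append(sim)
        div.append(d)
        trusted.append(sim * d)   # high only if diverse AND the same identity
    return (torch.stack(cons).mean(),
            torch.stack(div).mean(),
            torch.stack(trusted).mean())
```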
Quantitative Comparison. In addition, we provide the quantitative comparison with the five baselines in Table 1. Since IP-Adapter† tends to generate frontal faces, it obtains better subject consistency (CLIP-I) but weak editability (CLIP-T). CLIP-I mainly measures high-level semantic alignment and lacks an assessment of identity, so we further introduce identity consistency for evaluation. Our method achieves the best identity consistency and editability, and second-place subject consistency. In particular, the proposed context-consistent loss incentivizes pseudo identities to exhibit consistency in various contexts. On the other hand, our effective adversarial learning enables pseudo identity embeddings to work in Stable Diffusion as naturally as celeb embeddings, and thus outperforms PhotoMaker† (the second place) by 0.031 on editability. Textual Inversion† and Celeb-Basis† obtain good face diversity but weak trusted diversity. This is because face diversity measures whether the faces generated for the same identity are diverse in different contexts, but inconsistent identities can also be incorrectly recognized as "diverse". Therefore, trusted face diversity is introduced to evaluate whether the results are both consistent and diverse: Textual Inversion† obtains the best face diversity but is inferior to CharacterFactory by 0.062 on trusted face diversity. For image quality (FID), the two-stage workflows directly lead to an unacceptable quantitative degradation of the competing methods. The two-stage workflows also consume more time to create identity-consistent characters. In comparison, our end-to-end framework produces more natural generation results, the best image quality, and a faster inference workflow.

[Figure 6: Interpolation property of IDE-GAN. We conduct linear interpolation between randomly sampled $z_1$ and $z_2$ (e.g., $0.5 z_1 + 0.5 z_2$) and generate pseudo identity embeddings with IDE-GAN. To visualize the smooth variations in image space, we insert the generated embeddings into Stable Diffusion via the pipeline of Figure 2(b), with prompts such as "a photo of $s_1^*$ $s_2^*$ wearing headphones on a bus" and "a photo of $s_1^*$ $s_2^*$ holding a bottle of wine". The experiments in rows 1 and 3 use the same seeds, and rows 2 and 4 use random seeds (Zoom in for the best view).]

Table 2: Comparisons with the two most related methods on speed (i.e., time to produce a consistent identity) and the form of identity representation. CharacterFactory is faster and uses a more lightweight and natural form of identity representation, which ensures seamless collaboration with other modules and convenient identity reuse.

Method | Speed↓ (s) | Identity Representation
The Chosen One [1] | 1,200 | LoRAs + two word embeddings
ConsiStory [32] | 49 | Self-attention keys and values of reference images
CharacterFactory | 3 | Two word embeddings

4.3 Comparison with Consistent-T2I Methods

In addition, we compare with the two most related methods, The Chosen One [1] and ConsiStory [32], using the content provided in their papers.
These two methods are also designed for consistent character generation, but have not released their code yet.

Qualitative Comparison. As shown in Figure 5, The Chosen One uses Textual Inversion + DreamBooth-LoRA to fit the target identity, but only achieves consistent face attributes and fails to obtain strong identity consistency. Besides, the excessive additional parameters degrade the image quality. ConsiStory elicits consistency by using shared attention blocks to learn the subject patch features within a batch. Despite its consistent results, it lacks controllability and semantic understanding of the input subject due to its dependence on patch features, i.e., it cannot edit with abstract attributes such as age and fat/thin. In comparison, our method achieves comparable performance on identity consistency and image quality, and can even be prompted with abstract attributes, as shown in Figures 1 and 7.

Practicality. As introduced in Section 2.2, The Chosen One searches for a consistent character through a lengthy iterative procedure that takes about 1,200 seconds on a single NVIDIA A100 GPU, and needs to save LoRA weights plus two word embeddings for each character. ConsiStory is training-free, but its inference pipeline is time-consuming (about 49 seconds to produce an identity-consistent character) and requires saving self-attention keys and values of reference images for each character. In comparison, CharacterFactory is faster and more lightweight: it takes only 10 minutes to train IDE-GAN, which can then sample pseudo identity embeddings without limit, and only 3 seconds to create a new character with Stable Diffusion. Besides, using two word embeddings to represent a consistent identity is convenient for identity reuse and for integration with other modules such as video/3D generation models.

4.4 Ablation Study

Table 3: Ablation study with Identity Consistency, Editability, Trusted Face Diversity and the proposed Identity Diversity. More parameter analysis is provided in the supplementary material.

Setting | Identity Cons. | Editability | Trusted Div. | Identity Div.
Only $\mathcal{L}_{adv}$ | 0.078 | 0.299 | 0.013 | 0.965
Only $\mathcal{L}_{con}$ | 0.198 | 0.276 | 0.057 | 0.741
Ours | 0.498 | 0.332 | 0.140 | 0.940

In addition to the ablation results presented in Figure 3, we conduct a more comprehensive quantitative analysis in Table 3. To evaluate the diversity of generated identities, we calculate the average pairwise face similarity between 70 images generated with "a photo of $s_1^*$ $s_2^*$", and define (1 − the average similarity) as identity diversity (lower similarity between generated identities means higher diversity; a sketch follows below). Note that identity diversity is only meaningful when identity consistency is satisfactory. As mentioned in Section 3.2, "Only $\mathcal{L}_{adv}$" can generate prompt-aligned human images (0.299 on editability), but the faces generated from the same latent code $z$ differ (0.078 on identity consistency). This is because learning the mapping $z \to v$ with only $\mathcal{L}_{adv}$ deceives the discriminator $D$ but still fails to perceive contextual consistency. "Only $\mathcal{L}_{con}$" is prone to mode collapse, producing similar identities for different $z$, which manifests as weaker identity diversity (0.741). Notably, identity consistency is still unsatisfactory under this setting; we attribute this to the fact that a direct L2 loss cannot capture such an abstract objective (i.e., identity consistency).
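A short sketch of the Identity Diversity score defined above, i.e., one minus the average pairwise face similarity over images generated from different latent codes; the face-embedding input is again an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def identity_diversity(id_embs):
    """id_embs: [M, d] face embeddings of images from M different latent codes."""
    e = F.normalize(id_embs, dim=1)
    sim = e @ e.t()                                   # [M, M] cosine similarities
    m = sim.shape[0]
    avg_off_diag = (sim.sum() - sim.diagonal().sum()) / (m * (m - 1))
    return 1.0 - avg_off_diag     # lower cross-identity similarity -> higher diversity
```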
When using $\mathcal{L}_{adv}$ and $\mathcal{L}_{con}$ together, IDE-GAN generates diverse, context-consistent pseudo identity embeddings, thereby achieving the best overall quantitative scores.

4.5 Interpolation Property of IDE-GAN

The interpolation property of GANs means that interpolating between different randomly sampled latent codes produces semantically smooth variations in image space [28]. To evaluate whether our IDE-GAN carries this property, we randomly sample $z_1$ and $z_2$ and perform linear interpolation, as shown in Figure 6. IDE-GAN maps the interpolated latent codes to the corresponding pseudo identity embeddings. Since the output space of IDE-GAN consists of embeddings instead of images, the variations cannot be visualized directly as with traditional GANs [16, 28]; we therefore insert these pseudo identity embeddings into Stable Diffusion to generate the corresponding images via the pipeline of Figure 2(b). As shown in Figure 6, CharacterFactory produces continuous identity variations as the latent code is interpolated, and an interpolated latent code (e.g., $0.5 z_1 + 0.5 z_2$) can be chosen for further identity-consistent generation. This demonstrates that IDE-GAN has a satisfactory interpolation property and can be seamlessly integrated with Stable Diffusion.

4.6 Applications

As shown in Figures 1 and 7, the proposed CharacterFactory can be used directly for various downstream tasks and is capable of broader extensions such as video/3D scenarios.

Story Illustration. In Figure 7, a full story is divided into a set of text prompts for different scenes, and CharacterFactory creates a new character to produce identity-consistent story illustrations.

[Figure 7: Story Illustration (Scenes 1-4). The proposed CharacterFactory can illustrate a story with the same character. Example story: "This is the story about Jenny. Jenny lived in a poor family when she was a child. So, she studied hard after going to school. At the age of 25, she found a job as a programmer. Now, she is successful in her career, enjoys coffee, and feels satisfied with her life in New York."]

Stratified Sampling. The proposed CharacterFactory can create diverse characters, such as different genders and races. Taking gender as an example, we can categorize celeb names into "Man" and "Woman" to train a Man-IDE-GAN and a Woman-IDE-GAN separately, each of which generates only the specified gender. Our generator $G$ is constructed with only two-layer MLPs, so stratified sampling does not introduce excessive storage cost. More details can be found in the supplementary material.

Virtual Humans in Image/Video/3D Generation. Current virtual human generation mainly includes 2D/3D facial reconstruction, talking-head generation and body/human movements [50], which typically rely on pre-existing images and lack scenario diversity and editability. CharacterFactory instead creates new characters end-to-end and performs identity-consistent virtual human image generation. In addition, since the pretrained Stable Diffusion 2.1 is kept fixed and the generated pseudo identity embeddings can be inserted into the CLIP text transformer naturally, our method can collaborate with SD-based plug-and-play modules. As shown in Figure 1, we integrate CharacterFactory with ControlNet-OpenPose [3, 46], ModelScopeT2V [33] and LucidDreamer [19] to implement identity-consistent virtual human image/video/3D generation.
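The following framework-agnostic sketch illustrates the inference path of Figure 2(b) referenced above: the two pseudo word embeddings are spliced into the prompt's word-embedding sequence before the frozen CLIP text transformer, whose output then conditions Stable Diffusion. `embed_tokens`, `text_transformer`, and `placeholder_pos` are stand-in names, not a real pipeline API.

```python
import torch

def condition_on_identity(prompt_ids, placeholder_pos, v_star,
                          embed_tokens, text_transformer):
    """prompt_ids: [L] token ids with two placeholder slots at placeholder_pos;
    v_star: [2, d] pseudo identity embeddings produced by IDE-GAN."""
    word_embs = embed_tokens(prompt_ids)           # [L, d] ordinary word embeddings
    word_embs[placeholder_pos[0]] = v_star[0]      # splice s1*
    word_embs[placeholder_pos[1]] = v_star[1]      # splice s2*
    return text_transformer(word_embs)             # contextual embeddings for SD

# Interpolated identities (Section 4.5) reuse the same path, e.g.:
#   z = 0.5 * z1 + 0.5 * z2; v_star = generator(z); then condition SD as above.
```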
Identity-Consistent Dataset Construction. Some human-centric subject-driven generation methods [6, 18] construct large-scale celeb datasets for training: PhotoMaker [18] crawls celeb photos from the Internet, and DreamIdentity [6] uses text prompts containing celeb names to drive Stable Diffusion to generate celeb images. Their constructed data includes only celebs, leading to a limited number of identities. Notably, the proposed CharacterFactory can use diverse text prompts to generate identity-consistent images without limit for dataset construction. Furthermore, collaboration with the aforementioned SD-based plug-and-play modules can yield identity-consistent video/3D datasets.

5 CONCLUSION

In this work, we propose CharacterFactory to unlock the end-to-end identity-consistent generation ability of diffusion models. It consists of an Identity-Embedding GAN (IDE-GAN), which learns the mapping from a latent space to the celeb embedding space, and a context-consistent loss for identity consistency. It takes only 10 minutes to train and 3 seconds for end-to-end inference. Extensive quantitative and qualitative experiments demonstrate the superiority of CharacterFactory, and we further show that our method can empower many interesting applications." + }, + { + "url": "http://arxiv.org/abs/2404.07178v1", + "title": "Move Anything with Layered Scene Diffusion", + "abstract": "Diffusion models generate images with an unprecedented level of quality, but\nhow can we freely rearrange image layouts? Recent works generate controllable\nscenes via learning spatially disentangled latent codes, but these methods do\nnot apply to diffusion models due to their fixed forward process. In this work,\nwe propose SceneDiffusion to optimize a layered scene representation during\nthe diffusion sampling process. Our key insight is that spatial disentanglement\ncan be obtained by jointly denoising scene renderings at different spatial layouts.\nOur generated scenes support a wide range of spatial editing operations,\nincluding moving, resizing, cloning, and layer-wise appearance editing\noperations, including object restyling and replacing. Moreover, a scene can be\ngenerated conditioned on a reference image, thus enabling object moving for\nin-the-wild images. Notably, this approach is training-free, compatible with\ngeneral text-to-image diffusion models, and responsive in less than a second.", + "authors": "Jiawei Ren, Mengmeng Xu, Jui-Chieh Wu, Ziwei Liu, Tao Xiang, Antoine Toisoul", + "published": "2024-04-10", + "updated": "2024-04-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Controllable scene generation, i.e., the task of generating images with rearrangeable layouts, is an important topic of generative modeling [31, 51], with applications ranging from content generation and editing for social media platforms to interactive interior design and video games. In the GAN era, latent spaces have been designed to offer mid-level control over generated scenes [9, 30, 48, 49]. Such latent spaces are optimized to provide a disentanglement between scene layout and appearance in an unsupervised manner.
For instance, BlobGAN [9] uses a group of splattering blobs for 2D layout control, and GIRAFFE [30] uses compositional neural fields for 3D layout control. Although these methods provide good control of the scene layout, they remain limited in the quality of the generated images. On the other hand, diffusion models have recently shown unprecedented performance on the text-to-image (T2I) generation task [5, 8, 15, 36, 39, 42]. Still, they cannot provide fine-grained spatial control due to the lack of mid-level representations stemming from their fixed forward noising process [15, 42]. In this work, we propose a framework to bridge this gap and allow controllable scene generation with a general pretrained T2I diffusion model. Our method, entitled SceneDiffusion, is based on the core observation that spatial-content disentanglement can be obtained during the diffusion sampling process by denoising multiple scene layouts at each denoising step. More specifically, at each diffusion step t, we optimize a scene representation by first randomly sampling several scene layouts, running locally conditioned denoising on each layout in parallel, and then analytically optimizing the representation for the next diffusion step t−1 to minimize its distance to each of the denoised results. We employ a layered scene representation [17, 18, 22], where each layer represents an object whose shape is controlled by a mask and whose content is controlled by a text description, allowing us to compute object occlusions using depth ordering. Rendering of the layered representation is done by running a short schedule of image diffusion, which usually completes within a second. Overall, SceneDiffusion generates rearrangeable scenes without requiring finetuning on paired data [28, 52], mask-specific training [36], or test-time optimization [34, 47], and is agnostic to denoiser architecture designs. In addition, to enable in-the-wild image editing, we propose to use the sampling trajectory of the reference image as an anchor in SceneDiffusion. When denoising multiple layouts simultaneously, we increase the weight of the reference layout in the noise update to keep the scene faithful to the reference content. By disentangling the spatial location and visual appearance of the contents, our approach reduces hallucinations and preserves the overall content across different edits better than baselines [10, 23, 27]. To quantify the performance, we build an evaluation benchmark by creating a dataset containing 1,000 text prompts and over 5,000 images associated with image captions, local descriptions, and mask annotations. We evaluate our proposed approach on this dataset and show that it outperforms prior works on both image quality and layout consistency metrics by a clear margin on both controllable scene generation and image spatial editing tasks. In summary, our contributions are: • We propose a novel sampling strategy, SceneDiffusion, to generate layered scenes with image diffusion models. • We show that the layered scene representation supports flexible layout rearrangements, enabling interactive scene manipulation and in-the-wild image editing. • We build an evaluation benchmark and observe that our method achieves state-of-the-art performance quantitatively on both scene generation and image editing tasks.", "main_content": "2.1. Controllable Scene Generation
Generating controllable scenes has been an important topic in generative modeling [31, 51] and has been extensively studied in the GAN context [9, 30, 48, 49]. Various approaches have been developed for applications that include controllable image generation [9, 48], 3D-aware image generation [2, 16, 30, 49] and controllable video generation [24]. Usually, mid-level control is obtained in an unsupervised manner by building a spatially disentangled latent space. However, such techniques are not directly applicable to T2I diffusion models: diffusion models employ a fixed forward process [15, 42], which constrains the flexibility of learning a spatially disentangled mid-level representation. In this work, we solve this issue by optimizing a layered scene representation during the diffusion sampling process. It is also noteworthy that recent works enable diffusion models to generate images grounded on given layouts [11, 20, 28, 52]. However, they do not focus on spatial disentanglement and do not guarantee similar content after rearranging layouts.

2.2. Diffusion-based Image Editing

Off-the-shelf T2I diffusion models can be powerful image editing tools. With the help of inversion [26, 43] and subject-centric finetuning [12, 38], various approaches have been proposed to achieve image-to-image translation, including concept replacement and restylization [7, 13, 19, 25, 45]. However, these approaches are restricted to in-place editing, and editing the spatial location of objects has rarely been explored. Moreover, many of the approaches exploit an attention correspondence [3, 10, 13, 45] or a feature correspondence [27, 41, 44] with the final image, making the approach dependent on a specific denoiser architecture. Compared with concurrent works on spatial image editing with diffusion models using self-guidance [10, 27] and feature tracking [41], our method differs in that: 1) we generate scenes that preserve the content across different spatial edits, 2) we use an explicit layered representation that gives intuitive and precise control, and 3) we render a new layout via a short schedule of image diffusion, while guidance-based approaches require a long sampling schedule and feature tracking requires gradient-based optimization for each edit.

[Figure 2. Method overview. Our framework has two stages: i) optimization stage, where we optimize a layered scene representation with SceneDiffusion for $T - \tau$ diffusion steps, and ii) inference stage, where we render the optimized layered scene with $\tau$-step standard image diffusion. iii) SceneDiffusion updates the layered scene by denoising multiple randomly sampled layouts in parallel. In the illustration, the scene has 4 layers with prompts $y_1$: "bed", $y_2$: "wooden cabinet", $y_3$: "window", $y_4$: "bedroom". Each layer consists of a feature map $f$, a mask $m$ (shown as a box), and a text prompt $y$. At denoising step $t$, we randomly sample $N$ layouts and render them to get different views $v^{(t)}$. We then denoise the views using a pretrained T2I diffusion model for one step to get $\hat{v}^{(t-1)}$, which are used to update the feature maps $f^{(t)} \to f^{(t-1)}$ in the layered scene. Note that the boxes only serve as a rough geometry of the objects (like blobs in Epstein et al. [9]) and can be replaced by more accurate masks.]
3. Our Approach

Framework Overview. An overview of our framework is shown in Figure 2. In Section 3.1, we briefly introduce preliminary work on diffusion models and locally conditioned diffusion. In Section 3.2, we present how we obtain a spatially disentangled layered scene with SceneDiffusion. Finally, in Section 3.3, we discuss how SceneDiffusion enables spatial editing of in-the-wild images.

3.1. Preliminary

Diffusion Models. Diffusion models [15, 42] are a type of generative model that learns to generate data from random input noise. More specifically, given an image from the data distribution $x_0 \sim p(x_0)$, a fixed forward noising process progressively adds random Gaussian noise to the data, creating a Markov chain of random latent variables $x_1, x_2, \ldots, x_T$ following:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\ \sqrt{1 - \beta_t}\, x_{t-1},\ \beta_t I\right), \tag{1}$$

where $\beta_1, \ldots, \beta_T$ are constants corresponding to the noise schedule, chosen so that for a high enough number of diffusion steps $x_T$ is approximately a standard Gaussian. We then train a denoiser $\theta$ that learns the backward process, i.e., how to remove the noise from a noisy input [15]. At inference time, we can sample an image by starting from random standard Gaussian noise $x_T \sim \mathcal{N}(0, I)$ and iteratively denoising following the Markov chain, i.e., by consecutively sampling $x_{t-1}$ from $p_\theta(x_{t-1} \mid x_t)$ until $x_0$:

$$x_{t-1} = \frac{1}{\sqrt{\lambda_t}}\left(x_t - \frac{1 - \lambda_t}{\sqrt{1 - \bar{\lambda}_t}}\, \epsilon_\theta(x_t, t)\right) + \sigma_t z, \tag{2}$$

where $z \sim \mathcal{N}(0, I)$, $\bar{\lambda}_t = \prod_{s=1}^{t} \lambda_s$, $\lambda_t = 1 - \beta_t$, and $\sigma_t$ is the noise scale.

Locally Conditioned Diffusion. Various approaches [1, 33] have been proposed to generate partial image content based on local text prompts using pretrained T2I diffusion models. For $K$ local prompts $y = \{y_1, y_2, \ldots, y_K\}$ and binary non-overlapping masks $m = \{m_1, m_2, \ldots, m_K\}$, locally conditioned diffusion [33] first predicts a full-image noise $\epsilon_\theta(x_t, t, y_k)$ for each local prompt $y_k$ with classifier-free guidance [14], and then assigns it to the corresponding region masked by $m_k$:

$$\epsilon_\theta^{LCD}(x_t, t, y, m) = \sum_{k=1}^{K} m_k \odot \epsilon_\theta(x_t, t, y_k), \tag{3}$$

where $\odot$ denotes element-wise multiplication.
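As a concrete reference for Eqs. (2)-(3), here is a minimal sketch of one locally conditioned denoising step. `eps_model` stands in for a pretrained text-conditioned noise predictor, classifier-free guidance is omitted for brevity, and the masks are assumed binary, non-overlapping, and covering the canvas.

```python
import torch

def lcd_noise(eps_model, x_t, t, prompts, masks):
    """Eq. (3): compose per-prompt noise predictions with the layer masks."""
    eps = torch.zeros_like(x_t)
    for y_k, m_k in zip(prompts, masks):
        eps = eps + m_k * eps_model(x_t, t, y_k)   # full-image noise, kept in m_k
    return eps

def ddpm_step(x_t, eps, t, lambdas, lambda_bars, sigmas):
    """Eq. (2): one ancestral sampling step x_t -> x_{t-1}."""
    mean = (x_t - (1 - lambdas[t]) / torch.sqrt(1 - lambda_bars[t]) * eps) \
           / torch.sqrt(lambdas[t])
    return mean + sigmas[t] * torch.randn_like(x_t)
```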
3.2. Controllable Scene Generation

Given a list of ordered object masks and their corresponding text prompts, we would like to generate a scene where object locations can be changed along the spatial dimensions while keeping the image content consistent and of high quality. We leverage a pretrained T2I diffusion model $\theta$ that generates in the image space (or latent space) $I \in \mathbb{R}^{c \times w \times h}$, where $c$ is the number of channels and $w$ and $h$ are the width and height of the image, respectively. To achieve controllable scene generation, we introduce a layered scene representation in Section 3.2.1 for mid-level control and propose a new sampling strategy in Section 3.2.2.

3.2.1 Layered Scene Representation

We decompose a controllable scene into $K$ layers $[l_k]_{k=1}^{K}$, ordered by the depth of the objects. Each layer $l_k$ has 1) a fixed object-centric binary mask $m_k \in \{0, 1\}^{c \times w \times h}$ (e.g., a bounding box or segmentation mask) describing the geometric property of the object, 2) a two-element offset $o_k \in [0; \mu_k] \times [0; \nu_k]$ indicating its spatial location, with $\mu_k$ and $\nu_k$ defining the horizontal and vertical movement range, and 3) a feature map $f_k^{(t)} \in \mathbb{R}^{c \times w \times h}$ representing its visual appearance at diffusion step $t$. A scene layout is defined by the masks and their associated offsets. The offset $o_k$ of each layer can be sampled from the movement range $[0; \mu_k] \times [0; \nu_k]$ to form a new layout. Specially, we set the last layer $l_K$ as the background, so that $m_K = \{1\}^{c \times w \times h}$ and $o_K = [0, 0]$. Given a layout, the layered representation can be rendered to an image, which we call a view. Similar to prior works in controllable scene generation [9] and video editing [18], we use alpha-blending to composite the layers during rendering. More concretely, the view $v^{(t)}$ is computed as:

$$v^{(t)} = \sum_{k=1}^{K} \alpha_k \odot \mathrm{move}(f_k^{(t)}, o_k), \qquad \alpha_k = \mathrm{move}(m_k, o_k) \prod_{j=1}^{k-1} \left(1 - \mathrm{move}(m_j, o_j)\right). \tag{4}$$

Each element of $\alpha_k \in \{0, 1\}^{w \times h}$ indicates the visibility of that location in the $k$-th latent feature map, and the function $\mathrm{move}(\cdot, o)$ spatially shifts the values of the feature map $f$ or mask $m$ by $o$. The rendering process can be applied to the layered scene at any diffusion step, resulting in a view with a certain noise level. For initialization at diffusion step $T$, the initial feature map $f_k^{(T)}$ is independently sampled from a standard Gaussian $\mathcal{N}(0, I)$ for each layer. It can be shown that, since $\alpha$ is binary and $\sum_{k=1}^{K} \alpha_k^2 = 1$, the views rendered from the initial layered scene still follow the standard Gaussian distribution. This allows us to denoise the views directly with pretrained diffusion models. In Section 3.2.2, we discuss how to update $f_k^{(t)}$ in the sequential denoising process.

3.2.2 Generating Scenes with SceneDiffusion

We propose SceneDiffusion to optimize the feature maps of the layered scene starting from Gaussian noise. Each SceneDiffusion step 1) renders multiple views from randomly sampled layouts, 2) estimates the noise from the views, and then 3) updates the feature maps. Specifically, SceneDiffusion samples $N$ groups of offsets $[o_{1,n}, o_{2,n}, \cdots, o_{K,n}]_{n=1}^{N}$, with each offset $o_{k,n}$ drawn from the movement range $[0; \mu_k] \times [0; \nu_k]$. This leads to $N$ layout variants. A higher number of layouts helps the denoiser locate a better mode while also increasing the computational cost, as shown in Section 4.2. From the $K$ latent feature maps, we render the layouts as $N$ views $v_n \in \{v_1^{(t)}, \ldots, v_N^{(t)}\}$:

$$v_n^{(t)} = \sum_{k=1}^{K} \alpha_k \odot \mathrm{move}(f_k^{(t)}, o_{k,n}). \tag{5}$$
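A short sketch of the alpha-blended rendering of Eqs. (4)-(5) follows. `torch.roll` is used as a toy stand-in for the zero-padded `move` operator, so real layouts should not wrap around the canvas; shapes and names are illustrative.

```python
import torch

def move(x, offset):
    """Shift a [c, h, w] (or [1, h, w]) map by (dy, dx); toy stand-in that
    wraps around, whereas the real operator should zero-fill the border."""
    return torch.roll(x, shifts=offset, dims=(-2, -1))

def render_view(features, masks, offsets):
    """features: list of K [c, h, w] maps; masks: list of K binary [1, h, w]
    maps (background mask all ones); offsets: list of K (dy, dx) tuples."""
    view = torch.zeros_like(features[0])
    visible = torch.ones_like(masks[0])        # running product term of Eq. (4)
    for f_k, m_k, o_k in zip(features, masks, offsets):
        alpha_k = move(m_k, o_k) * visible     # visible part of layer k
        view = view + alpha_k * move(f_k, o_k)
        visible = visible * (1 - move(m_k, o_k))  # occlude deeper layers
    return view
```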
Then, we stack all views at each SceneDiffusion step and predict the noise $\{\hat{\epsilon}_n^{(t)}\}_{n=1}^{N}$ using the locally conditioned diffusion [33] described in Equation 3:

$$\hat{\epsilon}_n^{(t)} = \epsilon_\theta^{LCD}(v_n^{(t)}, t, m, y), \quad \forall n \in \{1, 2, \cdots, N\}, \tag{6}$$

where $m$ are the object masks and $y$ are the local text prompts for each layer. Since the layout denoising can be run in parallel, computing $\{\hat{\epsilon}_n^{(t)}\}_{n=1}^{N}$ brings little time overhead, at the cost of additional memory consumption proportional to $N$. We then update the views $v_n^{(t)}$ from the estimated noise $\hat{\epsilon}_n^{(t)}$ using Equation 2 to get $\hat{v}_n^{(t-1)}$. Since each view corresponds to a different layout and is denoised independently, conflicts can arise in overlapping mask regions. Therefore, we optimize each feature map $f_k^{(t-1)}$ so that the views rendered via Equation 5 are close to the denoised views:

$$f^{(t-1)} = \arg\min_{f^{(t-1)}} \sum_{n=1}^{N} \left\| \hat{v}_n^{(t-1)} - v_n^{(t-1)} \right\|_2^2. \tag{7}$$

This least-squares problem has the following closed-form solution:

$$f_k^{(t-1)} = \frac{\sum_{n=1}^{N} \mathrm{move}\left(\alpha_k \odot \hat{v}_n^{(t-1)}, -o_{k,n}\right)}{\sum_{n=1}^{N} \mathrm{move}(\alpha_k, -o_{k,n})}, \quad \forall k \in \{1, \cdots, K\}, \tag{8}$$

where $\mathrm{move}(x, -o)$ translates the values in $x$ in the reverse direction of $o$. The derivation of this solution is similar to the discussion in Bar-Tal et al. [1]. The solution essentially sets $f_k^{(t-1)}$ to a weighted average of the cropped denoised views.

3.2.3 Neural Rendering with Image Diffusion

We switch to vanilla image diffusion for $\tau$ steps after running SceneDiffusion for $T - \tau$ steps. Since layer masks $m$ such as bounding boxes only serve as a rough mid-level representation rather than an accurate geometry, this image diffusion stage can be viewed as a neural renderer that maps the mid-level control to the image space [9, 30, 49]. The value of $\tau$ trades off image quality against faithfulness to the layer masks. A value of $\tau$ between 25% and 50% of the total diffusion steps strikes the best balance, which usually costs less than a second with a popular 50-step DDIM scheduler [43]. The global prompt used for the image diffusion stage can be set separately. In this work, we mainly set the global prompt to the concatenation of the local prompts in depth order, $y_{global} = \langle y_1, y_2, \ldots, y_K \rangle$, and find this simple strategy sufficient in most cases.

3.2.4 Layer Appearance Editing

The appearance of each layer can be edited individually by modifying its local prompt. Objects can be restyled or replaced by changing the local prompt to a new one and then performing SceneDiffusion with the same feature map initialization.
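And a sketch of the closed-form update of Eq. (8); passing non-uniform per-view weights also covers the anchored variant of Eq. (10) in the next subsection. It reuses the same toy `move` helper and assumed data layout as the rendering sketch above.

```python
import torch

def move(x, offset):
    return torch.roll(x, shifts=offset, dims=(-2, -1))  # toy shift, wraps around

def update_features(denoised_views, alphas, offsets, weights=None):
    """denoised_views: N tensors [c, h, w]; alphas[n][k]: [1, h, w] visibility
    of layer k in view n; offsets[n][k]: (dy, dx) used for layer k in view n."""
    n_views, k_layers = len(denoised_views), len(alphas[0])
    weights = weights if weights is not None else [1.0] * n_views
    new_feats = []
    for k in range(k_layers):
        num = torch.zeros_like(denoised_views[0])
        den = torch.zeros_like(alphas[0][k])
        for n in range(n_views):
            dy, dx = offsets[n][k]
            back = (-dy, -dx)                   # undo the layout shift (Eq. 8)
            num += weights[n] * move(alphas[n][k] * denoised_views[n], back)
            den += weights[n] * move(alphas[n][k], back)
        new_feats.append(num / den.clamp_min(1e-8))  # weighted average per layer
    return new_feats
```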
3.3. Application to Image Editing

SceneDiffusion can be conditioned on a reference image by using its sampling trajectory as an anchor, allowing us to change the layout of an existing image. Concretely, when a reference image is given along with an existing layout, we set the reference image to be the optimization target at the final diffusion step, i.e., an anchor view denoted as $\hat{v}_a^{(0)}$. Then, we add Gaussian noise to this view at different diffusion noise levels, creating a trajectory of anchor views across the denoising steps:

$$\hat{v}_a^{(t)} = \sqrt{1 - \beta_t}\, \hat{v}_a^{(0)} + \beta_t \epsilon, \quad \forall t \in [1, \cdots, T], \tag{9}$$

where $\epsilon \sim \mathcal{N}(0, 1)$. At each diffusion step, we use the corresponding anchor view $\hat{v}_a^{(t)}$ to further constrain $f^{(t-1)}$, which adds an extra weighted term to Equation 7:

$$f^{(t-1)} = \arg\min_{f^{(t-1)}} \sum_n w_n \left\| \hat{v}_n^{(t-1)} - v_n^{(t-1)} \right\|_2^2, \qquad w_n = \begin{cases} w & \text{if } n = a, \\ 1 & \text{otherwise,} \end{cases} \tag{10}$$

where $n \in \{1, \cdots, N\} \cup \{a\}$ and $w$ controls the importance of $\hat{v}_a^{(t)}$. A large enough $w$ produces good faithfulness to the reference image; we set $w = 10^4$ in this work. The closed-form solution of this equation is similar to Equation 8 and can be found in the supplementary material.

4. Experiments

4.1. Experimental Setup

We evaluate our method both qualitatively and quantitatively. For the quantitative study, a thousand-scale dataset is required to effectively measure metrics like FID. However, populating semantically meaningful spatial editing pairs for multi-object scenes is challenging, particularly when inter-object occlusions must be considered. Therefore, we restrict the quantitative experiments to single-object scenes; please refer to the qualitative results for multi-object scenes.

Dataset. We curate a dataset of high-quality, subject-centric images associated with image captions and local descriptions. Object masks are annotated automatically using GroundedSAM [35]. We first generate 20,000 images from 1,000 image captions and then apply a rule-based filter to remove low-quality images, resulting in 5,092 images in total. Object masks and local descriptions are then automatically annotated.

Metrics. Our main metrics for controllable scene generation are Mask IoU, Consistency, Visual Consistency, LPIPS, and SSIM. Mask IoU measures the alignment between the target layout and the generated image. The other metrics compare multiple generated views of the same scene and evaluate their similarity: Consistency for mask consistency, Visual Consistency for foreground appearance consistency, LPIPS for perceptual changes, and SSIM for structural changes. Moreover, in the image editing experiment, we report FID to measure the similarity of the edited images to the original ones as a quantification of image quality.

Implementation. By default we set $N = 8$ in our experiments. For quantitative studies, all experiments are averaged over 5 random seeds. Please refer to our supplemental document for more information on the dataset construction, metric selection, standard deviations of the experiments, and implementation details.

4.2. Controllable Scene Generation

Setting. We randomly place an object mask at different positions to form random target layouts. Images should be generated conditioned on the target layouts and local prompts, and the content is expected to be consistent across layouts. The object masks are from the aforementioned curated dataset. To reduce the chance that objects move out of the canvas, we restrict the mask position to a square centered at the original position with a side length of 40% of the image width. A visual example can be found in Figure 9.

[Figure 3. Sequential manipulations. Our generated scenes can be manipulated by operating on layers sequentially; examples: "bed, wooden cabinet, window, bedroom" (move all; add bed & window; shrink and clone bed) and "yellow yarn ball, blue yarn ball, British shorthair, cat house" (move all; add blue yarn ball; shrink and clone blue yarn ball).]
[Figure 4. Object moving. Our approach can be employed to move objects in a given image (panels: a) Original, b) Move up, c) Move down, d) Move left, e) Move right, f) Shrink, g) Clone; prompts: "a photo of a fluffy cat sitting on a museum bench looking at an oil painting of cheese", "a photo of a raccoon in a barrel going down a waterfall", "distant shot of the Tokyo tower with a massive sun in the sky"). Edited objects are shown in bold in the prompts. Examples are borrowed from Epstein et al. [10] and no access to the initial latent noise is assumed. All layouts of each example are generated from the same scene. As a result, our approach keeps the overall content consistent across different edits, which most prior works fail to achieve. A full comparison with prior works can be found in the appendix.]

Baselines. We compare our approach to MultiDiffusion [1], a training-free approach that generates images conditioned on masks and local descriptions. We use a 20% solid-color bootstrapping strategy following their protocol. Foreground and background noise are fixed within the same scene for better consistency.

Results. We present quantitative results in Table 1, which show that SceneDiffusion outperforms MultiDiffusion on all metrics. For the qualitative study, we show the results of sequentially manipulating our generated scenes in Figure 3.

Table 1. Quantitative comparison for controllable scene generation. †: without the solid-color bootstrapping strategy.

Method | M. IoU↑ | Cons.↑ | V. Cons.↓ | LPIPS↓ | SSIM↑
MultiDiff. [1]† | 0.263 | 0.257 | – | 0.521 | 0.450
MultiDiff. [1] | 0.466 | 0.436 | 0.236 | 0.519 | 0.471
Ours† | 0.310 | 0.609 | – | 0.198 | 0.761
Ours | 0.522 | 0.721 | 0.112 | 0.215 | 0.762

4.3. Object Moving for Image Editing

Setting. Given a reference image, an object mask, and a random target position, the goal is to generate an image where the object has moved to the target position while keeping the rest of the content similar. The aforementioned movement range is used to prevent moving the object out of the canvas.

[Figure 5. Restyling objects. Adding a style description to the layer prompt restyles the object when the initial noise is fixed (e.g., window → round window / modern window, wooden cabinet → sliding door cabinet / glass cabinet, bed → spindle bed / blue bed, bedroom → Bohemian bedroom / romantic bedroom). The circular arrow marks the restyled object.]

[Figure 6. Replacing objects. Objects can be changed into different objects by modifying their layer prompts without affecting the other objects in the scene (example layer prompts include armchair, sofa, bed, table, wooden cabinet, balcony, wooden wardrobe, plants, pendant lights, mirror, bookshelf, window, bedroom). The circular arrow marks the replaced object.]

Table 2. Quantitative comparison for object moving. †: specialized inpainting model trained with masking.

Method | FID↓ | M. IoU↑ | V. Cons.↓ | LPIPS↓ | SSIM↑
RePaint [23] | 10.267 | 0.620 | 0.166 | 0.278 | 0.671
Inpainting† | 6.383 | 0.747 | 0.112 | 0.264 | 0.680
Ours | 5.289 | 0.817 | 0.075 | 0.263 | 0.709

Baselines. We compare with inpainting-based approaches. We first crop the object from the reference image, paste it at the target location, and then inpaint the blank areas. We dilate the edge of the object by 30 pixels to better blend the object with the background. We compare our approach with two inpainting models: a standard T2I
diffusion model using the RePaint technique [23], and a specialized inpainting model trained with masking. We set all local layer prompts in our approach to the global image caption for a fair comparison.

Results. We report quantitative results in Table 2. Our approach outperforms both inpainting-based baselines by a clear margin on all metrics. Qualitative results of object moving are shown in Figure 4.

4.4. Layer Appearance Editing

We show the results of object restyling in Figure 5 and object replacement in Figure 6. We observe that changes are mostly isolated to the selected layer, while the other layers adapt slightly to make the scene more natural. Furthermore, layer appearance can be transferred across scenes by directly copying a layer from one scene and pasting it into another, as shown in Figure 7.

[Figure 7. Mixing scenes. One may mix scenes by copying a layer from one scene and pasting it into another scene (panels: a) Scene A, b) Scene B, c) Mixed; e.g., "take bed", "take macaron").]

[Figure 8. Ablation on $\tau$. Panels: a) Original, b) Edited ($\tau = 25$), c) Edited ($\tau = 15$), for "a burger and an ice cream cone floating in the ocean"; we swap the locations of the two objects. Stopping SceneDiffusion at a later step improves consistency and prevents hallucination.]

[Figure 9. Qualitative evaluation of controllable scene generation ("a bag, a sunny day after the snow"; panels: a) Mask, b) MultiDiffusion, c) Ours). MultiDiffusion [1] is able to generate a backpack in accordance with the target mask, but both the background and the object change across layouts. Our method produces coherent and consistent images with minimal difference in visual appearance.]

Table 3. Component analysis.

Method | CLIP-a↑ | VC↓ | M. IoU↑ | Cons.↑ | LPIPS↓ | SSIM↑
Ours (N=8, τ=13) | 6.12 | 0.11 | 0.51 | 0.72 | 0.22 | 0.74
w/o multiple layouts | 6.05 | 0.23 | 0.46 | 0.43 | 0.51 | 0.47
w/o random sampling | 5.98 | 0.12 | 0.50 | 0.68 | 0.22 | 0.75
w/o image diffusion | 5.96 | 0.09 | 0.51 | 0.72 | 0.21 | 0.76

Table 4. Analysis on N and τ.

N | τ | Optim.↓ | Infer.↓ | CLIP-a↑ | M. IoU↑ | Cons.↑ | LPIPS↓ | SSIM↑
8 | 13 | 17.3s | 0.82s | 6.12 | 0.514 | 0.721 | 0.224 | 0.749
4 | 13 | 9.65s | 0.82s | 5.99 | 0.491 | 0.689 | 0.225 | 0.747
2 | 13 | 5.73s | 0.82s | 5.97 | 0.481 | 0.672 | 0.229 | 0.735
8 | 25 | 12.0s | 1.53s | 6.13 | 0.502 | 0.643 | 0.276 | 0.685
8 | 0 | 22.9s | 0.0s | 5.96 | 0.515 | 0.723 | 0.211 | 0.767

4.5. Ablation study

In Table 3, we ablate all components. We additionally measure CLIP-aesthetic (CLIP-a) following [1] to quantify image quality. Without jointly denoising multiple layouts, all metrics drop drastically. With deterministic sampling of layouts, the image quality degrades. Without the image diffusion stage, the consistency metrics improve slightly but the image quality deteriorates significantly. In Table 4, we analyze the effect of the number of views and the number of image diffusion steps. We observe that more views and more SceneDiffusion steps lead to a better disentanglement between the object and the background, as indicated by higher Mask IoU and Consistency. A qualitative comparison can be found in Figure 8. We also present the accuracy-speed trade-off when limited to a single 32GB GPU: a larger N increases the optimization time, while a larger τ increases the inference time. For all ablation experiments, we use a randomly selected 10% subset for easier implementation.

5. Conclusion

We proposed SceneDiffusion, which achieves controllable scene generation using image diffusion models. SceneDiffusion optimizes a layered scene representation during the diffusion sampling process. Thanks to the layered representation, spatial and appearance information are disentangled, which allows extensive spatial editing operations.
Leveraging the sampling trajectory of a reference image as an anchor, SceneDiffusion can move objects in in-the-wild images. Compared to baselines, our approach achieves better generation quality, cross-layout consistency, and running speed.

Limitations. The object's appearance may not fit tightly to the mask in the final rendered image. Besides, our approach requires a large amount of memory to simultaneously denoise multiple layouts, restricting applications in resource-limited user cases.

Acknowledgments. This study is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-08-018), the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP202210012), NTU NAP, and under the RIE2020 Industry Alignment Fund, Industry Collaboration Projects (IAF-ICP) Funding Initiative." + }, + { + "url": "http://arxiv.org/abs/2404.05152v1", + "title": "A Fast Analytical Model for Predicting Battery Performance Under Mixed Kinetic Control", + "abstract": "The prediction of battery rate performance traditionally relies on\ncomputation-intensive numerical simulations. While simplified analytical models\nhave been developed to accelerate the calculation, they usually assume battery\nperformance to be controlled by a single rate-limiting process, such as solid\ndiffusion or electrolyte transport. Here, we propose an improved analytical\nmodel that could be applied to battery discharging under mixed control of mass\ntransport in both solid and electrolyte phases. Compared to previous\nsingle-particle models extended to incorporate the electrolyte kinetics, our\nmodel is able to predict the effect of salt depletion on diminishing the\ndischarge capacity, a phenomenon that becomes important in thick electrodes\nand/or at high rates. The model demonstrates good agreement with the full-order\nsimulation over a wide range of cell parameters and offers a speedup of over\n600 times at the same time. Furthermore, it could be combined with\ngradient-based optimization algorithms to very efficiently search for the\noptimal battery cell configurations while numerical simulation fails at the\ntask due to its inability to accurately evaluate the derivatives of the\nobjective function. The high efficiency and the analytical nature of the model\nrender it a powerful tool for battery cell design and optimization.", + "authors": "Hongxuan Wang, Fan Wang, Ming Tang", + "published": "2024-04-08", + "updated": "2024-04-08", + "primary_cat": "cond-mat.mtrl-sci", + "cats": [ + "cond-mat.mtrl-sci", + "physics.chem-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The popularity of rechargeable lithium-ion batteries (LIB) has experienced explosive growth since their first commercialization in the 1990s. Compared to other energy storage systems, LIBs exhibit relatively high energy density and decent cycle life, enabling a wide range of applications from consumer electronics to electric mobility. With the demand for LIBs increasing rapidly because of the wider adoption of electric vehicles and grid energy storage systems, the development of next-generation LIBs with significantly improved energy density, rate performance, safety, cycling stability and fast-charging capability is highly sought-after. The success of this effort requires continued optimization of battery structure across different length scales, from the particle to the cell and pack levels.
For instance, the replacement of polycrystalline LiNi$_{1-x-y}$Mn$_x$Co$_y$O$_2$ with single-crystalline particles could lead to significantly better cycle life and higher tap density,1-3 and the design and fabrication of thick battery electrodes, some with three-dimensional architecture4-8 to facilitate ionic transport, are under intense study because they could increase the energy density of batteries by increasing the amount of active materials relative to the inactive components.9-11 The large design parameter space inherent in LIBs makes battery modeling an essential tool for their design and optimization, to unravel the complex dynamics and reduce the testing turnaround time. While battery simulations that explicitly resolve the electrode microstructure have gained traction in recent years and could provide valuable insights,12-14 their high computation cost still prevents their extensive use. Instead, a standard modeling approach is numerical simulation based on the porous electrode theory pioneered by Newman and coworkers.15-19 These simulations are commonly referred to as pseudo-two-dimensional (P2D) simulations, as they couple one-dimensional electrolyte transport at the macroscopic electrode level and one-dimensional lithium diffusion within the solid phase at the microscopic particle level. P2D simulations provide a comprehensive description of the reaction-transport kinetics in battery cells, but the highly coupled governing equations are computationally expensive to solve. While various numerical techniques have been employed to accelerate P2D simulations, such as proper orthogonal decomposition20 and orthogonal collocation,21,22 it remains challenging to use this approach to explore the large parameter space or perform large-scale optimization. Another common approach involves building equivalent circuit models (ECMs) for LIBs.23-25 This approach requires empirically fitting the parameters in ECMs to experimental data to reflect the inner workings of the battery cell. Nevertheless, the lack of physical insight makes such models prone to errors when extrapolated to conditions beyond the scope of the fitting data. Because the model parameters do not translate easily to materials properties, it is also difficult to use ECMs to optimize the battery structure. As an intermediate between P2D simulations and ECMs, simplified physics-based models are another candidate for battery modeling. These models are more efficient to solve than full-order numerical simulations and also provide more direct insight into the structure-property relation of batteries than (semi-)empirical models. In simplified physics-based models, the system complexity is typically reduced by assuming a predominant rate-limiting process. For example, the widely used single particle model (SPM) assumes that the electrolyte transport is facile and all of the active material particles are uniformly (de)lithiated. As such, the (dis)charging behavior of the electrode is approximately simulated by solving the solid diffusion equation for a single particle coupled with the charge transfer process at the particle/electrolyte interface.26-28 On the other end, Wang and Tang recently proposed an analytical model to predict the battery rate performance when it is controlled by the electrolyte transport.29 In this kinetic regime, the reduction of usable capacity with increasing discharge rates is caused by lithium salt depletion in the electrolyte.
As a result, active material particles in the salt-depleted region cannot be fully lithiated, leading to incomplete discharge. The Wang-Tang model assumes fast solid diffusion and that the active material particles are either completely unlithiated or completely lithiated, depending on whether they reside outside the salt depletion zone (DZ) or not. Reduction of the model complexity is realized through the assumptions of steady-state electrolyte transport and characteristic reaction distributions. The discharge capacity is estimated from the width of the electrolyte penetration zone $L_{PZ}$, in which the salt concentration is non-zero, as $L_{PZ}/L_{cat}$, where $L_{cat}$ is the cathode thickness. While the aforementioned models focusing on a single rate-limiting step have proven effective in their respective applicable regimes, the (dis)charge behavior of LIBs is often determined jointly by the solid diffusion and electrolyte transport kinetics, which renders such models insufficient. The accuracy of the SPM becomes unsatisfactory when the electrodes are thick and/or the discharge rate is large, where electrolyte diffusion becomes relatively sluggish. The Wang-Tang model is prone to larger errors when the active material particle size is large and the solid diffusion process can no longer be neglected. With the relentless pursuit of better battery performance and economics, these scenarios are becoming increasingly relevant, as the industry is pushing for thicker electrodes to enhance the energy density, while manufacturing efficiency also favors the use of larger particles to increase the tap density. A number of models have been developed in recent years to extend the SPM to consider electrolyte dynamics,30-34 usually by assuming certain simplified forms of the reaction distribution within the electrode. Moura et al. derived an extended SPM that incorporates the electrolyte phase potential drop into the cell voltage based on the assumption of a uniform Li intercalation flux.32 In another extension, Rahimian et al. approximate the electrolyte concentration and potential with third-order polynomials and fit the polynomial coefficients to full-order P2D simulations.30 Luo et al. extended the SPM in a different way, assuming that the spatial distribution of the open circuit potential (OCP) of active material particles can be described by an exponential function.31 The parameters in the function are calculated by fitting to the OCPs of three representative particles at different depths of the electrode. While these extended SPMs demonstrate improved predictions, they cannot be applied to situations where salt depletion develops in the electrolyte, which becomes significant at large electrode thicknesses and/or (dis)charging rates. The need for parameter fitting also makes some of the models less general and solvable only by numerical methods. In this work, we propose an improved physics-based analytical model for predicting the battery discharge process under the mixed kinetic control of salt depletion and solid diffusion. The model is suitable for active materials that display solid-solution behavior upon (dis)charging, such as the layered transition metal oxides Li(Ni$_{1-x-y}$Mn$_x$Co$_y$)O$_2$ (NMC) or Li(Ni$_{1-x-y}$Co$_x$Al$_y$)O$_2$ (NCA) that are widely used in commercial LIBs.
Because the OCP of these materials is sensitive to the lithium stoichiometry or state of charge (SoC), the lithium intercalation flux tends to be more homogeneously distributed across the electrode than in phase-changing electrode materials such as LiFePO$_4$ (LFP) and Li$_4$Ti$_5$O$_{12}$ (LTO), which exhibit a propagating reaction front. Therefore, they are idealized as uniform-reaction-type (UR) electrodes in the Wang-Tang model. The current model consists of an electrolyte transport module and a solid phase module to capture the two coupled processes. The electrolyte module extends the Wang-Tang model to handle concentration-dependent electrolyte properties and spatially varying electrode properties (e.g., porosity). It is used to predict the width of the electrolyte penetration zone (PZ) and the spatial distributions of the salt concentration $c_l$ and electrolyte potential $\Phi_l$ within the PZ. The new solid phase module calculates the OCP distribution of the active material in the PZ, and estimates the average lithium concentration $\bar{c}_s$ in electrode particles via the solution to the solid diffusion equation. The cell-level depth of discharge (DoD) is then calculated as a function of the cell voltage from the integration of $\bar{c}_s$ in space. Compared to the original UR model,29 the new model not only addresses the effect of solid diffusion on the rate performance but also predicts the discharge voltage curve in the presence of salt depletion. We name it the URCs model, recognizing its applicability to UR-type electrodes and the additional calculation of the lithium concentration distribution in the solid phase.

The URCs model is compared with P2D simulations over a wide range of cell parameters to examine its performance. The discharge capacity and energy output predicted by the model exhibit very good agreement with the simulated results, with an average error of less than 10%, excluding the low-DoD regime in cells containing the graphite anode, which is known for its non-UR behavior. At the same time, it offers a speedup of over 600-fold versus state-of-the-art P2D solvers. More significantly, the URCs model, unlike numerical simulation, is able to work in synergy with gradient-based optimization algorithms to efficiently locate optimal battery cell parameters because it permits the accurate evaluation of the objective function gradients. A hybrid global optimization scheme, which employs the URCs model for a rapid scan of the parameter space and P2D simulation for a refined local search, is demonstrated with both speed and accuracy. These advantages render the URCs model a useful tool for battery structure design as well as onboard applications.", + "main_content": "2.1 Uniform-Reaction (UR) and Moving-Zone-Reaction (MZR) Behavior

In this section, we provide a concise review of the two distinct types of reaction distribution within electrodes that underpin the simplifying assumptions in the Wang-Tang model:29 the uniform-reaction (UR) and the moving-zone-reaction (MZR) behavior. UR and MZR behaviors manifest in battery chemistries with OCPs that are sensitive and insensitive to lithium stoichiometry, respectively. Cathode materials such as solid-solution-like transition metal oxides, including NMC and NCA, commonly exhibit UR behavior, while materials that undergo pronounced first-order phase transitions during (de)lithiation, exemplified by LFP, typically demonstrate MZR behavior.
As a basis for deriving the UR and MZR models, Wang and Tang observed from P2D simulations that the salt concentration $c_l$ in the electrolyte reaches a pseudo-steady state during the discharge process. For UR-type cathodes, this pseudo-steady state becomes apparent shortly after discharge begins. Figure 1a shows a typical salt concentration profile along the electrode thickness direction for UR-type half cells in the middle of discharge. The salt concentration $c_l$ gradually declines from the cathode-separator interface towards the cathode current collector, reaching near-zero levels midway. This distribution delineates two distinct regions within the cathode: a salt penetration zone (PZ), where the salt concentration is non-zero, and a salt depletion zone (DZ), characterized by a diminishing lithium supply for electrode particle lithiation. Notably, the reaction flux remains relatively uniform in the PZ for a significant duration of the discharge process, but minimal intercalation occurs in the DZ. As depicted in Figure 1c, the reaction front for an idealized UR-type battery uniformly spans the entire PZ during discharge from $t = 0$ to $t_{end}$, leaving particles in the DZ unreacted. The reaction concludes when all particles within the PZ are completely lithiated. The size of the penetration zone, denoted as $L_{PZ}$ and highlighted in the schematics, therefore indicates the degree of electrode utilization during discharge.

Figure 1: Discharge characteristics of battery electrodes exhibiting uniform-reaction (UR) and moving-zone-reaction (MZR) behavior. a Pseudo-steady-state salt distribution within an idealized UR-type half cell in the middle of discharge. $X$ denotes the distance from the cathode current collector. The cathode is separated into a salt penetration zone (PZ) and a salt depletion zone (DZ) based on salt availability in the electrolyte. $L_{PZ}$ marks the size of the PZ and serves as a proxy for the (de)lithiation state. b Pseudo-steady-state salt distribution in an idealized MZR-type half cell toward the end of discharge. c, d Idealized reaction distribution in electrodes exhibiting UR and MZR behavior, respectively. The discharge process starts at time $t = 0$ and concludes at $t_{end}$.

In an MZR-type half cell, the salt concentration profile evolves gradually during discharge. Initially, reaction commences at the separator end and progresses toward the current collector. As discharge proceeds, salt from the unreacted portion of the electrode is consumed to fuel intercalation until depletion occurs in the electrolyte, marking the emergence of the pseudo-steady-state profile. Figure 1b illustrates a typical salt concentration distribution towards the end of discharge for MZR-type half cells. In contrast to its UR counterpart, which maintains a relatively stable salt concentration distribution for much of the discharge process, the pseudo-steady-state salt concentration characteristic of MZR behavior becomes prominent only in the late stage of discharge. The electrode reaction distribution is depicted in Figure 1d, where a sharp reaction front divides the electrode into a fully lithiated region (PZ) and a fully delithiated region (DZ). At any given time $t$, only particles located at the reaction front undergo lithium intercalation. After these particles are fully lithiated, the intercalation flux peak shifts toward the current collector, initiating reaction in other particles.
When the reaction front propagates, the salt concentration outside the lithiated region is reduced and eventually reaches zero, which marks the occurrence of complete salt depletion in the DZ and the termination of discharge. As before, electrode particles in the DZ remain fully charged and those in the PZ are fully discharged.

In full cells, the reaction distribution in the anode can also be categorized as UR and/or MZR type. The graphite (Gr) anode, for example, exhibits a mixed reaction type owing to the shape of its OCP. At the onset of delithiation, graphite undergoes the LiC$_6$ $\rightarrow$ LiC$_{12}$ staging transition at an OCP of approximately 0.05 V, demonstrating MZR behavior. After the DoD rises above 30%, graphite exhibits solid solution behavior, signifying a transition to UR-type behavior. Since there is no depletion of salt in the anode during discharge, graphite particles tend to react uniformly in the entire anode when the final DoD is not too small. Consequently, we treat the graphite anode as a UR-type electrode in this study. It is important to note, however, that this simplification may introduce foreseeable errors when graphite operates in the MZR regime.

2.2 The URCs Model

2.2.1 Electrolyte Transport Module

The electrolyte transport module in the URCs model solves for the steady-state salt concentration profile in a similar way to the Wang-Tang model, but considers the more general situation where the electrolyte properties are concentration dependent and the porous electrode may have a heterogeneous structure with spatially dependent properties, e.g., graded porosity, tortuosity and particle size. We begin with the mass conservation equation and the current continuity equation from the porous electrode theory:

$$\epsilon_p(x)\,\frac{\partial c_l}{\partial t} = \nabla\cdot\left[\frac{D_{amb}(c_l)\,\epsilon_p(x)}{\tau_p(x)}\,\frac{\partial c_l}{\partial x} + \frac{(1-t_+(c_l))\,\vec{i}(x)}{F}\right] \tag{1}$$

$$\nabla\cdot\vec{i}(x) = -F a_p(x)\, j_{in}(x) \tag{2}$$

where $c_l$ is the salt concentration in the electrolyte and $\vec{i}(x)$ is the ionic current density. The above equations do not apply to the electrolyte DZ, where $c_l$ and $\vec{i}$ are assumed to be zero throughout the discharge process. Electrode porosity $\epsilon_p$, tortuosity $\tau_p$, and volumetric surface area $a_p = 3(1-\epsilon_p)/r_p$ are characteristic of the specific regions of the battery cell, $p \in \{cat, an, sep\}$, representing the cathode, the anode, or the separator. Under the assumption of steady-state electrolyte transport, Equation 1 reduces to:

$$\nabla\cdot\left[\frac{D_{amb}(c_l)\,\epsilon_p(x)}{\tau_p(x)}\,\frac{\partial c_l}{\partial x}\right] = -\nabla\cdot\left[\frac{(1-t_+(c_l))\,\vec{i}(x)}{F}\right] \tag{3}$$

Let $x = 0$ be at the interface between the cathode and the current collector. At the PZ/DZ boundary $x = L_{cat} - L_{PZ}$, where $L_{cat}$ and $L_{PZ}$ are the cathode and penetration zone thickness, respectively, the salt concentration gradient and the ionic current are both zero.
We can therefore integrate Equation 3 to acquire:

$$\frac{D_{amb}(c_l)\,\epsilon_p(x)}{\tau_p(x)}\,\frac{\partial c_l}{\partial x} = -\frac{(1-t_+(c_l))\,\vec{i}(x)}{F} \tag{4}$$

Similarly, an explicit expression for the ionic current density could be obtained from the integration of Equation 2:

$$\vec{i}(x) = -\int_{L_{cat}-L_{PZ}}^{x} F a_p(z)\, j_{in}(z)\, dz \tag{5}$$

Combining Equations 4 and 5, we couple the salt concentration with the local reaction flux:

$$\frac{D_{amb}(c_l)\,\epsilon_p(x)}{\tau_p(x)}\,\frac{\partial c_l}{\partial x} = (1-t_+(c_l))\int_{L_{cat}-L_{PZ}}^{x} a_p(z)\, j_{in}(z)\, dz \tag{6}$$

The above equation could be integrated again as:

$$\int_0^{c_l}\frac{D_{amb}(c_l')}{1-t_+(c_l')}\, dc_l' = \int_{L_{cat}-L_{PZ}}^{x}\frac{\tau_p(y)}{\epsilon_p(y)}\left[\int_{L_{cat}-L_{PZ}}^{y} a_p(z)\, j_{in}(z)\, dz\right] dy \tag{7}$$

If we define the left-hand side of Eq. 7 as a new function $G$:

$$G(c_l) = \int_0^{c_l}\frac{D_{amb}(c_l')}{1-t_+(c_l')}\, dc_l' \tag{8}$$

which is solely determined by the electrolyte properties, the salt concentration profile could then be expressed in terms of its inverse function:

$$c_l(x) = G^{-1}\!\left(\int_{L_{cat}-L_{PZ}}^{x}\frac{\tau_p(y)}{\epsilon_p(y)}\left[\int_{L_{cat}-L_{PZ}}^{y} a_p(z)\, j_{in}(z)\, dz\right] dy\right) \tag{9}$$

Under the UR assumption, the reaction flux $a_p(x) j_{in}(x)$ is proportional to the local volumetric fraction of the active material $\nu_p(x)$. Its general expression and its simplification for constant electrode porosity are given in Table 1.

Table 1: Expression of the reaction flux $a_p j_{in}$

| Location in cell | General expression | Constant electrode porosity |
| --- | --- | --- |
| Cathode, $x\in[L_{cat}-L_{PZ},\,L_{cat}]$ | $\dfrac{I\,\nu_{cat}(x)}{F\int_{L_{cat}-L_{PZ}}^{L_{cat}}\nu_{cat}(z)\,dz}$ | $\dfrac{I}{F L_{PZ}}$ |
| Anode, $x\in[L_{cat}+L_{sep},\,L_{cat}+L_{sep}+L_{an}]$ | $-\dfrac{I\,\nu_{an}(x)}{F\int_{L_{cat}+L_{sep}}^{L_{cat}+L_{sep}+L_{an}}\nu_{an}(z)\,dz}$ | $-\dfrac{I}{F L_{an}}$ |
| Separator, $x\in[L_{cat},\,L_{cat}+L_{sep}]$ | $0$ | $0$ |

The salt concentration profile $c_l(x)$ given by Eq. 9 depends on the penetration zone width $L_{PZ}$, which remains unknown up to this point. We close the loop by determining $L_{PZ}$ from the conservation of salt in the electrolyte:

$$\int_{L_{cat}-L_{PZ}}^{L_{cat}+L_{sep}} \epsilon_p(x)\, c_l(x)\, dx = c_{l0}\int_{L_{cat}-L_{PZ}}^{L_{cat}+L_{sep}} \epsilon_p(x)\, dx \quad \text{(Half Cell)} \tag{10}$$

$$\int_{L_{cat}-L_{PZ}}^{L_{cat}+L_{sep}+L_{an}} \epsilon_p(x)\, c_l(x)\, dx = c_{l0}\int_{L_{cat}-L_{PZ}}^{L_{cat}+L_{sep}+L_{an}} \epsilon_p(x)\, dx \quad \text{(Full Cell)} \tag{11}$$

where $c_{l0}$ is the average salt concentration.

With $c_l(x)$ known, the steady-state electrolyte potential profile $\Phi_l(x)$ could be calculated from the ionic current expression:

$$\vec{i}(x) = -\frac{\epsilon_p(x)}{\tau_p(x)}\left[\kappa(c_l)\,\frac{\partial \Phi_l}{\partial x} - \frac{2RT(1-t_+(c_l))\,\kappa(c_l)}{F c_l}\left(1 + \frac{\partial \ln f_\pm(c_l)}{\partial \ln c_l}\right)\frac{\partial c_l}{\partial x}\right] \tag{12}$$

where $\kappa$ is the ionic conductivity and $1 + \frac{\partial \ln f_\pm(c_l)}{\partial \ln c_l}$ is the thermodynamic factor of the electrolyte. Using Equation 4 to eliminate $dc_l/dx$ in Equation 12, we obtain:

$$\frac{\partial \Phi_l}{\partial x} = -\frac{\tau_p(x)\,\omega(c_l)}{\epsilon_p(x)\,\kappa(c_l)}\,\vec{i}(x) \tag{13}$$

with

$$\omega(c_l) \equiv 1 + \frac{2RT\,\kappa(c_l)\left(1 + \frac{\partial \ln f_\pm(c_l)}{\partial \ln c_l}\right)(1-t_+(c_l))^2}{F^2 c_l D_{amb}(c_l)} \tag{14}$$

$\Phi_l$ could be solved for by replacing $\vec{i}$ with Eq. 5 in Eq. 13 and then integrating the equation:

$$\Phi_l(x) = \int_{L_{cat}+L_{sep}}^{x}\frac{\tau_p(y)\,\omega(c_l(y))}{\epsilon_p(y)\,\kappa(c_l(y))}\left[\int_{L_{cat}-L_{PZ}}^{y} F a_p(z)\, j_{in}(z)\, dz\right] dy \tag{15}$$

Note that $\Phi_l$ at the separator/anode interface $x = L_{cat} + L_{sep}$ is set to zero here.
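To make the electrolyte module concrete, the following is a minimal numerical sketch (not the authors' released MATLAB code) of Eqs. 9 and 10 for a half cell with constant, concentration-independent properties, where $G$ becomes linear and inverts trivially. All parameter values are illustrative assumptions, and the separator is assumed to share the cathode's porosity and tortuosity.

```python
# Minimal sketch of the electrolyte module (Eqs. 9-10) under simplifying
# assumptions: constant D_amb, t_plus, porosity and tortuosity; half cell.
# Parameter values are illustrative, not taken from the paper's Table S1.
import numpy as np
from scipy.optimize import brentq

F = 96485.0                   # Faraday constant [C/mol]
D_amb, t_plus = 3e-10, 0.38   # electrolyte diffusivity [m^2/s], transference number
eps, tau = 0.25, 2.0          # porosity and tortuosity (same in separator here)
L_cat, L_sep = 150e-6, 25e-6  # cathode and separator thickness [m]
c_l0 = 1000.0                 # average salt concentration [mol/m^3]
I = 300.0                     # current density [A/m^2], above the depletion onset

def c_l_profile(L_pz, x):
    """Eq. 9 with constant properties. In the cathode PZ the reaction flux is
    a_p*j_in = I/(F*L_pz) (Table 1) and zero in the separator, so both nested
    integrals are available in closed form; G(c) = D_amb*c/(1 - t_plus)
    inverts directly."""
    y = np.clip(x - (L_cat - L_pz), 0.0, None)   # distance from PZ/DZ boundary
    double_int = tau / eps * np.where(
        y <= L_pz,
        0.5 * y**2 * I / (F * L_pz),             # inside the cathode PZ
        (0.5 * L_pz + (y - L_pz)) * I / F,       # inside the separator
    )
    return (1.0 - t_plus) / D_amb * double_int

def salt_balance(L_pz):
    """Residual of the half-cell salt conservation condition, Eq. 10."""
    x = np.linspace(L_cat - L_pz, L_cat + L_sep, 2000)
    return np.trapz(c_l_profile(L_pz, x), x) - c_l0 * (L_pz + L_sep)

L_pz = brentq(salt_balance, 1e-6, L_cat)   # assumes salt depletion does occur
print(f"L_PZ = {L_pz*1e6:.1f} um, electrode utilization ~ {L_pz/L_cat:.2f}")
```

At currents below the depletion onset, salt_balance has no root in $(0, L_{cat})$ and the PZ spans the whole cathode; a production implementation would check for this case and would use the concentration-dependent $G^{-1}$ instead of the linear inverse.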
2.2.2 Solid Phase Module

In the previous Wang-Tang model,29 all the active material particles in the PZ are assumed to be fully reacted at the end of discharge. The normalized discharge capacity is thus given by $DoD_f = L_{PZ}/L_{cat}$. In the URCs model, the electrode particles in the PZ may have a depth of discharge (DoD) below 100% due to slow solid diffusion, while the particles in the DZ are still assumed to be fully unreacted. For reasons discussed below, we also do not assume that the electrode particles in the PZ have the same DoD.

According to the Butler-Volmer equation, the reaction flux $j_{in}$ is controlled by the overpotential $\eta$, which is given by:

$$\eta = \Phi_s(x) - \Phi_l(x) - U_{eq}(c_{ss}) \tag{16}$$

where $U_{eq}$ is the local OCP of the electrode particles, which is a function of the lithium surface concentration $c_{ss}$. Because $\Phi_l(x)$ and $\Phi_s(x)$ vary spatially, $U_{eq}$ must have a gradient within the electrode when a uniform $j_{in}$ or $\eta$ develops. For UR-type electrodes, this means the presence of a $c_{ss}$ gradient, as their $U_{eq}$ is sensitive to lithium stoichiometry. Therefore, uniform reaction is not the same as uniform SoC. A transient period must precede the UR behavior during discharge to establish such an SoC gradient. The Wang-Tang model neglects this transient period and assumes that UR occurs throughout the discharge process, so that all the particles reach 100% DoD simultaneously. Here we remove this approximation and allow the particle-level SoC (or DoD) to be non-uniform within the PZ in order to further improve the prediction accuracy.

In the solid phase module, we first determine the spatial distribution of $c_{ss}$ within the PZ from Eq. 16, and then correlate $c_{ss}$ to the particle-level DoD through the solution to the solid diffusion equation. The electrode-level DoD as a function of the cell voltage is then obtained by integration. Using the expression of $j_{in}$ given in Table 1, the overpotential $\eta(x)$ is solved from the Butler-Volmer equation:

$$\eta = -\frac{2RT}{F}\sinh^{-1}\!\left(\frac{F j_{in}}{2 i_0}\right) \tag{17}$$

where the anodic/cathodic transfer coefficients are assumed to be 1/2. The exchange current density $i_0$ is expressed as:

$$i_0(x) = F k_0 \sqrt{c_l(x)\, c_{ss}\,(c_{smax} - c_{ss})} \tag{18}$$

where $k_0$ is the reaction rate constant. As an approximation, $c_{ss}$ is replaced with $(c_{smax,cat} + c_{s0,cat})/2$ for the cathode and $c_{s0,an}/2$ for the anode in a full cell. For simplicity, we shall also assume that the electrical conductivity in the solid phase is sufficiently high that the solid potential is uniform across the cathode and anode: $\Phi_s = \Phi_{s,cat/an}$, where $\Phi_{s,cat/an}$ represent the cathode/anode terminal potentials. This assumption can be easily relaxed. By substituting Eqs. 15 and 17 into Eq. 16, we can solve for the $c_{ss}$ profile within the PZ in the cathode, and also in the anode in the case of a full cell:

$$c_{ss,cat}(x, \Phi_{s,cat}) = U_{eq,cat}^{-1}\left(\Phi_{s,cat} - \Phi_l(x) - \eta_{cat}(x)\right) \tag{19}$$
$$c_{ss,an}(x, \Phi_{s,an}) = U_{eq,an}^{-1}\left(\Phi_{s,an} - \Phi_l(x) - \eta_{an}(x)\right) \quad \text{(Full Cell Only)}$$

where $U_{eq}^{-1}$ is the inverse function of $U_{eq}(c)$. The $c_{ss}$ profile evolves with the terminal potentials during the discharge process. When solid diffusion is facile, $c_{ss}$ provides a good approximation to the average Li concentration $\bar{c}_s$ in the particle. However, here we also need to consider the situation where $c_{ss}$ differs significantly from $\bar{c}_s$ due to sluggish diffusion, i.e., when $r_p^2/D_{s,p}$ is much larger than the discharge time.
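In practice, evaluating Eq. 19 amounts to inverting a measured, monotonic OCP curve. The small sketch below illustrates the interpolation-based inverse under assumptions: the OCP samples are a placeholder linear curve, not a fitted NMC curve from the paper.

```python
# Sketch of the U_eq inversion in Eq. 19. The OCP samples are placeholders
# (a monotonically decreasing toy curve), not an NMC fit from the paper.
import numpy as np

c_smax = 49000.0                                 # illustrative value [mol/m^3]
c_grid = np.linspace(0.4, 1.0, 200) * c_smax     # surface concentration grid
U_grid = 4.4 - 1.2 * (c_grid / c_smax)           # placeholder OCP curve [V]

def c_ss_from_potentials(phi_s, phi_l, eta):
    """Solve U_eq(c_ss) = phi_s - phi_l - eta by table inversion."""
    u = phi_s - phi_l - eta
    # np.interp requires increasing abscissae, so flip the decreasing OCP
    return np.interp(u, U_grid[::-1], c_grid[::-1])
```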
To determine $\bar{c}_s$, we solve the solid diffusion equation in a spherical particle:

$$\frac{\partial c_s}{\partial t} = \frac{D_{s,p}}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial c_s}{\partial r}\right) \tag{20}$$

with the constant-flux boundary condition $-D_{s,p}\,\partial c_s/\partial r\,|_{r=r_p} = j_{in}$ and initial condition $c_s(r, t=0) = c_{s0,p}$. An analytical solution to the equation exists.35 In particular, the surface Li concentration has the following expression:

$$c_{ss,p}^{sol}(t;\, j_{in}) = c_{s0,p} + j_{in}\left[\frac{3t}{r_p} + \frac{r_p}{5 D_{s,p}} - \frac{2 r_p}{D_{s,p}}\sum_{m=1}^{\infty}\lambda_m^{-2}\exp\!\left(-\frac{\lambda_m^2 D_{s,p}\, t}{r_p^2}\right)\right] \tag{21}$$

where $\lambda_m$ is the $m$-th positive root of the equation $\tan(\lambda) = \lambda$. Eq. 21 shows that $c_{ss}$ gradually deviates from $c_{s0}$ during discharge. We use the time it takes for $c_{ss}$ to reach the value specified in Eq. 19 to estimate the amount of Li (de)intercalated into an individual particle:

$$\Delta c_{s,p}^{sol}(x, \Phi_{s,p}) = \frac{3\, j_{in}\, T_p\!\left(c_{ss,p}(x, \Phi_{s,p});\, j_{in}\right)}{r_p} \tag{22}$$

where $T_p(c_{ss};\, j_{in})$ is the inverse function of $c_{ss,p}^{sol}(t;\, j_{in})$ in Eq. 21. The electrode-level DoD can be obtained through the integration of $\Delta c_{s,p}^{sol}$:

$$DoD_{cat}(\Phi_{s,cat}) = \frac{\int_{L_{cat}-L_{PZ}}^{L_{cat}} \Delta c_{s,cat}^{sol}(x, \Phi_{s,cat})\,\nu_{cat}(x)\, dx}{(c_{smax,cat} - c_{s0,cat})\int_{0}^{L_{cat}}\nu_{cat}(x)\, dx} \tag{23}$$
$$DoD_{an}(\Phi_{s,an}) = \frac{\int_{L_{cat}+L_{sep}}^{L_{cat}+L_{sep}+L_{an}} \Delta c_{s,an}^{sol}(x, \Phi_{s,an})\,\nu_{an}(x)\, dx}{c_{s0,an}\int_{L_{cat}+L_{sep}}^{L_{cat}+L_{sep}+L_{an}}\nu_{an}(x)\, dx} \quad \text{(Full Cell Only)}$$

The inverse functions of Eq. 23, $\Phi_{s,cat}(DoD)$ and $\Phi_{s,an}(DoD)$, relate the terminal potentials at the current collectors to the DoD. Using them, the cell voltage $U_{out}$ can be expressed as a function of DoD:

$$U_{out}(DoD) = \Phi_{s,cat}(DoD) - \eta_{Li} \quad \text{(Half Cell)} \tag{24}$$
$$U_{out}(DoD) = \Phi_{s,cat}(DoD) - \Phi_{s,an}(DoD) \quad \text{(Full Cell)} \tag{25}$$

In Eq. 24, $\eta_{Li}$ is the overpotential at the Li metal anode surface, given by $2RT\sinh^{-1}\!\left(I/(2 i_0^{Li})\right)/F$. Eq. 24 or 25 predicts the discharge voltage curve and also the final DoD ($DoD_f$) upon reaching the cutoff voltage $U_{cutoff}$:

$$DoD_f = U_{out}^{-1}(U_{cutoff}) \tag{26}$$

We briefly summarize the key steps in the development of the URCs model. We start by employing the steady-state electrolyte transport and UR assumptions to determine the electrolyte PZ width $L_{PZ}$ (Eq. 10 or 11) and the distributions of the salt concentration $c_l(x)$ (Eq. 9) and electrolyte potential $\Phi_l(x)$ (Eq. 15) within the PZ. The calculated $\Phi_l(x)$ is used to determine the OCP of the cathode/anode material (Eq. 16) at a given terminal potential, from which the surface lithium concentration in electrode particles $c_{ss}$ is obtained (Eq. 19). The amount of Li (de)intercalated in the particles $\Delta c_{s,p}^{sol}$ is then estimated from $c_{ss}$ (Eq. 22) based on the solution to the solid diffusion equation. Finally, the integration of the particle-level DoD within the PZ allows us to predict the discharge voltage curve (Eqs. 24 and 25) and the discharge capacity (Eq. 26). The URCs model derived above has been implemented in MATLAB. The open source code is available online (see Data Availability).
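The series in Eq. 21 and its inversion $T_p$ (Eq. 22) are easy to evaluate numerically. The sketch below is an illustration under assumptions (truncated series, illustrative inputs), not the authors' MATLAB implementation; note that $\sum_m \lambda_m^{-2} = 1/10$, so a truncated sum reproduces $c_{ss}(0) = c_{s0}$ only approximately, and more terms are needed at small $t$.

```python
# Sketch of Eq. 21 (surface concentration of a spherical particle under
# constant flux) and the inversion T_p used in Eq. 22. Illustrative only.
import numpy as np
from scipy.optimize import brentq

def tan_eq_roots(n):
    """First n positive roots of tan(x) = x; the m-th root lies in
    (m*pi, (m + 1/2)*pi), where tan(x) - x changes sign."""
    return np.array([
        brentq(lambda x: np.tan(x) - x, m*np.pi + 1e-9, (m + 0.5)*np.pi - 1e-9)
        for m in range(1, n + 1)
    ])

LAM2 = tan_eq_roots(100)**2   # lambda_m^2, precomputed once

def c_ss_sol(t, j_in, r_p, D_s, c_s0):
    """Surface Li concentration from Eq. 21 at time t for constant flux j_in."""
    series = np.sum(np.exp(-LAM2 * D_s * t / r_p**2) / LAM2)
    return c_s0 + j_in * (3*t/r_p + r_p/(5*D_s) - 2*r_p/D_s * series)

def T_p(c_ss_target, j_in, r_p, D_s, c_s0, t_max=1e6):
    """Invert Eq. 21 for the time at which c_ss reaches a target (Eq. 22);
    c_ss_sol is monotonically increasing in t for j_in > 0."""
    return brentq(lambda t: c_ss_sol(t, j_in, r_p, D_s, c_s0) - c_ss_target,
                  0.0, t_max)
```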
2.3 Comparison with P2D Simulations

The URCs model is benchmarked against P2D simulations in a series of comparative studies. Tests are conducted for both NMC half cells and NMC/Gr full cells with the cathode porosity fixed at 0.25. In full cell configurations, the anode-to-cathode thickness ratio is chosen to be $L_{an} : L_{cat} = 1.15$ and the anode porosity $\epsilon_{an}$ is set to fix the anode-to-cathode capacity ratio at 1.1. The cutoff voltage is 3.0 V for NMC half cells and 2.8 V for NMC/Gr full cells. The electrolyte properties used in this study are based on a LiPF$_6$ in PC/EC/DMC electrolyte reported by Valøen and Reimers.36 Electrode tortuosity relationships are derived from the study on calendered electrodes conducted by Usseglio-Viretta et al.37 The parameter values used in the URCs and P2D calculations are listed in Table S1.

2.3.1 Mass and Potential Distributions

First, we compare the mass and potential distributions in NMC half cells as solved by the two approaches. Figure 2 shows $c_l$, $\Phi_l$ and $c_{ss}$ from P2D simulations versus the URCs model for an NMC half cell undergoing 2C discharge. The P2D simulation results are taken from an intermediate state at DoD $\approx$ 45%, at which the UR behavior has been established. The URCs model predicts a PZ width $L_{PZ} \approx 90\,\mu$m, placing the PZ/DZ boundary at a distance of $\approx 60\,\mu$m from the current collector ($X = 0$). As shown in Figure 2a, the predicted $c_l$ from the URCs model agrees very well with P2D. This validates the steady-state transport and UR assumptions that are used to simplify the electrolyte mass conservation equation from the porous electrode theory.

Figure 2: Comparison of mass and potential distributions predicted by the URCs model and P2D simulation. Results from the URCs model and the P2D simulation for an NMC half cell with a cathode thickness $L_{cat} = 150\,\mu$m and cathode particle size $r_{cat} = 4\,\mu$m discharged to DoD = 45% at 2C rate: a salt concentration $c_l$, b electrolyte potential $\Phi_l$ and c surface lithium concentration in NMC particles $c_{ss}$. The PZ/DZ interface predicted by the URCs model is marked by the vertical dashed line. $X$ is the distance from the cathode current collector.

Figure 2b shows that $\Phi_l$ calculated by URCs and P2D are in good agreement inside the PZ. However, $\Phi_l$ diverges at the PZ/DZ boundary in the URCs model because the salt is assumed to be fully depleted in the DZ. In contrast, $\Phi_l$ exhibits a more gradual transition and extends into the DZ in the P2D simulation. This difference is caused by the fact that $c_l$ in P2D is not strictly zero within the DZ, so a small reaction flux still exists in this region. To handle $\Phi_l$ properly in the calculation, we truncate it below a cutoff value $\Phi_l^{cutoff}$ (horizontal dashed line in Figure 2b), i.e., we replace $\Phi_l$ with $\max(\Phi_l, \Phi_l^{cutoff})$ in the URCs model. Though the value of $\Phi_l^{cutoff}$ is chosen somewhat arbitrarily, tests show that it has little effect on the URCs predictions of the discharge voltage curve and capacity.

Compared to $c_l$ and $\Phi_l$, the $c_{ss}$ predicted by URCs shows more noticeable deviation from the P2D result; see Figure 2c. Unlike the URCs prediction, cathode particles are partially lithiated inside the DZ in the P2D simulation due to the non-vanishing reaction flux. On the other hand, they are less lithiated within the PZ compared to URCs. This relatively large deviation can be attributed to the errors accumulated in the evaluation of $\Phi_l$ and $\eta$, which are used to calculate the distribution of $U_{eq}$ and then $c_{ss}$ across the PZ. Besides the error associated with $\Phi_l$, the UR assumption also introduces inaccuracy in the calculation of $\eta$. Additionally, the exchange current density is approximated as independent of $c_{ss}$, which is another source of error. However, as we will demonstrate below, the errors inherent in the predicted $c_{ss}$ distribution are offset upon summation, which leads to better estimates of the discharge capacity and voltage curve.
2.3.2 Discharge Capacity

Next we examine the performance of the URCs model in predicting the discharge rate capability of battery cells. In Figure 3a, we plot the predictions by the URCs model, the single particle model (SPM) and the original Wang-Tang UR model against P2D simulations for NMC half cells with $L_{cat} = 120\,\mu$m and variable particle radius (5–10 $\mu$m). The UR model (solid black line) predicts a critical C rate $C_{crit}$ at which salt depletion occurs in the electrolyte. Below $C_{crit}$, the PZ spans the entire cathode and the predicted $DoD_f$ is always 1 because the electrode particles are assumed to be fully lithiated. Above $C_{crit}$, $DoD_f$ decreases below 1 as the PZ width is reduced by sluggish electrolyte transport. On the other hand, the SPM (solid colored lines) considers solid diffusion in active material particles to be the only rate-limiting reaction mechanism. Transport in the electrolyte is assumed to be sufficiently facile and therefore free from salt concentration or potential gradients. As the size of the active material particles increases, solid diffusion becomes slower due to the longer diffusional pathway, resulting in reduced rate performance predicted by the SPM.

Figure 3a clearly shows that the UR model overestimates the discharge capacity when the particle size is large and solid diffusion is no longer facile. The SPM also significantly overestimates the rate performance above $C_{crit}$, where salt depletion becomes the predominant factor behind the deteriorating cell performance. In contrast, the URCs model (dashed lines) exhibits excellent agreement with P2D at all the tested particle radii. In particular, the URCs model captures the gradual decrease of $DoD_f$ with the C rate below $C_{crit}$, which is caused by solid diffusion alone. It also predicts with good accuracy the precipitous drop of $DoD_f$ above $C_{crit}$, which is inflicted jointly by limitations in electrolyte transport and solid diffusion. We note that $C_{crit}$ is the same in both the URCs and UR models because the URCs model uses the same criterion to predict the onset of salt depletion in the electrolyte. Similarly good agreement between URCs and P2D is also seen in NMC/Gr full cells, as shown in Figure 3c. The above test confirms that the URCs model combines the UR and SPM models as intended and is able to accurately predict the discharge behavior of battery cells under mixed kinetic control.

Figure 3: Predicting discharge rate performance for NMC half cells and NMC/Gr full cells with P2D simulations, the URCs model and other simplified models. a The advantage of incorporating mixed reaction kinetics within the URCs model (dashed lines) is highlighted through comparison with the single particle model (solid colored lines) and the UR model (solid black line) on an NMC half cell with fixed $L_{cat} = 120\,\mu$m and varying $r_{cat}$. The URCs model agrees well with the P2D simulations (colored symbols). The SPM overlooks the pronounced salt depletion effect at C rates above $C_{crit}$. The UR model incorrectly assumes facile solid diffusion regardless of particle size. The vertical axis is scaled to offer a better view of the particle size effect at larger $DoD_f$. b The agreement between the discharge capacities predicted by URCs and P2D is retained in NMC half cells across a wide range of $L_{cat}$ and C rates. c Normalized discharge capacities predicted by the P2D simulation (colored symbols), UR (solid line) and URCs (dashed lines) models for NMC/Gr full cells with $L_{cat} = 70\,\mu$m and different $r_{cat}$.
d Normalized discharge capacities predicted by P2D (colored symbols) and URCs (dashed lines) for NMC/Gr full cells with $r_{cat} = 4\,\mu$m and different cathode thicknesses.

As a further test, we compare the URCs predictions with the P2D simulations over a wide range of electrode thicknesses (70–300 $\mu$m) in half and full cells in Figures 3b and 3d, respectively. A high level of agreement is seen in half cell configurations and when $DoD_f > 30\%$ for full cells. At lower $DoD_f$, however, the URCs model tends to underestimate the discharge capacity of the full cells, especially when electrodes are thick. This discrepancy is also observed in the comparison between the UR model and P2D simulations.29 It is due to the hybrid reaction behavior exhibited by the graphite anode. While the URCs model assumes graphite to behave as a UR-type electrode, the OCP curve of graphite consists of both sloped and flat segments, which correspond to the solid solution and two-phase coexistence regions, respectively. During discharge, the lithiated graphite anode goes through the I-II staging transition at DoD < 30%, in which it displays the MZR behavior. This explains the relatively large error of the URCs model in the low-$DoD_f$ regime. As the discharge process progresses to higher DoD, the graphite OCP curve enters a more sloped region. Its reaction behavior accordingly becomes more UR-like, and the agreement between URCs and P2D thus improves. When applying the URCs model to optimizing the full cell configuration, its relatively poor prediction accuracy at low $DoD_f$ is not a practical concern because the optimization objective is to identify cell parameters that achieve high $DoD_f$, where the URCs model performs well.

2.3.3 Discharge Voltage Curves and Energy Output

Prediction of the voltage curves upon galvanostatic discharge based on Eqs. 24 and 25 is an added capability of the URCs model. In Figures 4a and 4c, we compare the voltage curve predictions between P2D simulations and the URCs model for a half and a full cell with a cathode thickness $L_{cat}$ of 70 $\mu$m at several C rates. The agreement between the two is very good for the half cell, even at high rates up to 10C. The qualitative features of the voltage curves are well captured by the URCs model. For the full cell, the agreement is less satisfactory at relatively high rates, where $DoD_f$ is low. The underlying reason is similar to the trend seen in the discharge capacity prediction and is caused by the hybrid reaction behavior of the graphite anode. The areal discharge energy of the battery cell is given by the area underneath the discharge voltage curve:

$$E_A = Q_0 \int_0^{DoD_f} V\, dDoD$$

where $Q_0$ is the areal capacity of the cathode. In Figures 4b and 4d, we compare the cell-level specific energy $E_w$ predicted by the URCs model, which is calculated from $E_A$ using the cell parameters in Table S2, with the P2D results for NMC/Li and NMC/Gr cells across a range of cathode thicknesses. Overall, the URCs-predicted discharge energy displays a level of agreement with P2D comparable to that of the discharge capacity shown in Figures 3b and 3d.
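As an aside, once a discharge curve is available, the energy integral above is a one-line quadrature. The sketch below uses a placeholder voltage curve (not data from the paper) purely to show the computation:

```python
# Areal discharge energy E_A = Q_0 * integral of V over DoD (trapezoidal rule).
# The voltage samples are an illustrative placeholder, not the paper's data.
import numpy as np

Q0 = 60.0                          # areal cathode capacity, illustrative value
dod = np.linspace(0.0, 0.85, 200)  # DoD grid up to DoD_f = 0.85
V = 4.1 - 0.8*dod - 0.4*dod**4     # placeholder discharge curve [V]
E_A = Q0 * np.trapz(V, dod)        # energy in (units of Q0) x V
print(f"E_A = {E_A:.1f}")
```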
Figure 4: Predicting discharge voltage curves and specific energy output of NMC half cells and NMC/Gr full cells with the URCs model. a URCs-predicted discharge voltage curves (dashed lines) vs P2D results (solid lines) for an NMC half cell with $L_{cat} = 70\,\mu$m and $r_{cat} = 4\,\mu$m. b URCs-predicted cell-level specific energy $E_w$ (dashed lines) in comparison to P2D simulations (colored symbols) for NMC half cells with $r_{cat} = 4\,\mu$m and different cathode thicknesses. The URCs model reveals the Pareto front (solid line) that facilitates battery system design decisions involving trade-offs between energy and power outputs. The vertical axis is scaled to offer a better view of the upper range of $E_w$ where crossovers occur. c Discharge voltage curves predicted by URCs (dashed lines) and P2D (solid lines) for an NMC/Gr full cell with $L_{cat} = 70\,\mu$m and $r_{cat} = 4\,\mu$m. d $E_w$ predicted by URCs (dashed lines) versus P2D (colored symbols) for NMC/Gr cells with $r_{cat} = 4\,\mu$m and different cathode thicknesses.

Figures 4b and 4d are analogous to the Ragone plot38 and reveal the classic trade-off between the energy and power outputs of battery cells. The $E_w$ vs C rate curves display a typical "knee point", beyond which salt depletion occurs in the electrolyte and the discharge energy drops precipitously. While cells with thicker electrodes boast higher maximum $E_w$, thanks to a smaller weight fraction of inactive components in the cells, their discharge energies decrease more rapidly with the discharge rate and exhibit a crossover with those of thinner electrodes, resulting in inferior performance under high power demand. The envelope of these curves, or the Pareto front, informs the highest achievable $E_w$ at a given C rate and the corresponding electrode thickness. We see that the URCs model is able to accurately predict the Pareto front and thus guide the selection of the optimal electrode thickness for a given power requirement with other cell parameters fixed. In the following sections, we apply the URCs model to battery cell optimization in more complex scenarios where two or more cell parameters are adjustable.

2.4 Optimizing Battery Cells with the URCs Model

2.4.1 Grid-space Search

In this section, we set out to optimize battery cell parameters with the URCs model and benchmark its accuracy and efficiency against the P2D simulations. As a first test, we use grid-space search to maximize the cell-level specific capacity $Q_w$ or specific energy $E_w$ at 1C discharge against cathode thickness $L_{cat}$ and porosity $\epsilon_{cat}$ for NMC/Li and NMC/Gr cells. The cathode particle radius is assumed to be 4 $\mu$m. As in the previous section, the $L_{an}:L_{cat}$ ratio is fixed at 1.15 in NMC/Gr full cells, and the anode porosity $\epsilon_{an}$ is set to maintain an anode-to-cathode capacity ratio of 1.1. Other cell component information is listed in Table S2.

When using P2D simulation for the optimization, we conduct the search with a two-level resolution because of its high computational cost. The $L_{cat}$–$\epsilon_{cat}$ space is first scanned on a coarse grid, which samples 15 values of $L_{cat}$ between 50 $\mu$m and 400 $\mu$m with an increment of 25 $\mu$m and 19 values of $\epsilon_{cat}$ between 0.15 and 0.60 with an increment of 0.025. Subsequently, a refined search is performed on a finer grid in the neighborhood of the ($L_{cat}$, $\epsilon_{cat}$) grid point that maximizes $Q_w$ or $E_w$. The search region is within $\pm 25\,\mu$m and $\pm 0.05$ around the approximate optimum $L_{cat}$ and $\epsilon_{cat}$, respectively, and the grid spacings are (2 $\mu$m, 0.005). We use this fine scan to identify the globally optimal cathode thickness $L_{cat}^{opt}$ and porosity $\epsilon_{cat}^{opt}$. A total of 831 simulations (285 in the coarse scan and 546 in the refined scan) are carried out; a sketch of this two-level scan is given below.
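The two-level scan is simple to express in code. In this sketch, q_w is a hypothetical callable standing in for a P2D (or URCs) evaluation of the objective; the grid extents and spacings mirror the text, which is why the coarse and fine stages contain 285 and 546 evaluations, respectively.

```python
# Two-level grid search as described above: a coarse scan of the L_cat-eps_cat
# plane followed by a refined scan around the best coarse point. q_w is a
# placeholder for the objective (e.g., a P2D evaluation of specific capacity).
import numpy as np
from itertools import product

def two_level_search(q_w):
    L_coarse = np.arange(50e-6, 400e-6 + 1e-9, 25e-6)   # 15 values
    e_coarse = np.arange(0.15, 0.60 + 1e-9, 0.025)      # 19 values -> 285 runs
    L0, e0 = max(product(L_coarse, e_coarse), key=lambda p: q_w(*p))
    L_fine = np.arange(L0 - 25e-6, L0 + 25e-6 + 1e-9, 2e-6)   # 26 values
    e_fine = np.arange(e0 - 0.05, e0 + 0.05 + 1e-9, 0.005)    # 21 values -> 546 runs
    return max(product(L_fine, e_fine), key=lambda p: q_w(*p))
```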
When using the URCs model in the grid-space search, we scan the same region of the parameter space at higher resolution thanks to the model's computational efficiency: 1000 uniformly spaced values each of $L_{cat}$ and $\epsilon_{cat}$ are sampled, for a total of $10^6$ calculations. The optimal cell configuration predicted by the URCs model is denoted as $L_{cat}^{opt*}$ and $\epsilon_{cat}^{opt*}$.

Figure 5: Optimization of NMC/Li and NMC/Gr cells to maximize the cell-level specific capacity $Q_w$ and specific energy $E_w$ at 1C discharge with grid-space search. a Global optimal $Q_w$ for an NMC half cell determined from the two-step grid-space search with P2D simulations (black square) and the high-resolution grid-space scan with the URCs model (red circle), overlaid on the URCs-generated contour plot on the $L_{cat}$–$\epsilon_{cat}$ plane. b The URCs model correctly predicts the upper limit of $Q_w$ (dashed envelope) achievable at each $L_{cat}$. P2D simulations with varying $\epsilon_{cat}$ are indicated by colored square symbols. c Global optima for $E_w$ on the $L_{cat}$–$\epsilon_{cat}$ plane of an NMC half cell, determined by P2D (black square) and URCs (red circle) using the same grid-space search strategy as for the $Q_w$ optimization. d, e, and f are analogous to a, b, and c but for NMC/Gr full cells.

Figures 5a and 5d present the contour plots of the URCs-generated $Q_w$ from the grid-space search and also mark the locations of the optimal cell configurations identified by P2D (black square) and URCs (red circle). For NMC half cells, the optimal cell configuration from P2D simulations is $(L_{cat}^{opt}, \epsilon_{cat}^{opt}) = (163\,\mu\text{m}, 0.31)$, with $Q_w^{opt} = 109.8$ mAh/g. The URCs model predicts $(L_{cat}^{opt*}, \epsilon_{cat}^{opt*}) = (187.9\,\mu\text{m}, 0.285)$ and $Q_w^{opt*} = 113.6$ mAh/g, which deviate from the P2D results by 15.2%, 8.2% and 3.5%, respectively. For NMC/Gr full cells, we obtain $(L_{cat}^{opt}, \epsilon_{cat}^{opt}) = (82\,\mu\text{m}, 0.225)$ and $Q_w^{opt} = 69.1$ mAh/g from P2D and $(L_{cat}^{opt*}, \epsilon_{cat}^{opt*}) = (88.2\,\mu\text{m}, 0.216)$ and $Q_w^{opt*} = 70.6$ mAh/g from URCs. The relative differences are less than 7.6%, 4.0% and 2.2%, respectively.

The optimal parameters found by URCs show good agreement with the P2D simulations, with the half cell predictions having slightly larger errors than the full cells. In Figures 5b and 5e, we visualize the P2D calculations of $Q_w$ from the coarse grid search by grouping them according to the cathode thickness $L_{cat}$ for half and full cells, respectively. Additionally, the maximum obtainable $Q_w$ for each $L_{cat}$ is calculated from the URCs model and plotted as a dashed line. It can be seen that the URCs model captures the upper limit of $Q_w$ well for both types of cells, with the exception of full cell configurations with large $L_{cat}$, for which the URCs model underestimates $Q_w$. It is noticeable that the upper envelope of $Q_w$ is flatter and varies more gradually with $L_{cat}$ for half cells than for full cells. This explains why the URCs model has a relatively large error in $L_{cat}^{opt}$ for half cells, as their $Q_w$ is not as sensitive to $L_{cat}$ as in full cells, which is also reflected by the sparser contour lines of the half cells (Figure 5a) compared with the full cells (Figure 5d). Figures 5c and 5f show the URCs calculations of $E_w$ as a function of $L_{cat}$ and $\epsilon_{cat}$ for half and full cells, respectively. The optimal cell configurations that maximize $E_w$ as determined by URCs and P2D are also identified in the plots. Compared to $Q_w$, the URCs model performs even better when used to optimize against $E_w$.
For half cells, it differs from P2D by 6.2%, 9.9% and 1.3% in the predictions of $L_{cat}$, $\epsilon_{cat}$ and $E_w^{opt}$, respectively, and the relative differences further reduce to 4.1%, 4.2% and 1.2% for full cells. The cathode thickness optimized for $E_w$ is slightly lower than that optimized for $Q_w$. This is because increasing the electrode thickness not only reduces the salt PZ width but also increases the cell resistance, which pushes the maximum $E_w$ towards lower $L_{cat}$.

While the URCs model can be used to find optimal battery configurations with good accuracy, its computational efficiency is far superior to that of the P2D simulation. We benchmarked their performance on a Windows laptop (2.3 GHz Intel Core i7 processor, 16 GB RAM). In the grid-space search, it takes the URCs model ~16 milliseconds on average to complete a calculation. By comparison, a P2D simulation implemented in the commercial software COMSOL requires an average of 432 seconds to complete. We also pit URCs against a state-of-the-art fast P2D solver, PyBaMM.39 When applied to the same grid search, PyBaMM consumes ~10.2 seconds per simulation on average, which is more than 600 times the running time of the URCs model. The cathode/separator/anode stack is discretized by 100/20/115 grid points and the electrode particles by 20 points, which are typical for P2D simulations.

2.4.2 Gradient-based Optimization

While we used the grid-space search to illustrate the computational efficiency of the URCs model in the last section, a perhaps even more notable advantage of the URCs model over P2D simulation lies in its compatibility with gradient-based optimization methods, a family of the most widely used algorithms for finding the optimum of an objective function. In our tests, we find that P2D simulation performs poorly with gradient-based methods, which are usually unable to find the optimal cell parameters. In previous studies,40 derivative-free optimization algorithms were instead employed in conjunction with P2D despite the need for significantly more objective function evaluations than gradient-based methods. P2D's unsatisfactory performance when used in optimization most likely stems from its inherent numerical errors, which generate inaccurate estimates of the first derivatives and cause the gradient-based search to fail.41 As illustrated in Figure 6a, P2D simulation tends to produce a non-smooth objective function due to the errors introduced by the discretization of the system and the differential algebraic equation solver. This causes its derivatives to be ill-behaved and the search to be prone to divergence or to being trapped in fictitious local minima. On the other hand, the analytical nature of the URCs model allows the objective function and its derivatives to be evaluated at much higher precision, making it excel in gradient-based optimization.

As a demonstration, we repeat the task of maximizing $Q_w$ at 1C discharge against $L_{cat}$ and $\epsilon_{cat}$ for half cells with $r_{cat} = 4\,\mu$m by using the gradient-based methods implemented in MATLAB's fmincon function, which solves constrained optimization problems. In the optimization process, the upper and lower bounds for $L_{cat}$ and $\epsilon_{cat}$ are set to [50 $\mu$m, 400 $\mu$m] and [0.15, 0.6], respectively, and we let the search start at nine different initial guesses formed from the combinations of $L_{cat} \in \{100\,\mu\text{m}, 225\,\mu\text{m}, 350\,\mu\text{m}\}$ and $\epsilon_{cat} \in \{0.2, 0.35, 0.5\}$.
Figure 6b shows the initial guesses (triangular symbols), the successive steps (solid lines) and the final outcomes (cross symbols) of the search when P2D is used with the default optimization method (interior-point) and tolerance settings in fmincon. It can be seen that the optimization results from the different initial guesses are all scattered in the parameter space and none of them is close to the globally optimal configuration (black square) determined by the grid-space search. The optimization process is prone to premature termination as a result of being trapped in local minima. For example, Figure 6c shows that the trial that begins at $(L_{cat}, \epsilon_{cat}) = (100\,\mu\text{m}, 0.35)$ ends in just three steps.

Figure 6: Optimization of NMC half cells to maximize the cell-level specific capacity $Q_w$ at 1C discharge with the URCs model and the gradient-based optimization method. a Schematic of the different behaviors of the P2D simulation and the URCs model when used in gradient-based optimization. Numerical errors in P2D result in a non-smooth objective function, leading to discontinuous first derivatives and failure to find the global optimum. The objective function evaluated by URCs is smoother, which enables the optimization to converge. b P2D-based optimization trajectories (solid lines) on the $L_{cat}$–$\epsilon_{cat}$ plane, superimposed on the contour plot of $Q_w$ generated by the URCs model. Searches from different initial guesses (triangle symbols) terminate at different endpoints (cross symbols) away from the global optimum located by the P2D-based grid-space search (black square). c Improvement of $Q_w$ during the optimization iteration process using P2D (black line) and URCs (red line). The initial guess is $L_{cat} = 100\,\mu$m and $\epsilon_{cat} = 0.35$. The black and red dashed lines represent the global maximum $Q_w$ from the grid-space search calculated by P2D and URCs, respectively. d URCs-based optimization trajectories (solid lines) show that searches from different initial guesses (triangle symbols) converge to the same global URCs optimum (red circle).

In contrast, the URCs model fares considerably better in the same task. As shown in Figure 6d, searches from all nine starting points successfully converge to the true optimum identified by the grid-space search. Figure 6c highlights the high efficiency of optimization with URCs. With the initial guess $(L_{cat}, \epsilon_{cat}) = (100\,\mu\text{m}, 0.35)$, the solver approaches the neighborhood of the optimum in just 5 iterations and converges after another 6 iterations. On average, the URCs model is evaluated 85 times per optimization attempt (note that multiple function calls are made in each iteration). Using other gradient-based algorithms available in fmincon (e.g., sequential quadratic programming) yields similar performance. We note that the optimization outcomes show little sensitivity to the selected algorithm and the bounds of the independent variables, with discrepancies within 1‰. As a comparison, we also test the performance of combining the URCs model with a derivative-free method, the pattern search algorithm, which was used to optimize battery cell configurations with P2D simulations by Dai and Srinivasan.40 The algorithm is implemented in MATLAB's patternsearch function, and the default solver settings are used except for raising the maximum numbers of iterations and function calls. An average of 4997 objective function evaluations are needed by the algorithm to find the optimum, which is 59 times what is required by a gradient-based method.
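The paper performs this search with MATLAB's fmincon; for readers working in Python, an equivalent bounded, multi-start setup might look like the sketch below. Here urcs_specific_capacity is a hypothetical, smooth toy surrogate standing in for a URCs model evaluation (it is not part of the released code), and L-BFGS-B is used in place of fmincon's interior-point method.

```python
# Hypothetical SciPy analogue of the fmincon experiment: maximize Q_w over
# (L_cat, eps_cat) within bounds, starting from nine initial guesses.
from itertools import product
from scipy.optimize import minimize

def urcs_specific_capacity(p):
    """Smooth toy surrogate for Q_w [mAh/g] with one interior maximum.
    Placeholder only; swap in a real URCs evaluation here."""
    L_cat, eps = p
    return 110.0 - (L_cat*1e6 - 170.0)**2 / 500.0 - 2000.0*(eps - 0.30)**2

bounds = [(50e-6, 400e-6), (0.15, 0.60)]          # L_cat [m], eps_cat
starts = product([100e-6, 225e-6, 350e-6], [0.2, 0.35, 0.5])
runs = [
    minimize(lambda p: -urcs_specific_capacity(p), x0=list(s),
             method="L-BFGS-B", bounds=bounds)
    for s in starts
]
best = min(runs, key=lambda r: r.fun)   # with a smooth model, all runs agree
```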
2.4.3 Hybrid Optimization Scheme

To take advantage of the speed of the URCs model and further improve the prediction accuracy, we propose a hybrid approach to optimizing battery cell configurations. The idea is straightforward. Gradient-based optimization is first applied with the URCs model to quickly bring the search near the global optimum. A parameter sweep using P2D simulations is then carried out in the local neighborhood to locate the optimal parameters more accurately. We use the URCs model to effectively reduce the domain size for the P2D-based grid-space search, which is time- and resource-consuming. Based on the relative differences between the URCs and P2D results revealed in our tests, it is reasonable to set the search domain within ca. ±15% of the URCs-predicted optimal parameter values. A sketch of this two-stage loop is given below.
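Under stated assumptions (two placeholder objective callables, a fixed ±15% window and a uniform local grid), the scheme can be sketched as:

```python
# Sketch of the hybrid scheme: a gradient-based search on the fast URCs model
# narrows the domain, then a local P2D grid scan refines the optimum.
# urcs_objective and p2d_objective are placeholders for the two models.
import numpy as np
from scipy.optimize import minimize

def hybrid_optimize(urcs_objective, p2d_objective, x0, bounds,
                    margin=0.15, n_grid=8):
    # Stage 1: fast gradient-based search on the analytical URCs model.
    stage1 = minimize(lambda x: -urcs_objective(x), x0,
                      method="L-BFGS-B", bounds=bounds)
    x_star = stage1.x
    # Stage 2: local P2D grid scan within +/- margin of the URCs optimum.
    axes = [np.linspace((1 - margin)*v, (1 + margin)*v, n_grid) for v in x_star]
    mesh = np.meshgrid(*axes, indexing="ij")
    candidates = np.stack([m.ravel() for m in mesh], axis=1)
    scores = np.array([p2d_objective(c) for c in candidates])
    return candidates[np.argmax(scores)]
```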
To test the proposed hybrid scheme, we optimize the cathode thickness $L_{cat}$, cathode porosity $\epsilon_{cat}$ and anode porosity $\epsilon_{an}$ of an NMC/Gr full cell to maximize its specific capacity $Q_w$ at 1C discharge. The anode thickness $L_{an}$ is adjusted accordingly to give an anode:cathode capacity ratio of 1.1. Because of the increased number of design variables, performing a global grid-space search similar to Sec. 2.4.1 would require over 15000 P2D simulations (5415 in the coarse scan and 11466 in the refined scan), which is computationally very expensive. Using the hybrid approach instead, we let the gradient-based search start at an arbitrary initial guess $(L_{cat}, \epsilon_{cat}, \epsilon_{an}) = (150\,\mu\text{m}, 0.35, 0.35)$. It takes only 101 function evaluations and a few seconds of computation time for the optimizer to find the URCs optimum at $(L_{cat}^{opt*}, \epsilon_{cat}^{opt*}, \epsilon_{an}^{opt*}) = (91.5\,\mu\text{m}, 0.247, 0.276)$ with $Q_w^{opt*} = 71.39$ mAh/g. The optimization trajectory in the parameter space is visualized in Figures 7a, b, and c, each illustrating a cross-sectional plane formed by two of the three independent parameters. The subsequent P2D scan is carried out in the neighborhood (blue boxes in Figure 7) of $L_{cat} \in (77.8\,\mu\text{m}, 105.3\,\mu\text{m})$, $\epsilon_{cat} \in (0.210, 0.284)$ and $\epsilon_{an} \in (0.235, 0.318)$, with step sizes of 2 $\mu$m for $L_{cat}$ and 0.01 for $\epsilon_{cat}$ and $\epsilon_{an}$, resulting in a total of 728 simulations.

Figure 7: Optimization of NMC/Gr full cells to maximize the cell-level specific capacity $Q_w$ at 1C discharge with the hybrid optimization scheme. The gradient-based optimization algorithm (trajectory indicated with the solid line) successfully converges to the optimum predicted by the URCs model (red circle), originating from an arbitrary initial guess (triangle symbol) of $L_{cat} = 150\,\mu$m, $\epsilon_{cat} = 0.35$, and $\epsilon_{an} = 0.35$. A local grid-space search is performed around the URCs-estimated optimum (search region indicated with blue boxes) using P2D simulations. The local optimum predicted by P2D (black square) is subsequently verified to be the global optimum. a, b, and c illustrate the hybrid optimization scheme on the $L_{cat}$–$\epsilon_{cat}$, $L_{cat}$–$\epsilon_{an}$, and $\epsilon_{cat}$–$\epsilon_{an}$ planes, respectively. For each plot, the variable held constant maintains the same value as that of the P2D-generated global optimum.

The optimal configuration predicted by P2D is located at $(L_{cat}^{opt}, \epsilon_{cat}^{opt}, \epsilon_{an}^{opt}) = (82.5\,\mu\text{m}, 0.245, 0.29)$ with $Q_w^{opt} = 69.79$ mAh/g, which differs from the URCs optimum by 10.91%, 0.98%, 4.73% and 2.3%, respectively. The consistently good agreement between the URCs and P2D predictions, even as the complexity of the optimization problem increases, is the reason that we can limit the P2D-based search to a narrow region surrounding the URCs optimum in the parameter space, significantly reducing the computation cost without compromising the prediction accuracy.

3 Conclusion

In this study, we present a physics-based analytical model (the URCs model) for facile prediction of battery cell performance under the mixed kinetic control of electrolyte transport and solid-state diffusion. It is suitable for electrode materials, such as NMC, that remain a solid solution during lithium (de)intercalation. The URCs model simplifies the porous electrode theory by assuming pseudo-steady-state electrolyte transport and a uniform reaction distribution within the salt penetration zone. As a result, analytical expressions for the salt concentration and electrical potential distributions in the electrolyte can be derived, which are coupled to the analytical solution of the lithium solid diffusion equation in electrode particles to determine the DoD as a function of the cell voltage. The URCs model exhibits very good agreement with the P2D simulations in predicting the discharge capacity, voltage curve and specific energy of battery cells, though a relatively large discrepancy is observed for NMC/Gr full cells at high rates because of the hybrid reaction behavior of the graphite anode. The power of the model in battery cell optimization is demonstrated. Its high computational speed enables the model to quickly scan the design variable space to reveal the effects of the design variables on performance, which is very time-consuming for P2D simulations. While P2D cannot reliably find the global optimum when used in gradient-based optimization, the analytical nature of the URCs model makes it highly compatible with such optimization methods. We suggest that the URCs model can be combined with P2D simulations in a hybrid scheme to optimize the battery structure with both efficiency and accuracy. Overall, the light weight and versatility of the URCs model make it ready for battery design tasks and real-time onboard applications.

Methods

P2D Simulation

The P2D simulations based on the porous electrode theory have been extensively discussed in various literature sources.15–18 In this context, we offer a summary of the equations used in the P2D simulations, several of which have been previously presented in the derivation of the URCs model. Mass balance and current continuity in the electrolyte are described by Equations 1 and 2, respectively. Current continuity in the solid phase is given by

$$\nabla\cdot\vec{i}_s(x) = F a_p(x)\, j_{in}(x) \tag{27}$$

The intercalation flux at the solid particle surface $j_{in}$ is governed by the Butler-Volmer equation:

$$j_{in} = \frac{i_0}{F}\left[\exp\left(-\frac{\alpha F \eta}{RT}\right) - \exp\left(\frac{\alpha F \eta}{RT}\right)\right] \tag{28}$$

where $\alpha$ is the charge transfer coefficient. The expressions for the surface overpotential $\eta$ and the exchange current density $i_0$ are provided in Equations 16 and 18, respectively. Within the solid phase, the current density and electrical potential are related by the electrical conductivity $\sigma$:

$$\vec{i}_s(x) = -\sigma\nabla\Phi_s \tag{29}$$

The relationship between the effective current density and potential in the electrolyte is expressed by Equation 12. Solid-state diffusion of Li in the active material is modeled as radial diffusion in spherical particles, as described by Equation 20.
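As a quick consistency check, Eq. 28 with $\alpha = 1/2$ is algebraically identical to the $\sinh^{-1}$ form of Eq. 17. A few lines of code (illustrative values only) make the equivalence easy to verify numerically:

```python
# Numeric check that Eq. 28 with alpha = 0.5 matches the sinh form of Eq. 17.
import numpy as np

F, R, T = 96485.0, 8.314, 298.0

def j_in_bv(eta, i0, alpha=0.5):
    """Butler-Volmer flux, Eq. 28 [mol/m^2/s]."""
    return i0 / F * (np.exp(-alpha*F*eta/(R*T)) - np.exp(alpha*F*eta/(R*T)))

def eta_from_flux(j_in, i0):
    """Overpotential from Eq. 17 [V]."""
    return -2*R*T/F * np.arcsinh(F*j_in / (2*i0))

i0, eta = 2.0, -0.05   # illustrative values
print(np.isclose(eta_from_flux(j_in_bv(eta, i0), i0), eta))   # True
```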
The P2D simulations that generated the figure data were implemented in COMSOL Multiphysics version 5.6. PyBaMM was also used for benchmarking purposes.39 Values for the cell parameters used in the simulations are listed in Table S1.

Data availability

The MATLAB code that implements the URCs model is available at https://github.com/mingtang01/URCsBatteryModel

List of Symbols Used

- $a_{cat}/a_{an}$ — Volumetric surface area of cathode/anode [m⁻¹]
- $C$ — C rate
- $C_{crit}$ — Critical C rate
- $c_l$ — Salt concentration in electrolyte [mol·m⁻³]
- $c_{l0}$ — Initial salt concentration in electrolyte [mol·m⁻³]
- $c_s$ — Li concentration in electrode particles [mol·m⁻³]
- $\bar{c}_s$ — Average Li concentration in electrode particles [mol·m⁻³]
- $c_{s0}$ (cat/an) — Initial Li concentration in electrode (cathode/anode) particles [mol·m⁻³]
- $c_{smax}$ (cat/an) — Maximum Li concentration in electrode (cathode/anode) particles [mol·m⁻³]
- $c_{ss}$ (cat/an) — Li concentration on electrode (cathode/anode) particle surface [mol·m⁻³]
- $\Delta c_s$ (cat/an) — Amount of Li (de)intercalated into an electrode (cathode/anode) particle
- $D_{amb}$ — Ambipolar diffusivity of electrolyte [m²·s⁻¹]
- $D_s$ (cat/an) — Li diffusivity in active material [m²·s⁻¹]
- DoD (cat/an) — Depth of discharge (cathode/anode)
- $DoD_f$ — Final depth of discharge
- $E_A$ — Areal discharge energy [Wh/m²]
- $E_w$ — Cell-level specific energy [Wh/kg]
- $F$ — Faraday constant (96485 C·mol⁻¹)
- $I$ — Applied current density [A·m⁻²]
- $\vec{i}$ — Current density in liquid phase [A·m⁻²]
- $\vec{i}_s$ — Current density in solid phase [A·m⁻²]
- $i_0$ — Exchange current density of active material [A·m⁻²]
- $i_0^{Li}$ — Exchange current density on Li anode [A·m⁻²]
- $j_{in}$ — Reaction flux on active material surface [mol·m⁻²·s⁻¹]
- $k_0$ — Reaction rate constant [mol·m⁻²·s⁻¹·(mol·m⁻³)^-1.5]
- $L_{cat}/L_{sep}/L_{an}$ — Cathode/separator/anode thickness [m]
- $L_{PZ}$ — Salt penetration zone thickness [m]
- $Q_0$ — Areal capacity of the cathode [mAh/m²]
- $Q_w$ — Cell-level specific capacity [mAh/g]
- $R$ — Gas constant (8.314 J·mol⁻¹·K⁻¹)
- $r$ — Spatial coordinate in electrode particle radial direction [m]
- $r_{cat}/r_{an}$ — Cathode/anode particle radius [m]
- SoC — State of charge
- $T$ — Temperature [298 K]
- $t$ — Time [s]
- $t_+$ — Cation transference number in electrolyte
- $U_{eq}$ (cat/an) — Equilibrium open-circuit potential of active material (cathode/anode) [V]
- $X/x$ — Spatial coordinate in electrode thickness direction [m]
- $1 + \partial\ln f_\pm/\partial\ln c_l$ — Thermodynamic factor
- $\alpha$ — Charge transfer coefficient
- $\epsilon_{cat}/\epsilon_{sep}/\epsilon_{an}$ — Cathode/separator/anode porosity
- $\eta$ (cat/an/Li) — (Cathode/anode/Li metal) overpotential [V]
- $\kappa$ — Ionic conductivity [S·m⁻¹]
- $\nu_{cat}/\nu_{an}$ — Volumetric fraction of active material in cathode/anode
- $\sigma$ — Electrical conductivity [S·m⁻¹]
- $\tau_{cat}/\tau_{sep}/\tau_{an}$ — Cathode/separator/anode tortuosity
- $\Phi_l/\Phi_s$ (cat/an) — Electrolyte/solid phase (cathode/anode) potential [V]

Acknowledgments

H.W. is supported by DOE under project number DE-EE0006250 and Shell International Exploration and Production Inc. F.W. and M.T. acknowledge support from the DOE Office of Basic Energy Sciences under project number DE-SC0019111. Simulations were partially performed on computing clusters at the Texas Advanced Computing Center (TACC) at the University of Texas.

Author Contributions

M.T. conceived and supervised the research. H.W., F.W. and M.T. performed theoretical analysis and numerical calculations, discussed the results and wrote the manuscript.

Conflicts of Interest

The authors declare no competing financial interest."
+ }, + { + "url": "http://arxiv.org/abs/2404.12314v1", + "title": "Guided Discrete Diffusion for Electronic Health Record Generation", + "abstract": "Electronic health records (EHRs) are a pivotal data source that enables\nnumerous applications in computational medicine, e.g., disease progression\nprediction, clinical trial design, and health economics and outcomes research.\nDespite wide usability, their sensitive nature raises privacy and\nconfidentiality concerns, which limit potential use cases. To tackle these\nchallenges, we explore the use of generative models to synthesize artificial,\nyet realistic EHRs. While diffusion-based methods have recently demonstrated\nstate-of-the-art performance in generating other data modalities and overcome\nthe training instability and mode collapse issues that plague previous\nGAN-based approaches, their applications in EHR generation remain\nunderexplored. The discrete nature of tabular medical code data in EHRs poses\nchallenges for high-quality data generation, especially for continuous\ndiffusion models. To this end, we introduce a novel tabular EHR generation\nmethod, EHR-D3PM, which enables both unconditional and conditional generation\nusing the discrete diffusion model. Our experiments demonstrate that EHR-D3PM\nsignificantly outperforms existing generative baselines on comprehensive\nfidelity and utility metrics while maintaining lower membership vulnerability\nrisks. Furthermore, we show EHR-D3PM is effective as a data augmentation method\nand enhances performance on downstream tasks when combined with real data.", + "authors": "Zixiang Chen, Jun Han, Yongqian Li, Yiwen Kou, Eran Halperin, Robert E. Tillman, Quanquan Gu", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Electronic health records (EHRs) are a rich and comprehensive data source, enabling numerous applications in computational medicine, including the development of models for disease progression prediction and clinical event prediction (Li et al., 2020; Rajkomar et al., 2018), clinical trial design (Bartlett et al., 2019), and health economics and outcomes research (Padula et al., 2022). In particular, many existing disease prediction models primarily utilize tabular formats, often transforming longitudinal EHR data into binary or categorical forms, rather than employing time-series forecasting methods (Lee et al., 2022; Huang et al., 2021; Rao et al., 2023; Debal and Sitote, 2022). However, the sensitive nature of EHRs, which include confidential medical data, poses challenges for their broad use due to privacy concerns and patient confidentiality requirements (Hodge Jr et al., 1999). In addition to these concerns, data scarcity also restricts their potential use in applications for rare medical conditions.
To address these challenges, we consider using generative models to synthesize artificial, but realistic EHRs, which has recently emerged as a crucial research area for advancing applications of machine learning to healthcare and other industries with privacy and data scarcity challenges. The primary goal of synthetic EHR generation is to generate data that is (i) indistinguishable from real data to an expert, but (ii) not attributable to any actual patients. Recent advancements in deep generative models, including Variational Autoencoders (VAE) (Vincent et al., 2008) and Generative Adversarial Networks (GAN) (Goodfellow et al., 2014), have demonstrated significant promise in generating realistic synthetic EHR data (Biswal et al., 2021; Choi et al., 2017a). In particular, GAN-based EHR generation has emerged as the predominant approach (Choi et al., 2017a; Zhang et al., 2020; Torfi and Fox, 2020a) and achieved state-of-the-art performance in terms of quality and privacy preservation. However, the unstable training process of GAN-based methods can lead to mode collapse, raising concerns about their widespread application. Recently, diffusion-based generative models, initially introduced by Sohl-Dickstein et al. (2015), have demonstrated impressive capabilities in generating high-quality samples in various domains, including images (Ho et al., 2020; Song and Ermon, 2020), audio (Chen et al., 2020; Kong et al., 2020), and text (Hoogeboom et al., 2021b; Austin et al., 2021; Chen et al., 2023). A diffusion model consists of a forward process, which gradually transforms training data into pure noise, and a reverse sampling process that reconstructs data from noise using a learned network. Compared to GANs, their training is more stable as it only involves maximizing the log-likelihood of a single neural network. Due to the superior performance of diffusion models, recent methods have explored their application in generating categorical EHR data (Yuan et al., 2023; Ceritli et al., 2023). While these approaches demonstrate promising performance, their improvement over previous GAN-based methods is varied. In particular, they struggle to generate EHR records with rare medical conditions at rates consistent with the occurrence of such conditions in real-world data. Furthermore, existing approaches offer limited support for conditional generation, which is crucial for many downstream tasks such as disease classification. In this paper, we propose a novel EHR generation method that utilizes discrete diffusion (Sohl-Dickstein et al., 2015; Hoogeboom et al., 2021b; Austin et al., 2021; Chen et al., 2023), a type of diffusion process tailored for discrete data sampling, as well as a flexible conditional sampling method that does not require additional model training. Our contributions are summarized as follows:
• We introduce a Discrete Denoising Diffusion model specifically tailored for the generation of tabular medical codes in EHRs, dubbed EHR-D3PM. Our method incorporates an architecture that effectively captures feature correlations, enhancing the generation process and achieving state-of-the-art performance. Notably, EHR-D3PM excels in generating instances of rare conditions, an aspect where existing methods often face challenges.
• We further extend EHR-D3PM to conditional generation, specifically tailored for generating EHR samples related to particular medical conditions.
Given the unique requirements of this task and the discrete nature of EHR data, we have custom-designed the energy function and applied energy-guided Langevin dynamics at the latent layer of the predictor network to achieve this goal.
• We investigate the effectiveness of EHR-D3PM as a data augmentation method in downstream tasks. We show that synthetic EHR data generated by EHR-D3PM yields comparable performance to that of real data in terms of AUPRC and AUROC when used to train predictive models, and that, when combined with real data, EHR-D3PM can enhance the performance of predictive models.
Notation. We use the symbol q to denote the real distribution in a diffusion process, while $p_\theta$ represents the distribution parameterized by the neural network during sampling. The Bernoulli distribution is denoted by Bernoulli(·), with its success probability inside the parentheses. We further use Cat(p) to denote a categorical distribution over a one-hot row vector with probabilities given by the row vector p.", "main_content": "EHR Synthesis. Various methods have been developed for generating synthetic EHR data. Buczak et al. (2010) proposed an early data-driven approach for creating synthetic EHRs, but their approach offers limited flexibility and raises privacy concerns. Recently, GANs have become prominent in EHR generation, including medGAN (Choi et al., 2017b), medBGAN (Baowaly et al., 2018), EHRWGAN (Zhang et al., 2019), and CorGAN (Torfi and Fox, 2020b). GAN-based methods offer significant improvement in the quality of synthetic EHRs, but often face issues related to training instability and mode collapse (Thanh-Tung et al., 2018), restricting their wide use and the diversity of generated data. To address this, other methods, including variational auto-encoders (Biswal et al., 2020) and language models (Wang and Sun, 2022), have been explored. Very recently, MedDiff (He et al., 2023) and EHRdiff (Yuan et al., 2023) considered using diffusion models and proposed sampling techniques for high-quality EHR generation. Ceritli et al. (2023) further extended the diffusion model to mixed-type EHRs. In this paper, we focus on developing a guided discrete diffusion model specifically designed for generating tabular medical codes in EHRs and improving the generation of codes for rare conditions, with which previous methods struggle. Discrete Diffusion Models. The study of discrete diffusion models was pioneered by Sohl-Dickstein et al. (2015), which explored diffusion processes over binary random variables. The approach was further developed by Ho et al. (2020); Song et al. (2020), incorporating categorical random variables using transition matrices with uniform probabilities. Subsequently, Austin et al. (2021) introduced a generalized framework named Discrete Denoising Diffusion Probabilistic Models (D3PMs) for categorical random variables, effectively combining discrete diffusion models with Masked Language Models (MLMs). Recent advancements in this field include the introduction of editing-based operations (Jolicoeur-Martineau et al., 2021; Reid et al., 2022), auto-regressive diffusion models (Hoogeboom et al., 2021a; Ye et al., 2023), a continuous-time structure (Campbell et al., 2022), strides in generation acceleration (Chen et al., 2023), and the application of neural network analogs for learning purposes (Sun et al., 2022). In this paper, we focus on D3PMs with a multinomial distribution. 3 Background In this section, we provide background on diffusion models. Diffusion Model.
Given $x_0$ drawn from a target data distribution $q_{\text{data}}(\cdot)$, the forward process is a Markov process that maps the clean data $x_0$ to a noisy sample from a prior distribution $q_{\text{noise}}(\cdot)$. The process $x_0 \to x_T$ is composed of the conditional distributions $q(x_t|x_{t-1}, x_0)$, where

$q(x_{1:T}|x_0) = \prod_{t=1}^{T} q(x_t|x_{t-1}, x_0)$. (1)

By Bayes' rule, (1) induces a reverse process $x_T \to x_0$ that can convert samples from the prior $q_{\text{noise}}$ into samples from the target distribution $q_{\text{data}}$,

$q(x_{t-1}|x_t, x_0) = \frac{q(x_t|x_{t-1}, x_0)\, q(x_{t-1}|x_0)}{q(x_t|x_0)}$. (2)

After training a diffusion model, the reverse process can be used for synthetic data generation by sampling from the noise distribution $q_{\text{noise}}$ and repeatedly applying a learnt predictor (neural network) $p_\theta(\cdot|x_t)$ parameterized by $\theta$:

$p_\theta(x_T) = q_{\text{noise}}(x_T)$, $\quad p_\theta(x_{t-1}|x_t) = \int q(x_{t-1}|x_t, \hat{x}_0)\, p_\theta(\hat{x}_0|x_t)\, d\hat{x}_0$. (3)

Training Objective. The neural network $p_\theta(\cdot|x_t)$ in (3) that predicts $\hat{x}_0$ is trained by maximizing the evidence lower bound (ELBO) (Sohl-Dickstein et al., 2015),

$\log p_\theta(x_0) \ge \mathbb{E}_{q(x_{1:T}|x_0)}\!\left[\log \frac{p(x_{0:T})}{q(x_{1:T}|x_0)}\right] = \mathbb{E}_{q(x_1|x_0)}[\log p_\theta(x_0|x_1)] - \sum_{t=2}^{T} \mathbb{E}_{q(x_t|x_0)}\big[\mathrm{KL}\big(q(x_{t-1}|x_t, x_0)\,\|\,p_\theta(x_{t-1}|x_t)\big)\big] - \mathbb{E}_{q(x_T|x_0)}\mathrm{KL}\big(q(x_T|x_0)\,\|\,p_\theta(x_T)\big)$.

Here KL denotes the Kullback-Leibler divergence, and the last term $\mathbb{E}_{q(x_T|x_0)}\mathrm{KL}(q(x_T|x_0)\,\|\,q_{\text{noise}}(x_T))$ equals or approximately equals zero if the diffusion process q is properly designed. Different choices of the diffusion process (1) and (2) result in different sampling methods (3). There are two popular approaches to constructing a diffusion generative model, depending on the nature of the process. Gaussian Diffusion Process. The Gaussian diffusion process assumes a Gaussian noise distribution $q_{\text{noise}}$. In particular, the prior is chosen to be $q_{\text{noise}} = \mathcal{N}(0, I)$, and the forward process is characterized by $q(x_t|x_{t-1}, x_0) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I)$, where $\beta_t$ is the variance schedule determined by a pre-specified corruption schedule. The Gaussian diffusion process has achieved great success in continuous-valued applications like image generation (Ho et al., 2020; Song et al., 2020). Recently, it has been applied to tabular EHR data generation (He et al., 2023; Yuan et al., 2023). Discrete Diffusion Process. Discrete Denoising Diffusion Probabilistic Models (D3PMs) are designed to generate categorical data from a vocabulary $\{1, \ldots, K\}$, represented as a one-hot vector $x \in \{0, 1\}^K$. The noise follows a categorical distribution $q_{\text{noise}}$. The multinomial distribution (Hoogeboom et al., 2021b) is among the most effective noise distributions. In particular, $q_{\text{noise}}$ is chosen to be a uniform distribution over the one-hot basis of the vocabulary $\{e_1, \ldots, e_K\}$, and the forward process is characterized by $q(x_t|x_{t-1}, x_0) = \mathrm{Cat}\big(x_t;\, \beta_t x_{t-1} + (1-\beta_t)\, q_{\text{noise}}\big)$, where Cat is the categorical distribution and $\beta_t$ is determined by a pre-specified corruption schedule. Due to its discrete nature, D3PM is widely used to generate categorical data like text (Hoogeboom et al., 2021b; Austin et al., 2021) and categorical tabular data (Kotelnikov et al., 2023; Ceritli et al., 2023). This paper uses a D3PM with a multinomial noise distribution to generate tabular medical codes in EHRs. 4 Method In this section, we formalize the problem of tabular EHR data generation and provide the technical details of our method.
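To make the multinomial forward process concrete, the sketch below samples from the closed-form marginal q(x_t | x_0) implied by the transition above, using the paper's convention that beta_t is the probability of keeping the previous token (so the cumulative keep probability is the product of the betas). This is an illustrative sketch, not the authors' code.

import torch
import torch.nn.functional as F

def multinomial_forward_sample(x0, beta_bar_t, K):
    """Sample x_t ~ q(x_t | x_0) = Cat(beta_bar_t * x0 + (1 - beta_bar_t)/K),
    where beta_bar_t is the product of the per-step keep probabilities beta_s."""
    probs = beta_bar_t * x0 + (1.0 - beta_bar_t) / K
    idx = torch.distributions.Categorical(probs=probs).sample()
    return F.one_hot(idx, K).float()

# Example: one binary token (K = 2), as used for ICD-code presence later on.
x0 = torch.tensor([1.0, 0.0])  # one-hot clean token
xt = multinomial_forward_sample(x0, beta_bar_t=0.3, K=2)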
4.1 Problem Formulation We consider medical coding data in EHRs, such as ICD codes, which are standardized codes published by the World Health Organization that correspond to specific medical diagnoses and procedures (Slee, 1978). While we focus on ICD codes specifically, our approach can be used with other medical coding data, e.g., CPT, NDC and LOINC codes. For a given (usually high-dimensional) set $\Omega$ of ICD codes of interest, we encode the set as $N := |\Omega|$ categories $\{1, 2, \ldots, N\}$. A sample patient EHR x is then encoded as a sequence of N tokens $x = [x^{(1)}, \ldots, x^{(N)}]$, where each token $x^{(i)} \in \{0, 1\}^2$ is a one-hot vector. $x^{(i)}$ represents the occurrence of the i-th ICD code in the patient EHR. In particular, $x^{(i)} = [1, 0]$ represents occurrence of the code and $x^{(i)} = [0, 1]$ represents its absence. We assume a sufficiently large set of patient EHRs is available to train a multinomial diffusion model to generate artificial encoded patient EHR sequences $x'$. 4.2 Unconditional Generation In Section 3, we introduced multinomial diffusion with a single token, $x \in \mathbb{R}^K$. In the context of categorical EHRs, we aim to generate a sequence of N tokens with K = 2, denoted by $x = [x^{(1)}, \ldots, x^{(N)}]$. Therefore, we need to extend the terminology from Section 3. We define the sequence of tokens at the t-th time step as $x_t = [x_t^{(1)}, \ldots, x_t^{(N)}]$, where $x_t^{(i)}$ represents the i-th token at diffusion step t. Multinomial noise $q_{\text{noise}}$ is added to each token in the sequence independently during the diffusion process,

$q(x_t|x_{t-1}, x_0) = \prod_{i=1}^{N} \mathrm{Cat}\big(x_t^{(i)};\, \beta_t x_{t-1}^{(i)} + (1-\beta_t)\, q_{\text{noise}}\big)$.

The reverse sampling procedure uses the predictor $p_\theta(\cdot|x_t)$ with the following neural network architecture:

$z_{0,t} = x_t = [x_t^{(1)}, \ldots, x_t^{(N)}]$
$z''_{l,t} = [z_{l-1,t}^{(1)}, \ldots, z_{l-1,t}^{(N)}] + E_{\text{pos}} + E_{\text{time}}$
$z'_{l,t} = \mathrm{LinMSA}(\mathrm{LN}(z''_{l-1,t})) + z''_{l-1,t}$
$z_{L,t} = \mathrm{ParallelMLP}(\mathrm{LN}(z'_{L-1,t}))$
$\text{Output} = [\mathrm{softmax}(z_{L,t}^{(1)}), \ldots, \mathrm{softmax}(z_{L,t}^{(N)})]$ (4)

where $E_{\text{pos}}, E_{\text{time}} \in \mathbb{R}^{2\times D}$ represent the position embedding and time embedding respectively, the variable l indexes the layers belonging to the set $\{1, \ldots, L\}$, LinMSA refers to the linear-time multi-head self-attention block proposed by Wang et al. (2020), and LN is an abbreviation for layer normalization. For each dimension of $\hat{x}_0$, we apply a multilayer perceptron layer to obtain the logit, abbreviated as ParallelMLP. The softmax function transforms the last-layer latent variable $z_L^{(i)}$ into the conditional probability $p_\theta(\cdot|x_t)$, serving as the final softmax layer. The details of our denoise model are provided in Fig. 6 in Appendix A.2. 4.3 Conditional Generation with Classifier Guidance The goal of conditional generation is to generate $p_\theta(x|c)$ close to $q_{\text{data}}(x|c)$, where c denotes a context, such as the presence of a single ICD code or a group of ICD codes in a patient EHR. c is not available at training time, but we assume access to a classifier $p(c|x)$ that is close to the conditional distribution $q_{\text{data}}(c|x)$. Then, given an unconditional EHR generator $p_\theta(x)$ and classifier $p(c|x)$, we propose a training-free conditional generator as follows:

$p_\theta(x|c) \propto p_\theta(x) \cdot p(c|x)$. (5)

Since $q_{\text{data}}(x|c) \propto q_{\text{data}}(x) \cdot q_{\text{data}}(c|x)$, we can expect $p_\theta(x|c)$ in (5) to be close to $q_{\text{data}}(x|c)$ provided the unconditional generator $p_\theta(x)$ is close to $q_{\text{data}}(x)$ and the classifier $p(c|x)$ is close to $q_{\text{data}}(c|x)$.
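Before turning to the sampling procedure, here is a concrete illustration of the encoding from Section 4.1: a patient's set of ICD codes mapped to the N two-category one-hot tokens the model operates on. The three-code vocabulary is hypothetical.

import torch

def encode_ehr(patient_codes, code_index):
    """Encode a patient's ICD code set as N two-category one-hot tokens:
    [1, 0] = code present, [0, 1] = code absent (Section 4.1)."""
    x = torch.zeros(len(code_index), 2)
    x[:, 1] = 1.0                        # default: every code absent
    for code in patient_codes:
        x[code_index[code]] = torch.tensor([1.0, 0.0])  # mark code present
    return x                             # shape (N, 2)

# Hypothetical three-code vocabulary of truncated ICD-9 codes.
code_index = {"250": 0, "496": 1, "585": 2}
x = encode_ehr({"250", "585"}, code_index)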
To sample from (5), we apply the following guided multinomial diffusion procedure:

$p_\theta(x_{t-1}|x_t, c) = \sum_{\hat{x}_0} q(x_{t-1}|\hat{x}_0, x_t)\, p_\theta(\hat{x}_0|x_t, c)$,

where $\hat{x}_0$ is the latent variable that predicts $x_0$. Since $\hat{x}_0$ lies in a discrete space, we cannot directly use Langevin dynamics in the space of $\hat{x}_0$. However, the last-layer latent variable $z_{L,t}$ in (4) (before the softmax layer) lies in a continuous space, and we have

$p_\theta(\hat{x}_0|x_t, c) = \int p_\theta(\hat{x}_0|z_{L,t})\, p_\theta(z_{L,t}|x_t, c)\, dz_{L,t}$.

Therefore, we can use the plug-and-play method (Dathathri et al., 2019) in the latent space $z_L$, which has recently been employed in text generation (Dathathri et al., 2019) and protein design (Gruver et al., 2023). In particular, we introduce a modified latent variable $y^{(k)}$ for $z_{L,t}$, initialized as $y^{(0)} \leftarrow z_{L,t}$. We then iteratively apply the following Langevin-dynamics update:

$y^{(k+1)} \leftarrow y^{(k)} - \eta\, \nabla_{y^{(k)}}\big[D_{\mathrm{KL}}(y^{(k)}) - V_\theta(y^{(k)})\big] + \sqrt{2\eta\tau}\,\epsilon$,

where the energy function is $V_\theta(y^{(k)}) = \log p(c|y^{(k)}) = \log\big(\sum_{\hat{x}_0} p_\theta(\hat{x}_0|y^{(k)})\, p(c|\hat{x}_0)\big)$ and $D_{\mathrm{KL}}(y^{(k)}) = \lambda\,\mathrm{KL}\big(p_\theta(\hat{x}_0|y^{(k)})\,\|\,p_\theta(\hat{x}_0|y^{(0)})\big)$ is the Kullback-Leibler (KL) divergence regularizing the guided Markov transition. The gradient of the energy term $\nabla_{y^{(k)}} V_\theta$ drives the hidden state $y^{(k)}$ towards high probability under $p(c|y^{(k)})$. The gradient of the regularization term $\nabla_{y^{(k)}} D_{\mathrm{KL}}$ ensures the guided transition distribution still maximizes the likelihood of the diffusion model. For a more detailed discussion, see Appendix C.

[Figure 1 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; Spearman corr = 0.82, 0.83, 0.75, 0.94 (full range, top row) and 0.79, 0.80, 0.71, 0.93 (low-prevalence, bottom row).] Figure 1: Comparison of prevalence in synthetic data and real data (MIMIC). The second row represents the prevalence of the first row in the low-prevalence regime. The prevalence is computed on 10K samples as the MIMIC dataset is relatively small. The dashed diagonal lines represent the perfect matching of code prevalence between synthetic data and real EHR data. Pearson correlations are very high for all methods and are thus not used as a metric to compare the different methods.
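Returning to the guided sampler above, the following is a minimal sketch of one Langevin update on the last-layer latent. Here `predictor` (latent to logits of p_theta(x0_hat | .)) and `classifier_logprob` (latent to log p(c | .)) are stand-in callables for the paper's networks, shown only to illustrate the update rule.

import torch

def guided_langevin_step(y, y0, predictor, classifier_logprob,
                         eta=0.1, tau=1.0, lam=0.01):
    """One plug-and-play Langevin update on the last-layer latent y.
    `predictor` and `classifier_logprob` are stand-ins for the paper's networks."""
    y = y.detach().requires_grad_(True)
    # KL regularizer toward the unguided prediction at the initial latent y0.
    kl = lam * torch.distributions.kl_divergence(
        torch.distributions.Categorical(logits=predictor(y)),
        torch.distributions.Categorical(logits=predictor(y0)),
    ).sum()
    energy = classifier_logprob(y)       # V_theta(y) = log p(c | y)
    grad = torch.autograd.grad(kl - energy, y)[0]
    noise = torch.randn_like(y)
    return (y - eta * grad + (2.0 * eta * tau) ** 0.5 * noise).detach()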
[Figure 2 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; density of feature number per record, real vs. synthetic.] Figure 2: Density comparison of per-record feature number for synthetic and real data for the MIMIC dataset. The number of features per record is the sum of ICD codes present in each sample. The number of bins is 40, and the range of feature number values is (0, 40). 5 Experiments In this section, we apply our method to three EHR datasets, including the widely used public MIMIC-III dataset and two larger private datasets from a large healthcare institution (to comply with the double-blind submission policy, we withhold the name of the institution providing the datasets; should the paper be accepted, we will provide these details). We compare our method to popular and state-of-the-art EHR generative models in terms of fidelity, utility and privacy. [Figure 3 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; Spearman corr = 0.91, 0.98, 0.85, 0.99 (full range, top row) and 0.87, 0.91, 0.79, 0.97 (low-prevalence, bottom row).] Figure 3: Comparison of prevalence in synthetic data and real data D1. The second row represents the prevalence of the first row in the low-prevalence regime. The prevalence is computed on 200K samples. The dashed diagonal lines represent the perfect matching of code prevalence between synthetic data and real EHR data. Spearman correlations between synthetic data and real data are reported. 5.1 Experiment Setup Datasets. Public Datasets: MIMIC-III (Johnson et al., 2016) includes deidentified patient EHRs from hospital stays. For each patient's EHR, we extract the diagnosis and procedure ICD-9 codes and truncate the codes to the first three digits. This dataset includes a patient population of size 46,520. Private Datasets: We consider two private datasets of patient EHRs from a large healthcare institution. For each, we extract the diagnosis and procedure ICD-10 codes and truncate the codes to the first three digits. The first dataset, denoted by D1, includes a patient population of size 1,670,347 and has sparse binary features; the second dataset, denoted by D2, includes a patient population of size 1,859,536 and has relatively denser binary features from a different corpus. Diseases of Interest.
To investigate the utility of our proposed method, we consider using the generated synthetic EHR data to learn classifiers to predict six chronic diseases: type-II diabetes, chronic obstructive pulmonary disease (COPD), chronic kidney disease (CKD), asthma, hypertensive heart disease (HTN-Heart) and osteoarthritis. The prevalence of these diseases in each dataset is provided in Table 4 in Appendix A. 5.2 Baselines Med-WGAN. A number of GAN models (Choi et al., 2017b; Baowaly et al., 2018; Torfi and Fox, 2020a) have been proposed for realistic EHR generation. Torfi and Fox (2020a) utilize convolutional neural networks, which are less applicable to most EHR generation since correlated ICD codes are not in neighboring dimensions. Med-WGAN is selected as a baseline since it incorporates stable training techniques (Gulrajani et al., 2017; Hjelm et al., 2017) and has relatively robust performance. EMR-WGAN (Zhang et al., 2019). Different from other GAN models, which use an autoencoder to first transform the raw EHR data into a low-dimensional continuous vector, EMR-WGAN is trained directly on discrete EHR data. EHRDiff (Yuan et al., 2023) is the only diffusion model directly designed for synthesizing tabular EHRs with an open-source codebase. As the code of the other diffusion models for tabular EHRs (He et al., 2023; Ceritli et al., 2023) is not available, we select EHRDiff as a baseline. [Figure 4 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; density of feature number per record, real vs. synthetic.] Figure 4: Density comparison of per-record feature number between synthetic data and real data D1. The number of features per record is computed by summing the ICD codes present in each sample. The number of bins is 65 and the range of feature number values is (0, 65). 5.3 Evaluation Metrics Dimension-wise Prevalence. We compute dimension-wise prevalence by taking the mean of the data in each dimension. Dimension-wise prevalence is a vector with the same dimension as the input data, and it captures the marginal feature distribution of the data. We compute the Spearman correlation between prevalence in the synthetic data and prevalence in the real data. Correlation Matrix Distance (CMD) measures the difference between the covariance matrix of the synthetic data and the covariance matrix of the real data. We first compute the empirical covariance matrices of the synthetic data and real data respectively and take the difference between these two matrices. Then we calculate the Frobenius norm of the difference matrix as the distributional distance. Maximum Mean Discrepancy (MMD) is one of the most common metrics to measure the difference between two distributions. We compute the MMD between a set of synthetic data and a set of real test data, and employ a mixture of kernels (Li et al., 2017) to estimate MMD and improve its robustness. The detailed formula is given in Eq. (6) in Appendix A.4. Downstream Prediction. To evaluate the utility of the generated synthetic data, we evaluate the accuracy of classifiers trained to predict the diseases of interest mentioned above using synthetic data.
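The prevalence and CMD metrics just defined reduce to a few lines of NumPy/SciPy. The sketch below assumes binary data matrices of shape (num_records, N) and is an illustration rather than the authors' evaluation code.

import numpy as np
from scipy.stats import spearmanr

def prevalence_spearman(real, synth):
    """Spearman correlation between dimension-wise prevalences
    (the column means of the binary data matrices)."""
    return spearmanr(real.mean(axis=0), synth.mean(axis=0)).correlation

def cmd(real, synth):
    """Correlation Matrix Distance: Frobenius norm of the difference
    between the empirical covariance matrices of synthetic and real data."""
    diff = np.cov(synth, rowvar=False) - np.cov(real, rowvar=False)
    return np.linalg.norm(diff, ord="fro")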
We train the classifiers to predict the ICD code that corresponds to the disease of interest using all other available ICD codes as features. We train the classification model using synthetic data and evaluate its performance on real test data. We adopt the state-of-the-art robust classification model for tabular data given in (Ke et al., 2017). The most reliable classification model is one trained on real data; we use this as a benchmark to represent an upper bound for classification accuracy. Membership Inference Risk (MIR) evaluates the risk that an attacker can infer the real samples used for training the generative model given generated synthetic EHR data or the model parameters. We consider the MIR attack model (Duan et al., 2023) proposed for diffusion models with continuous data. Following Yan et al. (2022), we evaluate MIR on discrete data. For each EHR in a set of real training and evaluation data, we calculate the minimum L2 distance with respect to the synthetic EHR data. Real EHRs whose distance is smaller than a preset threshold are predicted to be training EHRs. We report the prediction F1 score to characterize each model's membership inference risk. 5.4 Experiment Results

Table 1: Synthetic data utility. Disease prediction from ICD codes using the real dataset D1. AUPRC and AUROC are reported; AUPR and AUC in the table are short for AUPRC and AUROC respectively. We use synthetic data of size 160K to train the classifier and 200K real test data samples to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals (CI). The values of the CI for all cases are between 0.001 and 0.003 and are therefore not shown in the table; they are provided in Appendix B.

             Diabetes      Asthma        COPD          CKD           HTN-Heart     Osteoarthritis
             AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC
Real Data    0.702  0.808  0.288  0.759  0.675  0.867  0.806  0.913  0.253  0.832  0.296  0.789
Med-WGAN     0.628  0.757  0.149  0.595  0.578  0.806  0.722  0.873  0.114  0.625  0.192  0.661
EMR-WGAN     0.656  0.770  0.193  0.642  0.603  0.815  0.753  0.885  0.151  0.686  0.219  0.689
EHRDiff      0.670  0.780  0.232  0.722  0.642  0.856  0.782  0.902  0.150  0.714  0.245  0.759
EHR-D3PM     0.693  0.801  0.263  0.748  0.655  0.860  0.796  0.908  0.229  0.821  0.278  0.782

Fidelity. We first evaluate the learnt distributions using the MIMIC-III dataset in Fig. 1, dataset D1 in Fig. 3 and dataset D2 in Fig. 7 in Appendix B.1. We compare prevalence in synthetic data with prevalence in real data for each dimension. In Fig. 1, Fig. 3 and Fig. 7, we can see that the prevalence for our method EHR-D3PM aligns best with the real data. EHR-D3PM consistently has the highest Spearman correlation. We further observe that Med-WGAN, EMR-WGAN and EHRDiff fail to provide an unbiased estimate of the distribution in the low-prevalence regime, which corresponds to rare conditions. This failure is mild when the dataset has dense features, as shown for D2 in Fig. 7, but is obvious when the dataset has sparse features, as shown for D1 in Fig. 3. Next we compare our method in terms of feature number per record, which is calculated by summing the ICD codes in each sample, using the MIMIC-III dataset in Fig. 2, dataset D1 in Fig. 4 and dataset D2 in Fig. 8 in Appendix B.1. We compare feature numbers for synthetic data with those of real data. As we see in Fig. 2, Fig. 4 and Fig. 8, Med-WGAN, EMR-WGAN and EHRDiff demonstrate poor performance in estimating the mode or the tail of the density.
When the datasets are large, our method closely matches the feature-number distribution of the real data. Finally, we compare the methods in terms of the CMD and MMD metrics. Table 2 shows that EHR-D3PM significantly outperforms all baselines, particularly on the larger datasets D1 and D2. This indicates our method learns much better pairwise correlations between different feature dimensions. Table 2 also shows that the distribution of synthetic data generated by our method has the least discrepancy with the real data distribution on all three datasets.

Table 2: Additional fidelity metrics (CMD and MMD) and privacy metric (MIR) on MIMIC, dataset D1 and dataset D2. 95% confidence intervals are provided in Table 10 and Table 12 in Appendix B.

             Fidelity CMD (↓)           Fidelity MMD (↓)        Privacy MIR (↓)
             MIMIC    D1       D2       MIMIC   D1      D2      MIMIC   D1      D2
Med-WGAN     27.540   18.107   28.942   0.078   0.075   0.086   0.440   0.339   0.398
EMR-WGAN     26.658   11.869   21.438   0.053   0.018   0.024   0.456   0.358   0.415
EHRDiff      25.447   23.208   18.941   0.009   0.023   0.046   0.445   0.353   0.421
EHR-D3PM     21.128   7.692    10.255   0.003   0.012   0.019   0.432   0.344   0.406

Utility. We now apply our method to disease classification downstream tasks. Since MIMIC-III contains a much smaller patient population than the private datasets, which may not provide a valid test benchmark for disease classification, we focus on the private datasets D1 and D2. In Table 4, we observe that the prevalence of most diseases is low in datasets D1 and D2. The training and test set sizes are 160K and 200K. From Table 1, the average increase in AUPRC and AUROC over the strongest baseline (EHRDiff) is 3.90% and 2.57% respectively for dataset D1. From Table 5, the average increase in AUPRC and AUROC over the strongest baseline (EHRDiff) is 3.22% and 3.12% respectively for dataset D2. Privacy. We evaluate the membership inference risk (MIR) of our method and the other baselines on each dataset. Table 2 shows that the MIR of our method is lower than that of the other baselines across all datasets, indicating our method has mild vulnerability to privacy risk compared to existing baselines. As there is a trade-off between privacy and fidelity, incorporating differential privacy to further reduce the privacy risk in diffusion models is an interesting direction, which is largely unexplored for discrete diffusion models. [Figure 5 panels: Diabetes, COPD, Asthma, Osteoarthritis; AUROC vs. augmented data size for Real Source, Uncond Sample, Real Sample, Guided Sample.] Figure 5: Synthetic data augmentation for disease classification from ICD codes based on dataset D2. The size of the real source data for training the LGBM classifier is 5000, as indicated by the dashed purple line. We augment the source training data with synthetic data to train the LGBM classifier. "Uncond Samples" stands for the synthetic data generated by our unconditional sampler. Guided samples are synthetic data generated by our proposed guided sampler for each disease. To minimize noise from evaluation, we adopt 200K real test data to evaluate all experiments and report test AUROC for comparison. 80% of the test data are bootstrapped 50 times to compute 95% CI, which is shown as a shaded region.
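As an illustration of the MIR evaluation described in Section 5.3, the sketch below predicts membership by thresholding the minimum L2 distance to the synthetic set and reports the F1 score. The brute-force pairwise computation assumes the arrays fit in memory and is for illustration only.

import numpy as np
from sklearn.metrics import f1_score

def membership_inference_f1(train_real, holdout_real, synth, threshold):
    """Predict a real record as a training member if its minimum L2 distance
    to any synthetic record falls below `threshold`; report the F1 score."""
    candidates = np.vstack([train_real, holdout_real])
    labels = np.concatenate([np.ones(len(train_real)), np.zeros(len(holdout_real))])
    # Minimum distance from each candidate record to the synthetic set.
    dists = np.sqrt(((candidates[:, None, :] - synth[None, :, :]) ** 2).sum(-1))
    preds = (dists.min(axis=1) < threshold).astype(int)
    return f1_score(labels, preds)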
5.5 Guided Generation In this section, we apply our guided sampling method to generate conditional samples for different disease conditions. For each condition, we apply our guided sampler and generate a set of synthetic data. Table 3 lists the prevalence of four diseases in the real data D2. As we can see, the prevalence of diabetes, COPD, asthma and osteoarthritis is very low. The prevalence in samples generated by our unconditional sampling method matches the prevalence of each disease in the real data, while synthetic samples generated by our guided sampler demonstrate a more balanced ratio.

Table 3: Prevalence of diseases in different sample groups. "Real" means the real dataset D2. "Uncond" is short for samples generated by our unconditional method. "Guided" stands for samples generated by our guided sampler.

         Diabetes  COPD   Asthma  Osteoarthritis
Real     0.068     0.017  0.079   0.034
Uncond   0.069     0.015  0.073   0.036
Guided   0.202     0.042  0.161   0.137

In the following, we utilize synthetic samples to augment the training dataset when training downstream disease classifiers. The size of the real source data for training classifiers is 5000. We augment the original training data with data from three different groups: real data, synthetic data generated by our unconditional sampling method, and synthetic data generated by our guided sampling method. We report AUROC in Fig. 5 and AUPRC in Fig. 9 in Appendix B.2 to evaluate accuracy. We can see that classifiers trained with synthetic data augmentation always improve on the vanilla baseline (the classifier trained with the original source data only). We also observe that data augmentation by guided sampling consistently outperforms data augmentation by unconditional sampling. Interestingly, data augmentation by guided sampling even achieves consistently higher AUROC than real data augmentation. We attribute this to the synthetic samples generated by guided sampling containing richer information for diseases of low prevalence. 6 Conclusion and Future Work In this paper, we introduced EHR-D3PM, a novel generative model for synthesizing realistic EHRs. Leveraging the latest advancements in discrete diffusion models, EHR-D3PM overcomes the challenges of GAN-based approaches and effectively generates high-quality tabular medical data. Compared with other diffusion-based approaches, EHR-D3PM enables high-quality conditional generation. Our experiments demonstrate that EHR-D3PM not only achieves state-of-the-art performance in fidelity, utility, and privacy metrics but also significantly improves downstream task performance through data augmentation. Further investigation of the vulnerability of diffusion-based generative models in EHR generation, particularly to Membership Inference Attacks (MIAs) (Shokri et al., 2017), is a promising future direction, as is providing formal privacy guarantees, e.g., by incorporating differential privacy, which remains largely unexplored in diffusion-based models for discrete data. A Experiment Details In the following Table 4, we present a concise summary of various diseases along with their corresponding International Classification of Diseases, Ninth Revision (ICD 9) codes. D1 and D2 represent real datasets 1 and 2, respectively. This table includes common conditions such as Type II Diabetes (T2D), Chronic Kidney Disease (CKD), Chronic Obstructive Pulmonary Disease (COPD), Asthma, Hypertension and Osteoarthritis.
Each disease is associated with specific ICD 9 codes that are used for clinical classification and diagnosis purposes. In this paper, we are interested in the diseases listed in Table 4.

Table 4: List of Diseases and Corresponding ICD 9 Codes.

Disease                                        ICD 9 Code   MIMIC    Dataset D1  Dataset D2
Diabetes                                       250.*        0.214    0.261       0.068
Chronic Kidney Disease (CKD)                   585.1–9      0.106    0.119       0.015
Chronic Obstructive Pulmonary Disease (COPD)   496          0.069    0.136       0.017
Asthma                                         493.20–22    0.051    0.085       0.079
Hypertension (HTN-Heart)                       402.*        0.001    0.028       0.006
Osteoarthritis                                 715.96       0.0197   0.061       0.034

A.1 Dataset Details MIMIC Dataset. The MIMIC-III dataset includes a patient population of 46,520. There are 651,047 positive codes within 64,314 hospital admission records (HADM IDs). We have implemented an 80/20 split for training and testing purposes. Specifically, this allocates 12,862 records for testing and the remaining 51,451 for training. The histograms in Fig. 2 indicate the density distribution of feature number per record. The dimension is N = 1042. Dataset D1. The first dataset, denoted by D1, includes a patient population of size 1,670,347. We split the whole dataset into 100K for validation, 200K for testing and the remaining 1,370,347 for training. The number of codes per patient is relatively small, as indicated by the histogram of feature number per record in Fig. 4. The dimension is N = 993. Dataset D2. The second dataset, denoted by D2, includes a patient population of size 1,859,536. We split the whole dataset into 100K for validation, 200K for testing and the remaining 1,559,536 for training. The number of codes per patient is relatively large, as indicated by the histogram of feature number per record in Fig. 8. Although dataset D2 has relatively denser features, the prevalence of the six chronic diseases we are interested in is quite low. The dimension is N = 993. A.2 Model Architecture Detail The denoise model in this paper has a uniform architecture, illustrated in Fig. 6. The architecture we propose is tailored for tabular EHRs, which are non-sequential data, whereas the architecture proposed for multinomial diffusion (Hoogeboom et al., 2021b) is designed for sequential data, where neighboring dimensions are semantically correlated. The tabular EHR datasets in our paper do not have this property. Therefore, we propose a novel transformer-based model for tabular EHRs. One bottleneck of transformer models is that the computational complexity of the attention module is quadratic in the dimension of the input data. We adopt an efficient block based on Wang et al. (2020), whose attention operation has linear complexity with respect to the dimension of the input data. [Figure 6 panels: (a) Denoise Model; (b) Transformer Block.] Figure 6: Architecture of our denoise model. (b) provides the detail of the transformer block, which has linear complexity with respect to the dimension of the input. Axial positional embedding is employed to encode the positional information. We apply a sinusoidal embedding to the time t to obtain the time embedding and then use a two-layer MLP to map the time embedding into a hidden state. In the first layer of the two-layer MLP, we use the Softplus activation function. We apply L such two-layer MLPs to produce the time-embedding hidden states fed into each transformer block, as indicated in (a). Positional embedding is added to the embedding of the discrete inputs. The input has dimension N and B denotes the batch size. For notational simplicity, we assume every dimension of the tabular data has K categories.
We use a one-hot representation and therefore the output of the denoise model has shape (B, N, K). The shapes of the intermediate layers are provided in (a). In (b), "Proj" denotes the projection operation proposed in Linformer (Wang et al., 2020), which induces the linear complexity of the attention module with respect to the input dimension N. The projection dimension is set to the default value 128 for all experiments in this paper. A.3 Hyper-parameters Hyper-parameters on the MIMIC dataset. Since the MIMIC-III dataset is relatively small, we use a relatively small model to train EHR-D3PM to avoid overfitting to the training data. The hidden dimension is 256. The number of multi-head attention heads is 8. The number of transformer layers is 5. The number of diffusion steps is 500. In the optimization phase, we adopt the AdamW optimizer, and the weight decay in AdamW is 1e-5. The learning rate is 1e-4 and the batch size is 256. The beta for the ExponentialLR learning rate schedule is 0.99. The number of training epochs is 100. It takes less than three hours to train this model on an A6000 GPU with 48 GB of memory. Hyper-parameters on datasets D1 and D2. The denoise models for datasets D1 and D2 are the same. As datasets D1 and D2 are large, we use a relatively large model. The number of multi-head attention heads is 8. The hidden dimension is 512. The number of transformer layers L is 8. The number of diffusion steps is 500. The optimization parameters for datasets D1 and D2 are also the same. In the optimization phase, we adopt the AdamW optimizer. The learning rate is 1e-4 and the batch size is 512. The weight decay in AdamW is 1e-5. The beta for the ExponentialLR learning rate schedule is 0.99. The number of training epochs is 40. It takes one and a half days to train one model on an A100 GPU with 80 GB of memory. Hyper-parameters of the baseline EHRDiff. For a fair comparison with the diffusion baseline EHRDiff, we use the same hyper-parameters as in our proposed diffusion model on all three datasets. The number of diffusion steps in EHRDiff is also 500 and the number of layers in EHRDiff is also 5. The other hyper-parameters use the default values in the GitHub implementation of EHRDiff. A.4 Evaluation Metrics MMD. The empirical MMD between two distributions P and Q is approximated by

$\mathrm{MMD}(P, Q) = \frac{1}{m}\sum_{\gamma=1}^{m} \widehat{\mathrm{MMD}}_{k_\gamma}(P, Q)$, (6)

where $k_\gamma$ is a kernel function and m is the number of kernels; $\widehat{\mathrm{MMD}}_{k_\gamma}(P, Q)$ is estimated from samples $\{x_i\}_{i=1}^n \sim P$ and $\{x'_i\}_{i=1}^n \sim Q$ as

$\widehat{\mathrm{MMD}}_{k_\gamma}(P, Q) = \frac{1}{n(n-1)}\Big[\sum_{i\neq j} k_\gamma(x_i, x_j) + \sum_{i\neq j} k_\gamma(x'_i, x'_j)\Big] - \frac{1}{n^2}\sum_{i,j} k_\gamma(x_i, x'_j)$.

In our evaluation, we use the Gaussian RBF kernel $k_\gamma(x, x') = \exp\big(-\frac{\|x - x'\|^2}{2h_\gamma^2}\big)$ with bandwidth $h_\gamma = \mathrm{Avg} \cdot 2^{(\gamma - m/2)}$, where Avg is the average pairwise L2 distance between all samples. We choose m = 5 and thus $\gamma \in \{1, 2, 3, 4, 5\}$. A.5 Hyper-parameters of Classifier Models on Downstream Tasks For the downstream tasks, we used a light gradient boosting decision tree model (LGBM) (Ke et al., 2017), as it had uniformly robust prediction performance on all downstream tasks. In all experiments, we set the hyper-parameters of LGBM as follows: n_estimators = 1000, learning_rate = 0.05, max_depth = 10, reg_alpha = 0.5, reg_lambda = 0.5, scale_pos_weight = 1, min_data_in_bin = 128. We also experimented with various other hyper-parameter settings, which lead to the same conclusions as reported in this paper.
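For reference, the following is a direct transcription of Eq. (6) with the RBF kernel mixture above. It assumes equal sample sizes for the two sets and is a sketch rather than the authors' implementation.

import torch

def mmd_mixture_rbf(x, y, m=5):
    """Unbiased MMD estimate from Eq. (6), averaged over m RBF kernels with
    bandwidths h_gamma = Avg * 2**(gamma - m/2); assumes x and y both contain
    n samples."""
    n = x.shape[0]
    z = torch.cat([x, y], dim=0)
    avg = torch.cdist(z, z).mean()  # average pairwise L2 distance, Avg
    total = 0.0
    for gamma in range(1, m + 1):
        h = avg * 2.0 ** (gamma - m / 2)
        k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * h ** 2))
        kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
        total = total + (
            (kxx.sum() - kxx.diagonal().sum()) / (n * (n - 1))
            + (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
            - 2.0 * kxy.mean()
        )
    return total / m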
B Additional Experiments Due to space limits, we defer several experimental results to the appendix. B.1 Additional experiments on unconditional generation Fidelity. Fig. 7 provides an additional comparison of marginal distribution matching on dataset D2. Since dataset D2 has relatively denser features, the degradation of the baselines in the low-prevalence regime is less severe than on dataset D1. From the Spearman correlation in the low-prevalence regime, we can still see that our method significantly outperforms the baselines. Based on the results in Fig. 1, Fig. 3 and Fig. 7, we consistently observe that synthetic data generated by EHRDiff fails to capture the information in the low-prevalence regime. One likely reason is that EHRDiff is built on a diffusion formulation designed for continuous distributions, which cannot be readily applied to discrete data generation, particularly in the low-prevalence regime. From Fig. 8, we can see that the histogram of feature number per record for synthetic data generated by our method matches that of the real data almost perfectly. Utility. We also apply our method to downstream prediction tasks on dataset D2, where the prevalence of the six chronic diseases is much lower. From Fig. 5, we can see that the accuracy of our prediction is still close to that of classifier models trained on real data, which serves as the upper-bound baseline, while the other baselines show a much larger performance gap relative to this ideal classifier. In particular, on rare diseases such as hypertensive heart disease, the classifier trained on synthetic data from our model shows an 8% absolute improvement in AUPRC and AUROC over the strongest baseline on both D1 and D2. From the confidence intervals provided in Tables 6, 7, 8 and 9, we confirm that these improvements over the baselines are statistically significant.

Table 5: Synthetic data utility. Disease prediction from ICD codes on real dataset D2. AUPRC and AUROC are reported; AUPR and AUC in the table are short for AUPRC and AUROC respectively. We use synthetic data of size 160K to train the classifier and 200K real test data to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals (CI). The values of the CI for all cases are between 0.001 and 0.011.
             T2D           Asthma        COPD          CKD           HTN-Heart     Osteoarthritis
             AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC    AUPR   AUC
Real data    0.834  0.955  0.581  0.853  0.622  0.951  0.733  0.944  0.278  0.926  0.373  0.893
Med-WGAN     0.725  0.924  0.496  0.819  0.203  0.853  0.166  0.835  0.008  0.500  0.223  0.820
EMR-WGAN     0.734  0.918  0.431  0.747  0.402  0.888  0.628  0.907  0.134  0.844  0.210  0.753
EHRDiff      0.807  0.950  0.549  0.843  0.548  0.936  0.690  0.916  0.141  0.822  0.319  0.875
EHR-D3PM     0.821  0.952  0.572  0.853  0.607  0.947  0.714  0.944  0.226  0.911  0.348  0.889

[Figure 7 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; Spearman corr = 0.96, 0.99, 0.95, 0.99 (full range, top row) and 0.95, 0.98, 0.94, 0.98 (low-prevalence, bottom row).] Figure 7: Comparison between the prevalence of synthetic data and the prevalence of real dataset D2. This measures the accuracy of the marginal distribution for each ICD code on synthetic samples. The second row represents the prevalence of the first row in the low-prevalence regime. The prevalence is computed on 200K samples. The dashed diagonal lines represent the perfect matching of code prevalence between synthetic data and real EHR data. The Pearson correlations for the four methods are all greater than 0.99 and are therefore not used as a metric to evaluate the different methods. [Figure 8 panels: Med-WGAN, EMR-WGAN, EHRDiff, EHR-D3PM; density of feature number per record, real vs. synthetic.] Figure 8: Density comparison of per-record feature number between synthetic data and real dataset D2. The number of features per record is computed by summing the ICD codes in each sample. The number of bins is 90 and the range of feature number values is (0, 90). B.2 Additional experiments on guided generation We provide additional experimental results on guided generation. We augment the real data with synthetic data generated by our sampling method and train a downstream classifier. We measure the performance of all classifiers on real test data. From Fig. 9 and Fig. 5, we see that the classifier trained on training data augmented with synthetic data, whether from our unconditional sampling method or from our guided sampling method, consistently outperforms the classifier trained with the original source data (the vanilla baseline). In all cases, the relative increase in AUPRC over the vanilla baseline is more than 3%; in the classification of COPD, the relative improvement over the vanilla baseline is more than 30%.
This clearly indicates that our method can be applied to augment the training data of downstream classification tasks when real data is scarce. More importantly, data augmentation with the guided sampler consistently outperforms data augmentation with the unconditional sampler. We attribute this to the synthetic samples generated by guided sampling containing richer information for diseases of low prevalence: training data that is more balanced with respect to the positive label enhances classifier performance and reduces the risk of overfitting to the negative class. [Figure 9 panels: Diabetes, COPD, Asthma, Osteoarthritis; AUPRC vs. augmented data size for Real Source, Uncond Sample, Real Sample, Guided Sample.] Figure 9: Synthetic data augmentation for disease classification from ICD codes based on dataset D2. The size of the real source data for training the LGBM classifier is 5000, as indicated by the dashed line. We augment the original source training data with synthetic data to train the LGBM classifier. "Uncond Samples" stands for the synthetic data generated by our unconditional sampler. Guided samples are synthetic data generated by our proposed guided sampler for each disease. To minimize noise from evaluation, we adopt 200K real test data to evaluate all experiments and report test AUPRC for comparison. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals (CI), which are added as shaded regions.

Table 6: Synthetic data utility. Disease prediction from ICD codes on real data D1. AUPRC is reported. We use synthetic data of size 160K to train the classifier and 200K real test data to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals.

             T2D            Asthma         COPD           CKD            HTN-Heart      Osteoarthritis
Real Data    0.702 ± 0.002  0.288 ± 0.004  0.675 ± 0.002  0.806 ± 0.002  0.253 ± 0.003  0.296 ± 0.003
Med-WGAN     0.628 ± 0.002  0.149 ± 0.002  0.578 ± 0.002  0.722 ± 0.002  0.114 ± 0.001  0.192 ± 0.003
EMR-WGAN     0.656 ± 0.002  0.193 ± 0.002  0.603 ± 0.002  0.753 ± 0.002  0.151 ± 0.003  0.219 ± 0.003
EHRDiff      0.670 ± 0.002  0.232 ± 0.003  0.642 ± 0.002  0.782 ± 0.002  0.150 ± 0.002  0.245 ± 0.003
EHR-D3PM     0.693 ± 0.002  0.263 ± 0.003  0.655 ± 0.002  0.796 ± 0.002  0.229 ± 0.003  0.278 ± 0.003

Table 7: Synthetic data utility. Disease prediction from ICD codes on real dataset D1. AUROC is reported. We use synthetic data of size 160K to train the classifier and 200K real test data to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals.
             T2D            Asthma         COPD           CKD            HTN-Heart      Osteoarthritis
Real data    0.808 ± 0.001  0.759 ± 0.002  0.867 ± 0.001  0.913 ± 0.001  0.832 ± 0.001  0.789 ± 0.001
Med-WGAN     0.757 ± 0.001  0.595 ± 0.002  0.806 ± 0.001  0.873 ± 0.001  0.625 ± 0.002  0.661 ± 0.002
EMR-WGAN     0.770 ± 0.001  0.642 ± 0.002  0.815 ± 0.001  0.885 ± 0.001  0.686 ± 0.002  0.689 ± 0.002
EHRDiff      0.789 ± 0.001  0.722 ± 0.002  0.856 ± 0.001  0.902 ± 0.001  0.714 ± 0.002  0.759 ± 0.002
EHR-D3PM     0.801 ± 0.001  0.748 ± 0.002  0.860 ± 0.001  0.908 ± 0.001  0.821 ± 0.002  0.782 ± 0.002

Table 8: Synthetic data utility. Disease prediction from ICD codes on real data D2. AUPRC is reported. We use synthetic data of size 160K to train the classifier and 200K real test data to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals.

             T2D            Asthma         COPD           CKD            HTN-Heart      Osteoarthritis
Real data    0.834 ± 0.002  0.581 ± 0.005  0.622 ± 0.009  0.733 ± 0.006  0.278 ± 0.011  0.373 ± 0.005
Med-WGAN     0.725 ± 0.003  0.496 ± 0.005  0.203 ± 0.007  0.166 ± 0.005  0.008 ± 0.001  0.223 ± 0.004
EMR-WGAN     0.734 ± 0.003  0.431 ± 0.004  0.402 ± 0.007  0.628 ± 0.009  0.134 ± 0.008  0.210 ± 0.004
EHRDiff      0.807 ± 0.003  0.549 ± 0.004  0.548 ± 0.007  0.690 ± 0.008  0.141 ± 0.009  0.319 ± 0.005
EHR-D3PM     0.821 ± 0.002  0.572 ± 0.004  0.607 ± 0.007  0.714 ± 0.007  0.226 ± 0.008  0.348 ± 0.006

Table 9: Synthetic data utility. Disease prediction from ICD codes on real data D2. AUROC is reported. We use synthetic data of size 160K to train the classifier and 200K real test data to evaluate the different methods. 80% of the test data are bootstrapped 50 times to compute 95% confidence intervals.

             T2D            Asthma         COPD           CKD            HTN-Heart      Osteoarthritis
Real data    0.955 ± 0.001  0.853 ± 0.002  0.951 ± 0.002  0.944 ± 0.002  0.926 ± 0.003  0.893 ± 0.002
Med-WGAN     0.924 ± 0.001  0.819 ± 0.002  0.853 ± 0.003  0.835 ± 0.004  0.500 ± 0.001  0.820 ± 0.003
EMR-WGAN     0.918 ± 0.001  0.747 ± 0.002  0.888 ± 0.003  0.907 ± 0.003  0.844 ± 0.005  0.753 ± 0.003
EHRDiff      0.950 ± 0.001  0.843 ± 0.002  0.936 ± 0.002  0.916 ± 0.004  0.822 ± 0.006  0.875 ± 0.002
EHR-D3PM     0.952 ± 0.001  0.853 ± 0.002  0.947 ± 0.002  0.944 ± 0.002  0.911 ± 0.004  0.889 ± 0.002

Table 10: Fidelity metrics (CMD and MMD) on MIMIC, dataset D1 and dataset D2. 95% confidence intervals are provided.
             CMD (↓)                                        MMD (↓)
             MIMIC           D1              D2             MIMIC           D1             D2
Med-WGAN     27.540 ± 0.628  18.107 ± 0.128  28.942 ± 0.196  0.078 ± 0.0089  0.075 ± 0.011  0.086 ± 0.013
EMR-WGAN     26.658 ± 0.638  11.869 ± 0.108  21.438 ± 0.146  0.053 ± 0.0054  0.018 ± 0.003  0.024 ± 0.004
EHRDiff      25.447 ± 0.488  23.208 ± 0.088  18.941 ± 0.092  0.009 ± 0.0013  0.023 ± 0.003  0.046 ± 0.005
EHR-D3PM     21.128 ± 0.393  7.692 ± 0.028   10.255 ± 0.037  0.003 ± 0.0004  0.012 ± 0.001  0.019 ± 0.002

Table 11: Additional fidelity metric (MCAD) on MIMIC, dataset D1 and dataset D2.

             MCAD (↓)
             MIMIC            D1               D2
Med-WGAN     0.1896 ± 0.0024  0.1871 ± 0.0016  0.1944 ± 0.0017
EMR-WGAN     0.1546 ± 0.0167  0.1625 ± 0.0013  0.1572 ± 0.0015
EHRDiff      0.1439 ± 0.0015  0.1687 ± 0.0014  0.1764 ± 0.0016
EHR-D3PM     0.1013 ± 0.0010  0.0873 ± 0.0007  0.1081 ± 0.0009

Table 12: Privacy metric (MIR) on MIMIC, dataset D1 and dataset D2.

             Privacy MIR (↓)
             MIMIC           D1              D2
Med-WGAN     0.440 ± 0.0034  0.339 ± 0.0018  0.398 ± 0.0019
EMR-WGAN     0.456 ± 0.0035  0.358 ± 0.0017  0.415 ± 0.0020
EHRDiff      0.445 ± 0.0034  0.353 ± 0.0015  0.421 ± 0.0019
EHR-D3PM     0.432 ± 0.0034  0.344 ± 0.0016  0.406 ± 0.0018

C Additional Details of EHR-D3PM If x is a continuous variable, the most common way to sample from the posterior $p_\theta(x|c) \propto p_\theta(x) \cdot p(c|x)$ is the following Langevin dynamics,

$x^{(k+1)} \leftarrow x^{(k)} + \eta\tau_1 \nabla \log p(c|x) + \eta \nabla \log p_\theta(x) + \sqrt{\eta}\,\tau_2\,\epsilon$, (7)

where $\epsilon \sim \mathcal{N}(0, I)$. Update (7) has been applied to image generation (Dhariwal and Nichol, 2021) and, recently, to EHR generation with Gaussian diffusion (He et al., 2023). In practice, $\tau_2$ is typically set to zero, and we generally use $V(x) := \log p(c|x)$ in place of the likelihood we want to maximize in (7). For discrete data, (7) is intractable since we cannot backpropagate gradients through $\nabla \log p_\theta(x)$. In addition, (7) cannot guarantee that $x^{(k+1)}$ stays in the categorical domain $\{1, \ldots, K\}$ after the update. Therefore, we perform Langevin updates in the latent space $z_{L,t}$:

$y^{(k+1)} \leftarrow y^{(k)} - \eta\, \nabla_{y^{(k)}}\big[D_{\mathrm{KL}}(y^{(k)}) - V_\theta(y^{(k)})\big] + \sqrt{2\eta\tau}\,\epsilon$,

where $y^{(k)}$ is the modification of $z_{L,t}$ and $D_{\mathrm{KL}}(y^{(k)}) = \lambda\,\mathrm{KL}\big(p_\theta(\hat{x}_0|y^{(k)})\,\|\,p_\theta(\hat{x}_0|y^{(0)})\big)$ is the KL divergence regularizing the guided Markov transition. The gradient of the KL term plays a similar role to $\nabla p_\theta(x)$ in (7). It would be interesting to leverage deterministic updates (Liu and Wang, 2016; Han and Liu, 2018) to accelerate this process. Example: Suppose the context c is to generate an EHR x that contains the diabetes code, i.e., the k-th token of x equals [1, 0], where k = 156 for the MIMIC dataset. Then $p(c|\hat{x}_0) = 1$ if the k-th token of $\hat{x}_0$ equals [1, 0] and $p(c|\hat{x}_0) = 0$ otherwise. In addition, $p_\theta(\hat{x}_0|y^{(k)})$ is the output of the softmax layer at the k-th position given input $y^{(k)}$. We can then compute the energy function as

$V_\theta(y^{(k)}) = \log p(c|y^{(k)}) = \log\Big(\sum_{\hat{x}_0} p_\theta(\hat{x}_0|y^{(k)})\, p(c|\hat{x}_0)\Big)$.

In all guided-generation experiments, the number of Langevin update steps is 10, $\eta = 0.1$ and $\lambda = 0.01$." + }, + { + "url": "http://arxiv.org/abs/2404.14771v1", + "title": "Music Style Transfer With Diffusion Model", + "abstract": "Previous studies on music style transfer have mainly focused on one-to-one\nstyle conversion, which is relatively limited.
When considering the conversion\nbetween multiple styles, previous methods required designing multiple modes to\ndisentangle the complex style of the music, resulting in large computational\ncosts and slow audio generation. The existing music style transfer methods\ngenerate spectrograms with artifacts, leading to significant noise in the\ngenerated audio. To address these issues, this study proposes a music style\ntransfer framework based on diffusion models (DM) and uses spectrogram-based\nmethods to achieve multi-to-multi music style transfer. The GuideDiff method is\nused to restore spectrograms to high-fidelity audio, accelerating audio\ngeneration speed and reducing noise in the generated audio. Experimental\nresults show that our model has good performance in multi-mode music style\ntransfer compared to the baseline and can generate high-quality audio in\nreal-time on consumer-grade GPUs.", + "authors": "Hong Huang, Yuyi Wang, Luyao Li, Jun Lin", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The study of musical styles is important for the development of music. Incorporating different styles into compositions can lead to new and innovative music. Transferring musical styles can create works that pay homage to traditional styles while incorporating contemporary elements. By studying how different styles can be combined and transformed, musicians can create new forms of artistic expression. When discussing the transfer of musical style, it is typically believed that music can be broken down into two elements: content and style. The goal of music style transfer is to maintain the content of the music while modifying the style. With the rapid development of deep generative models, various models such as autoregressive models, generative adversarial networks, variational autoencoders, and flow-based models have actively promoted the development of speech synthesis and music generation. Furthermore, many academics have used these models to research musical style transfer. MIDI-VAE, a neural network model based on variational autoencoders, was used by Brunner et al. [1] to convert the style of polyphonic music with several instrumental tracks. The same year, Brunner et al. [2] offered a different approach that involved converting MIDI-format audio into a piano roll matrix, training CycleGAN with the matrix, and then producing converted MIDI audio. However, this method can only transfer style along the performance dimension. Huang et al. [3] proposed Timbretron by extracting CQT features of the audio, then converting them into the target timbre through CycleGAN, and finally synthesizing the CQT features into raw audio waveforms using a pre-trained WaveNet. Their method can capture higher resolution at lower frequencies and maintain pitch equivariance, but the generated audio quality is still inadequate. Donahue et al. [4] enhanced multi-instrument music generation through cross-domain training based on the Transformer, but the quality of the synthesized audio is still inadequate. Hung et al.
[5] proposed a deep learning model for rearranging any piece of music, producing a \u201cstylistic shift\u201d without much impact on the tonal substance. Bonnici et al. [6] used a variational autoencoder combined with a generative adversarial network to construct a meaningful representation of the source audio and generate a realistic rendition of the target audio. Noam et al. [7] proposed a universal music translation network that achieves timbre conversion by training a WaveNet encoder and multiple WaveNet decoders. This method can convert from one timbre domain to multiple timbre domains, but it requires training a separate decoder for each style, which is computationally expensive, and audio synthesis is slow. Denoising Diffusion Probabilistic Models (DDPMs) [8] and Score Matching (SM) [9] are recently proposed methods that have achieved good results in the fields of speech synthesis and music generation. The aforementioned studies have achieved promising results in their respective research directions, but they mainly focus on transferring a single attribute of music (timbre, performance style, or composition style), and previous methods suffer from artifacts in the generated spectrograms. For many-to-many style transfer, previous methods have suffered from complex designs, high computational overhead, and slow audio generation. To overcome these limitations, this study uses the DM, another type of generative model, whose synthesis process extracts the desired samples from noise through iterative steps. As the number of iterations increases, the quality of the synthesis improves. However, directly extending DMs to audio generation requires a large amount of computational resources [10] and does not solve the problem of slow generation. To address these issues, this study proposes a general and efficient music style transfer framework based on the latent diffusion model (LDM) [11]. Specifically, the framework consists of two parts: style transfer and audio generation. In the style transfer part, a conditioning mechanism is introduced to learn different types of input styles and transfer their information to the latent space to guide the generation of target spectrograms. This approach avoids the need for designing complex, disentangled transfer frameworks and enables many-to-many style transfer. Moreover, the transfer process takes place in latent space, greatly reducing computational costs and improving generation speed. For the audio generation part, this study proposes GuideDiff, a waveform audio generator based on DMs. It compresses and encodes spectrograms into the latent space to control and guide waveform generation, achieving fast inference and high-quality audio generation compared to baseline vocoders. This has practical significance for the real-time generation of high-quality audio. In summary, the main contributions are as follows: (1) The paper introduces a music style transfer model that is based on the DM and allows many-to-many music style transfer. The model can perform style transfer on audio in real time, making it highly efficient and practical. (2) This study proposes a novel audio generation method called GuideDiff, which is based on the diffusion model. GuideDiff is designed to generate high-quality audio waveforms from restored spectrograms.
(3) Experimental results show that the proposed model performs well in both style transfer and audio quality compared to the baseline models. Moreover, it can achieve real-time conversion and generate target audio on consumer-grade GPUs. In the remainder of this paper, we organize the content as follows: Section 2 presents related work; Section 3 describes the architecture of the proposed method; Section 4 evaluates the effectiveness of the proposed method through experiments; and Section 5 provides the conclusion of this paper.", + "main_content": "2. RELATED WORK 2.1 Music Style Transfer Numerous studies on musical style transfer have taken cues from models for transferring image styles. Musical style transfer can be categorized into three types: timbral style transfer, performance style transfer, and compositional style transfer. Among these, timbral style transfer has received the most attention in recent years; this type of transfer alters the timbre of a musical composition in the audio domain. By contrast, relatively little research has been done on the latter two types, performance and compositional style transfer. Further study of these could lead to new and innovative ways of creating and transforming music. Researchers typically follow two different design patterns to achieve music style transfer: one operates on symbolic music notation, and the other on audio signals. For audio signals, researchers typically use time-frequency methods, which are more indirect and help reduce data complexity: the abstract audio is converted into spectrograms, and deep learning models perform high-quality transfer. This approach involves two deep learning models, the first performing style transfer on the spectrogram of the audio and the second restoring the generated spectrogram to real audio. Currently, researchers mainly use generative models such as CycleGAN [2], VAE [1], UNIT [12], and MusicVAE [13] for music style transfer. However, while these models have shown promising results, they also have limitations that hinder their practical application, and further research is needed to improve the effectiveness and efficiency of musical style transfer. The focus of this study is to explore a new generic music style transfer model that employs a time-frequency approach and enables all three types of music style transfer: timbral, performance, and compositional. 2.2 Diffusion Models The DM is a class of likelihood-based generative models, with its pioneering work being the DDPM. Its core theoretical underpinnings are the Markov chain and Langevin dynamics. Owing to its stable training and easy scaling, it has surpassed GANs [14] in image generation tasks and achieved higher sample quality. However, the sampling process is slow, as it must follow a Markov chain to generate a sample step by step. DDIM [15] accelerates the sampling process through an iterative non-Markovian formulation while keeping the training process unchanged. ADM [14] ultimately outperforms GAN-based methods through a well-designed architecture and classifier guidance. A latent diffusion model [11] has also been proposed recently for image synthesis. This model compresses the image from pixel space to latent space for diffusion, resulting in significantly reduced computational complexity while achieving high-quality image generation.
However, the application of this model to music generation has not been extensively studied. In this study, we propose a generic music style transfer framework based on the latent diffusion model, using spectrograms as an intermediate representation of music. In this respect, our work has something in common with Riffusion [28], as both use Fourier transforms to turn audio waveforms into spectrograms, which are then diffused by a diffusion model. 2.3 Neural Vocoder Deep generative models have achieved significant success in modeling audio generation, with common methods including autoregressive models, flow-based models, and diffusion models. WaveNet [16] is an autoregressive model that generates high-fidelity audio, but its synthesis is slow, and the synthesized audio contains audible noise. WaveRNN [17] is another autoregressive model that reduces computational complexity by using sparse recurrent neural networks. Flow-based models, such as WaveFlow [18], WaveGlow [19], and FloWaveNet [20], improve the quality of audio synthesis by maximizing the likelihood of the training data. Recently, DM-based audio generation models have been proposed, such as DiffWave [21] and WaveGrad [22], which are able to generate higher-quality audio and synthesize it faster than common models. In this work, we propose a new neural vocoder called GuideDiff based on the DM. This vocoder is mainly used in the style transfer model to restore high-quality audio from generated spectrograms. Moreover, its synthesis speed is several orders of magnitude faster than baseline models like WaveNet. 3. METHOD Figure 1. Piano-to-violin style transfer. Music style transfer is accomplished in three steps in this work, as illustrated in Figure 1. First, a spectrogram, which represents time and frequency, is obtained from the input audio waveform using the Short-Time Fourier Transform (STFT); the phase information is discarded, and only the amplitude is processed as an image. Second, the transfer of musical styles is performed by completing the domain conversion on the spectrogram using a latent diffusion model. Last, GuideDiff is used to convert the transformed spectrograms into audio waveforms. The section following this introduction focuses on the second part, the conversion of the input spectrogram to a target spectrogram using a latent diffusion model. The subsequent section covers the third part, the conversion of the target spectrogram into a high-quality audio waveform using the proposed neural vocoder, GuideDiff. 3.1 Time-Frequency Analysis The audio signal is often more challenging to work with than image signals. As a result, an audio spectrogram, which provides a visual representation of the frequency content of sound, is commonly used. In Figure 2, the x-axis represents time, while the y-axis represents frequency; the color of each pixel corresponds to the volume of the audio at its corresponding time and frequency. To perform style transfer, we need to analyze the input audio in both the time and frequency domains to obtain a spectrogram. One of the most commonly used techniques is the Short-Time Fourier Transform (STFT), which is discretized for computation. The discrete STFT operation can be written as $\mathrm{STFT}_x(m, \omega_k) = \sum_{n=-\infty}^{\infty} x[n]\, w[n-m]\, e^{-j\omega_k n}$ (1), where $x[n]$ is the input time-domain signal, $m$ is the step size (frame shift), $\omega_k$ is the frequency, and $w$ is a window function. Figure 2. Spectrogram. The audio is divided into segments of 5 seconds for time-frequency analysis to make processing easier, and each segment is converted into a spectrogram by performing the STFT independently. In this case, a Hanning window with a step size of 100 is used, and the phase information is discarded during processing because it is ambiguous and unpredictable.
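To make this preprocessing step concrete, the following is a minimal sketch in Python, assuming the librosa library. The paper specifies 16 kHz audio, 5-second segments, a Hann(ing) window, a 100-sample step size, and magnitude-only spectrograms; the FFT size (n_fft = 1024) is our assumption and is not stated in the paper.

```python
import numpy as np
import librosa

def audio_to_magnitude_spectrograms(path, sr=16000, segment_sec=5,
                                    n_fft=1024, hop_length=100):
    """Split audio into fixed-length segments and STFT each one (Eq. 1)."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    seg_len = sr * segment_sec
    specs = []
    for start in range(0, len(y) - seg_len + 1, seg_len):
        segment = y[start:start + seg_len]
        stft = librosa.stft(segment, n_fft=n_fft,
                            hop_length=hop_length, window="hann")
        specs.append(np.abs(stft))  # discard phase, keep amplitude only
    return specs
```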
3.2 Transfer Model Figure 3. Models of transfer. Figure 3 illustrates the three main components of the style transfer model: an autoencoder (AE) that compresses and restores the input and output spectrogram information in pixel space; a latent-space diffusion model, used mainly for style transfer, which incorporates a cross-attention mechanism that completes the domain transformation by feeding data from the conditioning mechanism into the denoising U-Net; and the conditioning mechanism itself, which conveys the information learned from the various musical spectrograms into the latent space. 3.2.1 Perceptual Compression Drawing on the work of Robin Rombach et al. [11], we introduce perceptual compression to lower the computational cost of training a DM to produce high-quality spectrograms. Sampling is carried out in a low-dimensional space, which increases the DM\u2019s computational efficiency. A pre-trained autoencoder is employed for perceptual compression. This autoencoder is trained using a patch-based adversarial objective in conjunction with a perceptual loss. The blur created by relying solely on a pixel-space loss is thereby avoided, which enhances the realism of the reconstruction, and the autoencoder offers a low-dimensional representation space that is perceptually equivalent to the data space. The autoencoder consists of an encoder $\varepsilon$ and a generator $D$, both composed of three layers of three-dimensional convolution. Formally, given a sample spectrogram $x \in \mathbb{R}^{H \times W \times 3}$, the encoder $\varepsilon$ encodes it into a latent representation $z = \varepsilon(x)$, where $z \in \mathbb{R}^{h \times w \times 3}$. The encoder downsamples the spectrogram by a factor $f = H/h = W/w$, and the generator $D$ reconstructs the latent representation back into a sample $\tilde{x}$, i.e., $\tilde{x} = D(z)$. To avoid a high-variance latent representation space, we adopt KL regularization, introducing a slight KL penalty that pushes the latent distribution toward a standard normal, so the effect is very close to that of a variational autoencoder (VAE). The reconstruction loss $L_{rec}$ consists of a pixel-level mean squared error (MSE) and a perceptual-level loss. In summary, the overall training objective for the encoder $\varepsilon$ and generator $D$ is $L_{AE} = \min_{\varepsilon, D}\big(L_{rec}(x, D(\varepsilon(x))) + \mathrm{KL}_{reg}(x \,\|\, \varepsilon(x))\big)$ (2).
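The following is a minimal PyTorch sketch of a KL-regularized autoencoder in the spirit of Eq. (2). The channel widths, the use of 2D convolutions, and the replacement of the adversarial and perceptual terms with plain MSE are our simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramAE(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Encoder eps: downsamples the H x W x 3 spectrogram by f = 4.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 6, 3, padding=1),  # 3 mean + 3 log-variance channels
        )
        # Generator D: mirrors the encoder back to pixel space.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(3, ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        mean, logvar = self.encoder(x).chunk(2, dim=1)
        z = mean + torch.randn_like(mean) * (0.5 * logvar).exp()
        return self.decoder(z), mean, logvar

def ae_loss(model, x, kl_weight=1e-6):
    """L_AE = L_rec + small KL penalty toward a standard normal (Eq. 2)."""
    x_rec, mean, logvar = model(x)
    l_rec = F.mse_loss(x_rec, x)  # perceptual/adversarial terms omitted here
    kl = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())
    return l_rec + kl_weight * kl
```

The very small KL weight mirrors the "slight KL penalty" described above: it keeps the latent space well-behaved without forcing it to be a fully factorized Gaussian.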
3.2.2 Latent Diffusion Models With the perceptual compression model, we obtain an effective, low-dimensional latent space in which high frequencies and some hard-to-perceive details are abstracted away. This is effective for extracting musical features such as pitch, loudness, and timbre. Following the DM, we diffuse and denoise the spectrogram in this latent space. Given a compressed latent code $z_0 \sim q(z_0)$, the DM consists of a forward diffusion process and a backward denoising process. In the forward diffusion process, we train the diffusion model by iteratively adding $T$ steps of Gaussian noise according to a fixed noise schedule, starting from the data $z_0$, to produce a set of noisy latent variables $z_1, \ldots, z_T$: $q(z_t \mid z_{t-1}) = \mathcal{N}(z_t; \sqrt{1-\beta_t}\, z_{t-1}, \beta_t I)$ (3) and $q(z_{1:T} \mid z_0) = \prod_{t=1}^{T} q(z_t \mid z_{t-1})$ (4), where $\beta_1, \beta_2, \ldots, \beta_T$ is the noise schedule that converts the data distribution of $z_0$ into the latent $z_T$; ultimately, $z_T$ is indistinguishable from pure Gaussian noise. In the reverse denoising process, the diffusion model recovers $z_0$ from $z_T$ via $p_\theta(z_{t-1} \mid z_t) = \mathcal{N}(z_{t-1}; \mu_\theta(z_t, t), \sigma_\theta(z_t, t))$ (5) and $p_\theta(z_{0:T}) = p(z_T) \prod_{t=1}^{T} p_\theta(z_{t-1} \mid z_t)$ (6), where $\theta$ parameterizes a neural network defined over this Markov chain. The U-Net, commonly used in image synthesis, is used here to predict $\mu_\theta(z_t, t)$ and $\sigma_\theta(z_t, t)$. In practice, $\sigma_\theta$ is set to an untrained, time-dependent constant determined by the noise schedule, $\sigma_\theta(z_t, t)^2 = \sigma_t^2 = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t$ (7), where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$. We parameterize $\mu_\theta(z_t, t)$ as $\mu_\theta(z_t, t) = \frac{1}{\sqrt{\alpha_t}}\big(z_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(z_t, t)\big)$ (8), so that only the noise $\epsilon_\theta(z_t, t)$ needs to be estimated. In practice, we use the simplified training objective $L_{\mathrm{simple}}(\theta) = \mathbb{E}_{\varepsilon(x),\, \epsilon \sim \mathcal{N}(0,1),\, t}\big[\|\epsilon_\theta(z_t, t) - \epsilon\|_2^2\big]$ (9), where $\epsilon \sim \mathcal{N}(0, 1)$. Since the forward process of the diffusion model is fixed, $z_t$ can be obtained efficiently during training, and samples of $p(z)$ generated by the reverse process can be decoded from the perceptual latent space into image space with a single pass through the generator $D$. Style transfer module. To model the generation of spectrograms in the latent space and accomplish style transfer, we build the underlying U-Net from 2-dimensional convolutional layers, specifically 2 \u00d7 2 convolutions. A cross-attention mechanism is added to augment the U-Net backbone, enabling it to generate spectrograms in the target domain conditioned on the style to be transferred, and ensuring that style information can be shared across the latent space, which is essential for learning the style of the audio and completing the style transfer. 3.2.3 Conditioning Mechanisms In this module, we employ a domain-specific encoder $\tau_\theta$ to preprocess the input conditional style spectrogram $y$ and project it onto an intermediate representation $\tau_\theta(y) \in \mathbb{R}^{M \times d_\tau}$, which is then mapped to the intermediate layers of the U-Net via a cross-attention layer so that the spectrograms are generated according to the condition $y$. The cross-attention mechanism is given by $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\big(\frac{QK^T}{\sqrt{d}}\big)\, V$ (10), where $Q = W_Q^{(i)} \varphi_i(z_t)$, $K = W_K^{(i)} \tau_\theta(y)$, and $V = W_V^{(i)} \tau_\theta(y)$. Here $\varphi_i(z_t) \in \mathbb{R}^{N \times d_\epsilon^i}$ denotes the intermediate U-Net representation implementing $\epsilon_\theta$, while $W_Q^{(i)} \in \mathbb{R}^{d \times d_\epsilon^i}$ and $W_K^{(i)}, W_V^{(i)} \in \mathbb{R}^{d \times d_\tau}$ are projection matrices, used mainly to learn and map styles from the target-domain representation $\tau_\theta(y)$, enabling style transfer.
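As a concrete illustration of Eqs. (3)-(10), the sketch below implements one conditional training step in PyTorch: the closed-form forward corruption of the latent z0, a toy noise-prediction network standing in for the paper's U-Net, and cross-attention whose queries come from the network features and whose keys and values come from the condition tau_theta(y). All shapes, widths, and the omitted timestep embedding are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # fixed noise schedule beta_t
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # \bar{alpha}_t = prod_i alpha_i

class TinyCondEpsNet(nn.Module):
    def __init__(self, ch=64, d_tau=128):
        super().__init__()
        self.inp = nn.Conv2d(3, ch, 3, padding=1)
        self.attn = nn.MultiheadAttention(ch, num_heads=4, kdim=d_tau,
                                          vdim=d_tau, batch_first=True)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, z_t, t, cond):      # cond = tau_theta(y), shape (B, M, d_tau)
        # Timestep embedding is omitted in this sketch for brevity.
        h = self.inp(z_t)                 # (B, ch, h, w)
        b, c, hh, ww = h.shape
        q = h.flatten(2).transpose(1, 2)  # queries phi_i(z_t) from U-Net features
        a, _ = self.attn(q, cond, cond)   # keys/values from the condition (Eq. 10)
        h = h + a.transpose(1, 2).view(b, c, hh, ww)
        return self.out(h)                # predicted noise eps_theta

def training_step(net, z0, cond):
    t = torch.randint(0, T, (z0.size(0),))
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps  # closed-form forward process
    return F.mse_loss(net(z_t, t, cond), eps)     # eps-prediction loss, Eq. (9),
                                                  # extended to the conditional
                                                  # case formalized in Eq. (11) below
```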
With this conditioning, the objective of Eq. (9) is rewritten as $L_{CM}(\theta) = \mathbb{E}_{\varepsilon(x),\, y,\, \epsilon \sim \mathcal{N}(0,1),\, t}\big[\|\epsilon - \epsilon_\theta(z_t, t, \tau_\theta(y))\|_2^2\big]$ (11), where $\tau_\theta$ and $\epsilon_\theta$ are jointly optimized through this objective. 3.3 Waveform Reconstruction We propose a novel vocoder, called GuideDiff, to convert the spectrogram output of the model into audio; it restores the spectrogram to generate high-quality audio. Figure 4. GuideDiff architecture. As shown in Figure 4, an encoder $\varepsilon = E_{\theta_{\mathrm{enc}}}(m_w)$ built from 3 \u00d7 3 convolutions is first used to encode the spectrogram into the latent space, and the information $x$ from the latent space is then sent as conditioning information into the U-Net\u2019s cross-attention mechanism to direct the creation of waveforms. The original waveform is then recreated by the diffusion decoder $D = D_{\theta_{\mathrm{dec}}}(z, \alpha, s)$, which decodes the latent signal; here $D_{\theta_{\mathrm{dec}}}$ denotes the diffusion sampling method, $\alpha$ denotes the noise, and $s$ denotes the sampling step length. The decoder $D$ is trained with the diffusion objective while conditioning the latent 2D U-Net, which is repeatedly invoked during the decoding procedure. Figure 5. Model\u2019s primary network. Figure 5 displays the model\u2019s primary network, where $y_n$ denotes the $n$-th round of noisy audio input and $\epsilon_\theta$ denotes the predicted noise. FiLM is the feature-wise linear modulation module, consisting of two 3 \u00d7 1 convolutional layers and the Leaky ReLU function. Here we condition on the noise level $\sqrt{\bar{\alpha}}$ and pass it to the positional encoding function. By analogy with the DM objective, the objective function can be written as $L_{\mathrm{GuideDiff}}(\theta) = \mathbb{E}_{\bar{\alpha},\, \epsilon}\big[\|\epsilon - \epsilon_\theta(\sqrt{\bar{\alpha}_n}\, y_0 + \sqrt{1-\bar{\alpha}_n}\, \epsilon,\, x,\, \sqrt{\bar{\alpha}})\|_1\big]$ (12), where $\alpha_n = 1 - \beta_n$ and $\bar{\alpha}_n = \prod_{s=1}^{n} \alpha_s$; here $\beta_n$ is a noise schedule increasing from 0 to 1. For the input spectrograms, we discard the phase and use only the amplitude. Encoding the spectrograms in latent space effectively reduces the computational load of the representation and enables the diffusion model to learn to generate waveforms with true phase. The obtained latent space is used as the starting point for the next diffusion phase. The advantage of this is that our model only needs to be trained once: the latent trajectory space allows a large number of inference procedures to be performed without retraining. Specifically, once the model is trained, varying the number of iterations $N$ at inference time is sufficient to control the quality of the output, which is useful for rapidly bootstrapping the generation of high-quality raw audio. To ensure that the reduced latent space is usable for latent diffusion, we apply the tanh function to the bottleneck, keeping its values within the range [-1, 1]. In summary, our overall objective function is $L = L_{AE} + L_{\mathrm{simple}}(\theta) + L_{CM}(\theta) + L_{\mathrm{GuideDiff}}(\theta)$ (13).
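A minimal sketch of the GuideDiff objective in Eq. (12), assuming PyTorch. Here `vocoder` is a placeholder for the latent-conditioned 2D U-Net with FiLM layers, whose architecture is not reproduced; the discrete sampling of the noise level sqrt(alpha_bar) is also our simplification.

```python
import torch
import torch.nn.functional as F

N = 50                                        # number of diffusion iterations
betas = torch.linspace(1e-4, 0.05, N)         # noise schedule beta_n (assumed values)
alpha_bars = torch.cumprod(1.0 - betas, 0)    # \bar{alpha}_n = prod_s alpha_s

def guidediff_loss(vocoder, y0, spec_latent):
    """y0: clean waveform batch; spec_latent: tanh-bounded spectrogram latent x."""
    n = torch.randint(0, N, (y0.size(0),))
    ab = alpha_bars[n].view(-1, 1)
    eps = torch.randn_like(y0)
    noisy = ab.sqrt() * y0 + (1 - ab).sqrt() * eps    # corrupt the waveform
    eps_hat = vocoder(noisy, spec_latent, ab.sqrt())  # condition on latent + noise level
    return F.l1_loss(eps_hat, eps)                    # Eq. (12) uses an L1 norm
```

Note the L1 (rather than L2) norm, matching Eq. (12); at inference, only the iteration count N needs to change to trade speed against quality, as described above.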
4. EXPERIMENTS In Section 4.1, we provide details on the experimental setup, including the data description and pre-processing as well as the evaluation metrics. Section 4.2 then presents the experimental analysis. 4.1 Experimental Setup 4.1.1 Data Description and Preprocessing The model consumes a large amount of memory when generating an entire song at once. To mitigate this issue, we employed the Demucs model to separate the music into its constituent sources, such as vocals, bass, and drums. Furthermore, each song was divided into smaller segments, which were modeled individually and then reassembled. However, rearranging the segments was challenging, as they differed in downbeat, key, and pace. To address this, we smoothly interpolated cues and seeds in the model\u2019s latent space. In a diffusion model, the latent space encompasses everything the model can generate; similar items lie close to each other in this space, and each latent value is decoded into a feasible output. This makes the audio sound natural. Training our model for music style transfer requires varied music data. For all the experiments in this study, we used music datasets from multiple source domains collected from the web. This dataset includes over 100,000 WAV audio files covering various instruments, genres, and compositional styles: the main instruments include piano, violin, and guitar, while the genres mainly consist of jazz, classical, and pop. The data was split into training (80%), testing (10%), and validation (10%) sets. 4.1.2 Evaluation Metrics The following measures were used to evaluate and analyze the model\u2019s performance: Fr\u00e9chet Audio Distance (FAD) [23]. The FAD computes the Fr\u00e9chet distance between the distribution of generated audio samples and that of real audio samples; the smaller the distance between the two distributions, the more realistic the generated samples, which gives a reliable assessment of the difference between them. Accuracy. In this research, five independent style-assessment classifiers were trained to test the efficacy of the model\u2019s style transfer. The percentage of styles correctly predicted in each song bar served as the measure of classifier accuracy. Mean Opinion Score (MOS). A 5-scale mean opinion score is used to evaluate the proposed model, where a higher value is preferable. Subjects were asked to rate each transferred version on three questions, each on a scale of 1 to 5: 1. Style transfer success (ST): whether the generated audio has moved to the target domain compared to the original audio. 2. Content preservation (CP): the extent to which the generated audio matches the original audio content. 3. Sound quality (SQ): whether the generated audio has high or poor sound quality. Mean scores are used when comparing against other baseline models; for GuideDiff alone, only the sound quality is evaluated. Inception Score (IS) [24]. To evaluate the diversity and quality of the generated samples, the IS employs a ResNeXt classifier [25] trained on our dataset, with a 10-dimensional logit computed from a 1024-dimensional feature vector. The IS is calculated as $\mathrm{IS} = \exp\big(\mathbb{E}_{x \sim p_{\mathrm{gen}}} \mathrm{KL}\big(P_F(x) \,\|\, \mathbb{E}_{x' \sim p_{\mathrm{gen}}} P_F(x')\big)\big)$ (14), where $P_F(x)$ is a multinomial class distribution and $\mathbb{E}_{x' \sim p_{\mathrm{gen}}} P_F(x')$ is the marginal label distribution.
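For reference, Eq. (14) can be computed directly from classifier outputs. The snippet below assumes the softmax probabilities P_F(x) from the classifier have already been collected into an array; the random data in the usage example is purely illustrative.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (num_samples, num_classes) softmax outputs P_F(x)."""
    p_marginal = probs.mean(axis=0)                   # E_{x'} P_F(x')
    kl = np.sum(probs * (np.log(probs + eps)
                         - np.log(p_marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))                   # exp(E_x KL(P_F(x) || marginal))

# Illustrative usage: 1000 generated clips scored over 10 style classes.
rng = np.random.default_rng(0)
fake_probs = rng.dirichlet(np.ones(10), size=1000)
print(inception_score(fake_probs))
```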
4.1.3 Implementation Details This work uses a UNet architecture for the diffusion model consisting of 14 layers of stacked convolution blocks and attention blocks, combined for upsampling and downsampling, based on the work of Robin Rombach et al. [11]. A downsampling factor of 4 was used. The same hidden size and skip-connection layers were set between the layers of the UNet. The first six layers of the UNet use 512 input and output channels, followed by layers with 256 and 128 channels, respectively; after that, the number of channels is halved layer by layer. The attention mechanism is applied at the 16 \u00d7 16, 8 \u00d7 8, and 4 \u00d7 4 resolutions. A ResBlock is also added to the UNet module; it receives two inputs, the image x and the embedding corresponding to the timestep, where the timestep embedding consists of two linear layers applied to the positional encoding of t. Our compression ratio for the latent space is 64. The audio samples were sampled at 16,000 Hz, with 2 channels and an amplitude of -10 dB. The model was trained with the Adam optimizer for 500k steps, a learning rate of 5e-5, and a batch size of 100. The batch size for GuideDiff was set to 256, and it was trained for approximately 1M steps with the Adam optimizer. The experiments in this research generated audio in less than 5 seconds, which can be regarded as real-time generation, and the models were trained on three NVIDIA RTX 3090 Ti GPUs, a GPU capable of running 50 stable diffusion steps. 4.2 Experimental Analysis Four primary musical style transfer tasks were considered in the trials: 1. Stylistic transfer of instrument timbres, mainly piano-to-guitar (p2g) and piano-to-violin (p2v) conversions; each conversion is performed in both directions. 2. Transfer of musical genres, mainly jazz-to-pop (j2p) and jazz-to-classical (j2c) conversions; each conversion is performed in both directions. 3. Composition style conversion, mainly Beethoven-to-Chopin (B2C) and Chopin-to-Beethoven (C2B) conversions. 4. Many-to-many style conversion, i.e., converting classical piano pieces played mainly by Beethoven to jazz violin in the Chopin style (Bcp2Cjv). Figure 6. Style transitions for various tasks. The spectrograms of our model\u2019s inputs and outputs for the various tasks are shown in Figure 6. From the plots, it is clear that the target domain is shifted while the content is kept intact. 4.2.1 Style Conversion Evaluation This study evaluates the proposed model on four different style transfer challenges. Subjective (MOS) and objective (FAD, accuracy) evaluations are used to compare the style conversions, as each score has its own limitations. Subjective measurements evaluate three main aspects of the model on a 5-point scale: the success of style transfer (ST), content preservation (CP), and sound quality (SQ). Objective evaluations use FAD to measure individual aspects of the conversion, and accuracy to evaluate the accuracy of the style transfer. Subjective evaluation. Mean opinion scores (MOS) were collected from 200 testers in a listening test; the testers included both music lovers and non-musicians. In each round of scoring, testers first listened to the original audio clip and then to the style-transferred version. The results in Table 1 indicate that our model performs best on the piano2violin task, which may be attributed to the relatively simple timbre conversion of a single instrument. Table 1. 5-scale MOS for style transfer (ST / CP / SQ): piano2violin 4.27 / 4.13 / 4.3; piano2guitar 4.02 / 4.05 / 4.2; jazz2pop 3.95 / 3.8 / 4.0; jazz2class 3.96 / 4.0 / 4.12; Beethoven2Chopin 4.05 / 4.1 / 4.15; Bcp2Cjv 4.1 / 4.23 / 4.3.
Our model\u2019s performance is slightly lower on the jazz2pop and jazz2class tasks, but it still achieves scores close to 4 for style transfer success and content retention, suggesting that our model is relatively successful at genre conversion. Additionally, the high sound-quality scores on all six tasks indicate that the proposed model is capable of generating high-quality music. Objective evaluation. This measures how well the converted version matches the original version and the accuracy of the style transfer. Table 2. FAD (lower is better) and style-transfer accuracy (higher is better) per task: piano2violin 7.52, 94.5%; piano2guitar 6.95, 93.4%; jazz2pop 11.76, 86.2%; jazz2class 10.55, 87.2%; Beethoven2Chopin 6.19, 95.3%; Bcp2Cjv 6.07, 95.7%. The accuracy of style transfer between the audio produced for each task and the original audio is presented in Table 2, along with the FAD results. The results indicate good performance on instrument timbre transfer and composition style transfer, but poorer performance on genre transfer, which is consistent with the subjective evaluation; this is an area that requires improvement in future research. 4.2.2 Comparison with Other Models Our model was compared against a number of baseline models, including CycleGAN [2], UNIT [12], musicVAE [13], and an autoencoder [26], to show the validity of the model described in this work. Table 3 presents the outcomes. Note that these baseline style transfer models are all one-to-one mappings. In this work, the input transfers use the same music clip, and the baselines are trained independently; only the spectrogram form is considered as the intermediate representation of the music, and the same model, GuideDiff, is used for waveform generation. Table 3. MOS against the baseline models (tasks p2v / p2g / j2p / j2c / B2C): CycleGAN 3.98 / 3.96 / 4.17 / 4.12 / 4.0; UNIT 3.7 / 3.75 / 3.5 / 3.62 / 3.71; musicVAE 3.86 / 3.91 / 3.7 / 3.68 / 3.89; autoencoder 3.5 / 3.56 / 3.4 / 3.45 / 3.52; ours 4.23 / 4.09 / 3.91 / 4.02 / 4.07. The comparison with the baseline models indicates that CycleGAN performs best on genre migration, which may be related to the fact that its cycle-consistency loss directly matches target domains at the feature level; however, our model\u2019s result is only about 0.1 points lower than the best. Additionally, our model outperforms the other baselines on instrument timbre and composition style transfer. Therefore, it can be concluded that the proposed model demonstrates superior performance in flexible many-to-many musical style transfer compared to the other baseline models. 4.2.3 Evaluation of the Audio Generation Model To demonstrate the performance and high-quality audio generation capabilities of GuideDiff, the proposed audio generation model, comparisons are made against WaveNet [16], WaveRNN [17], and WaveGAN [27]. All models use the same training set and are tested with the same spectrograms to generate audio. Both subjective and objective evaluations are used to assess the quality of the generated audio; for the subjective evaluation, testers rate the audio quality on a scale of 1 to 5. The results are presented in Table 4. Table 4. Comparison of audio generation models (MOS higher is better, IS higher is better): WaveNet 3.02, 2.84; WaveGAN 3.82, 4.53; WaveRNN 4.40, 5.38; GuideDiff 4.41, 5.40.
The comparison demonstrates that our model performs on par with the autoregressive model WaveRNN and surpasses the other baseline models. This suggests that the proposed model has excellent performance in producing high-quality audio. 5. CONCLUSIONS In this work, we have designed an efficient DM-based framework for music style transfer. A latent layer was introduced into the framework, which effectively reduces the dimensionality of the data, and a cross-attention mechanism is added at the latent layer; the transfer of styles is achieved by adding seed conditions that guide and complete the generation of transformations in the target domain. For audio generation, this study proposes GuideDiff, a DM-based method for generating waveform audio. The method compresses the spectrogram into latent space via an encoder and transfers it to the U-Net; the latent signal is then decoded back into the waveform using diffusion guidance. The experimental results demonstrate that the proposed model achieves many-to-many style transfer and generates high-quality music compared with previous approaches. Additionally, the model can perform style transfer and generate high-quality audio in real time on a consumer-grade GPU. Given the excellent performance of this model, future work will use it to explore text-to-music generation." + } + ] +} \ No newline at end of file