| { |
| "url": "http://arxiv.org/abs/2404.16306v1", |
| "title": "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models", |
| "abstract": "Text-conditioned image-to-video generation (TI2V) aims to synthesize a\nrealistic video starting from a given image (e.g., a woman's photo) and a text\ndescription (e.g., \"a woman is drinking water.\"). Existing TI2V frameworks\noften require costly training on video-text datasets and specific model designs\nfor text and image conditioning. In this paper, we propose TI2V-Zero, a\nzero-shot, tuning-free method that empowers a pretrained text-to-video (T2V)\ndiffusion model to be conditioned on a provided image, enabling TI2V generation\nwithout any optimization, fine-tuning, or introducing external modules. Our\napproach leverages a pretrained T2V diffusion foundation model as the\ngenerative prior. To guide video generation with the additional image input, we\npropose a \"repeat-and-slide\" strategy that modulates the reverse denoising\nprocess, allowing the frozen diffusion model to synthesize a video\nframe-by-frame starting from the provided image. To ensure temporal continuity,\nwe employ a DDPM inversion strategy to initialize Gaussian noise for each newly\nsynthesized frame and a resampling technique to help preserve visual details.\nWe conduct comprehensive experiments on both domain-specific and open-domain\ndatasets, where TI2V-Zero consistently outperforms a recent open-domain TI2V\nmodel. Furthermore, we show that TI2V-Zero can seamlessly extend to other tasks\nsuch as video infilling and prediction when provided with more images. Its\nautoregressive design also supports long video generation.", |
| "authors": "Haomiao Ni, Bernhard Egger, Suhas Lohit, Anoop Cherian, Ye Wang, Toshiaki Koike-Akino, Sharon X. Huang, Tim K. Marks", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models", |
| "main_content": "Introduction

Image-to-video (I2V) generation is an appealing topic with various applications, including artistic creation, entertainment, and data augmentation for machine learning [39]. Given a single image x0 and a text prompt y, text-conditioned image-to-video (TI2V) generation aims to synthesize M new frames to yield a realistic video, \u02c6x = \u27e8x0, \u02c6x1, . . . , \u02c6xM\u27e9, starting from the given frame x0 and satisfying the text description y. (*Work done during an internship at MERL.)

[Figure 1. Examples of generated video frames using our proposed TI2V-Zero, for the prompts \u201cA man with the expression of slight happiness on his face.\u201d, \u201cA person is drumming.\u201d, and \u201cA serene mountain cabin covered in a fresh blanket of snow.\u201d The given first image x0 is highlighted with the red box, and the text condition y is shown under each row of the video. The remaining columns show the 6th, 11th, and 16th frames of the generated output videos. Each generated video has 16 frames with a resolution of 256 \u00d7 256.]

Current TI2V generation methods [59, 63, 70] typically rely on computationally heavy training on video-text datasets and specific architecture designs to enable text and image conditioning. Some [12, 25] are constrained to specific domains due to the lack of training with large-scale open-domain datasets. Other approaches, such as [14, 67], utilize pretrained foundation models to reduce training costs, but they still need to train additional modules using video data. In this paper, we propose TI2V-Zero, which achieves zero-shot TI2V generation using only an open-domain pretrained text-to-video (T2V) latent diffusion model [60]. Here \u201czero-shot\u201d means that when using the diffusion model (DM) that was trained only for text conditioning, our framework enables image conditioning without any optimization, fine-tuning, or introduction of additional modules.
Specifically, we guide the generation process by incorporating the provided image x0 into the output latent code at each reverse denoising step. To ensure that the temporal attention layers of the pretrained DM focus on information from the given image, we propose a \u201crepeat-and-slide\u201d strategy to synthesize the video in a frame-by-frame manner, rather than directly generating the entire video volume. Notably, TI2V-Zero is not trained for the specific domain of the provided image, thus allowing the model to generalize to any image during inference. Additionally, its autoregressive generation makes the synthesis of long videos possible. While the standard denoising sampling process starting with randomly initialized Gaussian noise can produce matching semantics, it often results in temporally inconsistent videos. Therefore, we introduce an inversion strategy based on the DDPM [20] forward process, to provide a more suitable initial noise for generating each new frame. We also apply a resampling technique [33] in the video DM to help preserve the generated visual details. Our approach ensures that the network maintains temporal consistency, generating visually convincing videos conditioned on the given starting image (see Fig. 1). We conduct extensive experiments on MUG [1], UCF101 [56], and a new open-domain dataset. In these experiments, TI2V-Zero consistently performs well, outperforming a state-of-the-art model [67] that was based on a video diffusion foundation model [8] and was specifically trained to enable open-domain TI2V generation.

2. Related Work

2.1. Conditional Image-to-Video Generation

Conditional video generation aims to synthesize videos guided by user-provided signals. It can be classified according to which type(s) of conditions are given, such as text-to-video (T2V) generation [5, 16, 21, 23, 31, 65], video-to-video (V2V) generation [7, 38, 40, 45, 61, 64], and image-to-video (I2V) generation [4, 10, 25, 34, 39, 69].
Here we discuss previous text-conditioned image-to-video (TI2V) generation methods [12, 14, 22, 44, 63, 70]. Hu et al. [25] introduced MAGE, a TI2V generator that integrates a motion anchor structure to store appearance-motion-aligned representations through three-dimensional axial transformers. Yin et al. [70] proposed DragNUWA, a diffusion-based model capable of generating videos controlled by text, image, and trajectory information with three modules including a trajectory sampler, a multi-scale fusion, and an adaptive training strategy. However, these TI2V frameworks require computationally expensive training on video-text datasets and a particular model design to support text-and-image-conditioned training. In contrast, our proposed TI2V-Zero leverages a pretrained T2V diffusion model to achieve zero-shot TI2V generation without additional optimization or fine-tuning, making it suitable for a wide range of applications.

2.2. Adaptation of Diffusion Foundation Models

Due to the recent successful application of diffusion models (DM) [20, 42, 47, 54, 55] to both image and video generation, visual diffusion foundation models have gained prominence. These include text-to-image (T2I) models such as Imagen [50] and Stable Diffusion [47], as well as text-to-video (T2V) models such as ModelScopeT2V [60] and VideoCrafter1 [8]. These models are trained with large-scale open-domain datasets, often including LAION-400M [52] and WebVid-10M [2]. They have shown immense potential for adapting their acquired knowledge base to address a wide range of downstream tasks, thereby reducing or eliminating the need for extensive labeled data. For example, previous works have explored the application of large T2I models to personalized image generation [13, 49], image editing [17, 33, 35\u201337], image segmentation [3, 68], video editing [45, 62], and video generation [14, 27, 53, 66]. In contrast to T2I models, there are fewer works on the adaptation of large-scale T2V models. Xing et al.
[67] proposed DynamiCrafter for open-domain TI2V generation by adapting a T2V foundation model [8]. To control the generative process, they first employed a learnable image encoding network to project the given image into a text-aligned image embedding space. Subsequently, they utilized dual cross-attention layers to fuse text and image information and also concatenated the image with the initial noise to provide the video DM with more precise image details. In contrast, in this paper we explore how to inject the provided image to guide the DM sampling process based solely on the pretrained T2V model itself, with no additional training for the new TI2V task.

3. Methodology

Given one starting image x0 and text y, let x = \u27e8x0, x1, . . . , xM\u27e9 represent a real video corresponding to text y. The objective of text-conditioned image-to-video (TI2V) generation is to synthesize a video \u02c6x = \u27e8x0, \u02c6x1, . . . , \u02c6xM\u27e9, such that the conditional distribution of \u02c6x given x0 and y is identical to the conditional distribution of x given x0 and y, i.e., p(\u02c6x|x0, y) = p(x|x0, y). Our proposed TI2V-Zero can be built on a pretrained T2V diffusion model with a 3D-UNet-based denoising network. Here we choose ModelScopeT2V [60] as backbone due to its promising open-domain T2V generation ability. Below, we first introduce preliminaries about diffusion models, then introduce the architecture of the pretrained T2V model, and finally present the details of our TI2V-Zero.

[Figure 2. Illustration of the process of applying TI2V-Zero to generate the new frame \u02c6x_{i+1}, given the starting image x0 and text y. TI2V-Zero is built upon a frozen pretrained T2V diffusion model, including frame encoder E, frame decoder D, and the denoising U-Net \u03f5\u03b8. At the beginning of generation (i = 0), we encode x0 as z0 and repeat it K times to form the queue s_0. We then apply DDPM-based inversion to s_0 to produce the initial Gaussian noise \u02c6z_T. Subsequently, in each reverse denoising step using U-Net \u03f5\u03b8, we keep replacing the first K frames of \u02c6z_t with the noisy latent code s_t derived from s_0. Resampling is also applied within each step to improve motion coherence. We finally decode the final frame of the clean latent code \u02c6z_0 as the new synthesized frame \u02c6x_{i+1}. To compute the new s_0 for the next iteration of generation (i > 0), we perform a sliding operation by dequeuing s_0^0 and enqueuing \u02c6z_0^K within s_0.]

3.1. Preliminaries: Diffusion Models

Diffusion Models (DM) [20, 54, 55] are probabilistic models designed to learn a data distribution.
Here we introduce the fundamental concepts of Denoising Diffusion Probabilistic Models (DDPM). Given a sample from the data distribution z0 \u223c q(z0), the forward diffusion process of a DM produces a Markov chain z1, . . . , zT by iteratively adding Gaussian noise to z0 according to a variance schedule \u03b21, . . . , \u03b2T, that is:

q(z_t | z_{t-1}) = N(z_t; \u221a(1 \u2212 \u03b2_t) z_{t-1}, \u03b2_t I) ,  (1)

where the variances \u03b2_t are constant. When the \u03b2_t are small, the posterior q(z_{t-1} | z_t) can be well approximated by a diagonal Gaussian [41, 54]. Furthermore, if the length of the chain, denoted by T, is sufficiently large, z_T can be well approximated by a standard Gaussian distribution N(0, I). These suggest that the true posterior q(z_{t-1} | z_t) can be estimated by p\u03b8(z_{t-1} | z_t) defined as:

p\u03b8(z_{t-1} | z_t) = N(z_{t-1}; \u00b5\u03b8(z_t), \u03c3_t^2 I) ,  (2)

where the variances \u03c3_t are also constants. The reverse denoising process in the DM (also termed sampling) then generates samples z0 \u223c p\u03b8(z0) by starting with Gaussian noise z_T \u223c N(0, I) and gradually reducing noise in a Markov chain z_{T-1}, z_{T-2}, . . . , z0 using a learned p\u03b8(z_{t-1} | z_t). To learn p\u03b8(z_{t-1} | z_t), Gaussian noise \u03f5 is first added to z0 to generate samples z_t. Utilizing the independence property of the noise added at each forward step in Eq. (1), we can define the cumulative signal coefficient \u00af\u03b1_t = \u220f_{i=1}^{t} (1 \u2212 \u03b2_i) and transform z0 to z_t in a single step:

q(z_t | z0) = N(z_t; \u221a\u00af\u03b1_t z0, (1 \u2212 \u00af\u03b1_t) I) .  (3)

Then a model \u03f5\u03b8 is trained to predict \u03f5 using the following mean-squared error loss:

L = E_{t \u223c U(1,T), z0 \u223c q(z0), \u03f5 \u223c N(0,I)} [ ||\u03f5 \u2212 \u03f5\u03b8(z_t, t)||^2 ] ,  (4)

where the diffusion step t is uniformly sampled from {1, . . . , T}. Then \u00b5\u03b8(z_t) in Eq. (2) can be derived from \u03f5\u03b8(z_t, t) to model p\u03b8(z_{t-1} | z_t) [20].
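The forward process above can be sketched numerically; the following is a minimal NumPy illustration of Eqs. (1) and (3), assuming a linear variance schedule (the schedule and all function names are ours for illustration, not ModelScopeT2V's API):

```python
import numpy as np

def forward_step(z_prev, beta_t, rng):
    # Eq. (1): q(z_t | z_{t-1}) = N(sqrt(1 - beta_t) z_{t-1}, beta_t I)
    return np.sqrt(1.0 - beta_t) * z_prev + np.sqrt(beta_t) * rng.standard_normal(z_prev.shape)

def forward_jump(z0, betas, t, rng):
    # Eq. (3): q(z_t | z_0) = N(sqrt(abar_t) z_0, (1 - abar_t) I),
    # with abar_t = prod_{i=1..t} (1 - beta_i)
    abar_t = np.prod(1.0 - betas[:t])
    return np.sqrt(abar_t) * z0 + np.sqrt(1.0 - abar_t) * rng.standard_normal(z0.shape)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # assumed linear schedule
z0 = rng.standard_normal((4, 4))             # toy "latent frame"
z_one = forward_step(z0, betas[0], rng)      # single step of Eq. (1)
z_half = forward_jump(z0, betas, 500, rng)   # jump straight to step 500
z_full = forward_jump(z0, betas, 1000, rng)  # nearly standard Gaussian
```

Because the cumulative coefficient decays toward zero, `z_full` is close to a standard Gaussian sample, which is what makes starting the reverse process from N(0, I) valid.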
The denoising model \u03f5\u03b8 is implemented using a time-conditioned U-Net [48] with residual blocks [15] and self-attention layers [58]. The diffusion step t is specified to \u03f5\u03b8 by a sinusoidal position embedding [58]. Conditional generation that samples z0 \u223c p\u03b8(z0 | y) can be achieved by learning a y-conditioned model \u03f5\u03b8(z_t, t, y) [41, 47] with classifier-free guidance [19]. During training, the condition y in \u03f5\u03b8(z_t, t, y) is replaced by a null label \u2205 with a fixed probability. When sampling, the output is generated as follows:

\u02c6\u03f5\u03b8(z_t, t, y) = \u03f5\u03b8(z_t, t, \u2205) + g \u00b7 (\u03f5\u03b8(z_t, t, y) \u2212 \u03f5\u03b8(z_t, t, \u2205)) ,  (5)

where g is the guidance scale.

3.2. Architecture of Pretrained T2V Model

TI2V-Zero can be built upon a pretrained T2V diffusion model with a 3D-UNet-based denoising network. Here we choose ModelScopeT2V [60] as the pretrained model (denoted M). We now describe this T2V model in detail.

Structure Overview. Given a text prompt y, the T2V model M synthesizes a video \u02c6x = \u27e8\u02c6x0, \u02c6x1, . . . , \u02c6xK\u27e9 with a pre-defined length of (K + 1) frames using a latent video diffusion model. Similar to Latent Diffusion Models (LDM) [47], M incorporates a frame auto-encoder [11, 28] for the conversion of data between pixel space X and latent space Z through its encoder E and decoder D. Given the real video x = \u27e8x0, x1, . . . , xK\u27e9, M first utilizes the frame encoder E to encode the video x as z = \u27e8z0, z1, . . . , zK\u27e9. Here the sizes of a pixel frame x and a latent frame z are Hx \u00d7 Wx \u00d7 3 and Hz \u00d7 Wz \u00d7 Cz, respectively. To be consistent with the notation used for the DM, we denote the clean video latent z = z_0 = \u27e8z_0^0, z_0^1, . . . , z_0^K\u27e9, where the subscript indicates the diffusion step and the superscript indexes the frame. M then learns a DM on the latent space Z through a 3D denoising U-Net \u03f5\u03b8 [9]. Let z_t = \u27e8z_t^0, z_t^1, . . . , z_t^K\u27e9 represent the latent sequence that results from adding noise over t steps to the original latent sequence z_0.

Algorithm 1: Generation using our TI2V-Zero approach.
Input: The starting frame x0; the text prompt y; the pretrained T2V model M for generating (K + 1)-frame videos, including frame encoder E, frame decoder D, and the DM denoising network \u03f5\u03b8; the iteration number U for resampling; the parameter M to control the length of the output video.
Output: A synthesized video \u02c6x with (M + 1) frames.
1: z0 \u2190 E(x0)  // Encode x0
2: s_0 \u2190 \u27e8z0, z0, \u00b7\u00b7\u00b7, z0\u27e9  // Repeat z0 K times
3: \u02c6x \u2190 \u27e8x0\u27e9
4: for i = 1, 2, \u00b7\u00b7\u00b7, M do  // Generate one new frame \u02c6x_i
5:   s_T \u223c N(\u221a\u00af\u03b1_T s_0, (1 \u2212 \u00af\u03b1_T) I)  // DDPM inversion
6:   \u02c6z_T^K \u223c N(\u221a\u00af\u03b1_T s_0^{K-1}, (1 \u2212 \u00af\u03b1_T) I)
7:   \u02c6z_T \u2190 s_T \u222a \u02c6z_T^K  // Initialize \u02c6z_T
8:   for t = T \u2212 1, \u00b7\u00b7\u00b7, 2, 1 do
9:     s_t \u223c N(\u221a\u00af\u03b1_t s_0, (1 \u2212 \u00af\u03b1_t) I)
10:    for u = 1, 2, \u00b7\u00b7\u00b7, U do
11:      \u27e8\u02c6z_t^0, \u02c6z_t^1, \u00b7\u00b7\u00b7, \u02c6z_t^{K-1}\u27e9 \u2190 s_t  // Replace
12:      \u02c6z_{t-1} \u223c N(\u00b5\u03b8(\u02c6z_t, y), \u03c3_t^2 I)
13:      if u < U and t > 1 then
14:        \u02c6z_t \u223c N(\u221a(1 \u2212 \u03b2_t) \u02c6z_{t-1}, \u03b2_t I)  // Resample
15:      end if
16:    end for
17:  end for
18:  s_0 \u2190 \u27e8s_0^1, s_0^2, \u00b7\u00b7\u00b7, s_0^{K-1}\u27e9 \u222a \u02c6z_0^K  // Slide
19:  \u02c6x_i \u2190 D(\u02c6z_0^K)  // Decode \u02c6z_0^K
20:  \u02c6x \u2190 \u02c6x \u222a \u02c6x_i
21: end for
22: return \u02c6x
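As a concrete reading of Algorithm 1, the following NumPy sketch mirrors its control flow in latent space, with the encoder, decoder, and text-conditioned U-Net stubbed out: `denoise_mean` stands in for \u00b5\u03b8(\u00b7, y), and \u03c3_t is taken as \u221a\u03b2_t for simplicity. All names are illustrative assumptions, not the released implementation.

```python
import numpy as np

def ti2v_zero_latents(z0, denoise_mean, betas, K, M, U, rng):
    """Latent-space sketch of Algorithm 1. z0 is the encoded starting frame;
    denoise_mean(z_hat, t) is a stand-in for the pretrained U-Net's posterior
    mean mu_theta(z_hat_t, y). Returns M newly generated frame latents."""
    T = len(betas)
    abar = np.cumprod(1.0 - betas)  # abar[t-1] = alpha_bar_t

    def noisy(x, t):  # Eq. (3): jump from a clean latent to diffusion step t
        return np.sqrt(abar[t - 1]) * x + np.sqrt(1.0 - abar[t - 1]) * rng.standard_normal(x.shape)

    s0 = np.repeat(z0[None], K, axis=0)              # repeat z0 K times (queue)
    new_frames = []
    for _ in range(M):                               # one new frame per sampling pass
        sT = noisy(s0, T)                            # DDPM inversion of the queue
        z_hat = np.concatenate([sT, noisy(s0[-1:], T)])  # frame K seeded from frame K-1
        for t in range(T - 1, 0, -1):
            st = noisy(s0, t)
            for u in range(U):
                z_hat[:K] = st                       # replace first K frame latents
                z_hat = denoise_mean(z_hat, t) + np.sqrt(betas[t - 1]) * rng.standard_normal(z_hat.shape)
                if u < U - 1 and t > 1:              # resample: re-noise one step
                    z_hat = np.sqrt(1.0 - betas[t - 1]) * z_hat + np.sqrt(betas[t - 1]) * rng.standard_normal(z_hat.shape)
        s0 = np.concatenate([s0[1:], z_hat[K:]])     # slide: dequeue oldest, enqueue new
        new_frames.append(z_hat[K])
    return new_frames

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 8)                   # tiny schedule for illustration
frames = ti2v_zero_latents(np.zeros((2, 2)), lambda z, t: 0.9 * z, betas, K=3, M=2, U=2, rng=rng)
```

With a real backbone, `denoise_mean` would encapsulate the classifier-free-guided U-Net of Eq. (5), and each returned latent would be passed through the decoder D.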
When training, the forward diffusion process of a DM transforms the initial latent sequence z_0 into z_T by iteratively adding Gaussian noise \u03f5 for T steps. During inference, the denoising U-Net \u03f5\u03b8 predicts the added noise at each step, enabling the generation of the clean latent sequence \u02c6z_0 = \u27e8\u02c6z_0^0, \u02c6z_0^1, . . . , \u02c6z_0^K\u27e9 starting from randomly sampled Gaussian noise z_T \u223c N(0, I).

Text Conditioning Mechanism. M employs a cross-attention mechanism [47] to incorporate text information into the generative process as guidance. Specifically, M uses a pretrained CLIP model [46] to encode the prompt y as the text embedding e. The embedding e is later used as the key and value in the multi-head attention layer within the spatial attention blocks, thus enabling the integration of text features with the intermediate U-Net features in \u03f5\u03b8.

Denoising U-Net. The denoising U-Net \u03f5\u03b8 includes four key building blocks: the initial block, the downsampling block, the spatio-temporal block, and the upsampling block. The initial block transfers the input into the embedding space, while the downsampling and upsampling blocks are responsible for spatially downsampling and upsampling the feature maps. The spatio-temporal block is designed to capture spatial and temporal dependencies in the latent space, and comprises 2D spatial convolution, 1D temporal convolution, 2D spatial attention, and 1D temporal attention.

[Figure 3. Illustration of the motivation behind our framework, using the prompt \u201cA person is riding horse.\u201d We explore the application of a replacing-based baseline approach (rows 2\u20134, labeled \u201cReplacing\u201d) and our TI2V-Zero (rows 5\u20136, labeled \u201cTI2V-Zero\u201d) in various video generation tasks: TI2V generation, video infilling, and single-frame prediction. The given real frames for each task are highlighted by red boxes and the text input is shown under the block. The replacing-based approach is only effective at predicting a single frame when all the other frames in the video are provided, while TI2V-Zero generates temporally coherent videos for both the TI2V and video infilling tasks.]

3.3. Our Framework

Leveraging the pretrained T2V foundation model M, we first propose a straightforward replacing-based baseline for adapting M to TI2V generation. We then analyze the possible reasons why it fails and introduce our TI2V-Zero framework, which includes a repeat-and-slide strategy, DDPM-based inversion, and resampling. Figure 2 and Algorithm 1 demonstrate the inference process of TI2V-Zero.

Replacing-based Baseline. We assume that the pretrained model M is designed to generate videos with a fixed length of (K + 1). So we first consider synthesizing videos with that same length (K + 1), i.e., M = K. Since the DM process operates within the latent space Z, we use the encoder E to map the given starting frame x0 into the latent representation z0. Additionally, we denote z0 = z_0^0 to specify that the latent is clean and corresponds to diffusion step 0 of the DM. Note that each reverse denoising step in Eq. (2) from \u02c6z_t to \u02c6z_{t-1} depends solely on \u02c6z_t = \u27e8\u02c6z_t^0, \u02c6z_t^1, . . . , \u02c6z_t^K\u27e9. To ensure that the first frame of the final synthesized clean video latent \u02c6z_0 = \u27e8\u02c6z_0^0, \u02c6z_0^1, . . . , \u02c6z_0^K\u27e9 at step 0 matches the provided image latent, i.e., \u02c6z_0^0 = z_0^0, we can modify the first generated latent \u02c6z_t^0 of \u02c6z_t at each reverse step, as long as the signal-to-noise ratio of each frame latent in \u02c6z_t remains consistent. Using Eq. (3), we can add t steps of noise to the provided image latent z_0^0, allowing us to sample z_t^0 through a single-step calculation. By replacing the first generated latent \u02c6z_t^0 with the noisy image latent z_t^0 at each reverse denoising step, we might expect that the video generation process can be guided by z_0^0 with the following expressions defined for each reverse step:

z_t^0 \u223c N(\u221a\u00af\u03b1_t z_0^0, (1 \u2212 \u00af\u03b1_t) I) ,  (6a)
\u02c6z_t^0 \u2190 z_t^0 ,  (6b)
\u02c6z_{t-1} \u223c N(\u00b5\u03b8(\u02c6z_t, y), \u03c3_t^2 I) .  (6c)

Specifically, in each reverse step from \u02c6z_t to \u02c6z_{t-1}, as shown in Eq. (6a), we first compute the noisy latent z_t^0 by adding Gaussian noise to the given image latent z_0^0 over t steps. Then, we replace the first latent \u02c6z_t^0 of \u02c6z_t with z_t^0 in Eq. (6b) to incorporate the provided image into the generation process. Finally, in Eq. (6c), we pass \u02c6z_t through the denoising network to generate \u02c6z_{t-1}, where the text y is integrated by classifier-free guidance (Eq. (5)). After T iterations, the final clean latent \u02c6z_0 at diffusion step 0 can be mapped back into the image space X using the decoder D. Using this replacing-based baseline, we might expect that the temporal attention layers in \u03f5\u03b8 can utilize the context provided by the first frame latent \u02c6z_t^0 to generate the subsequent frame latents in a manner that harmonizes with \u02c6z_t^0. However, as shown in Fig. 3, row 2, this replacing-based approach fails to produce a video that is temporally consistent with the first image. The generated frames are consistent with each other, but not with the provided first frame.
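In code form, one reverse step of this baseline (Eqs. (6a)\u2013(6c)) can be sketched as follows, with `mu_theta` standing in for the U-Net's text-conditioned posterior mean and \u03c3_t approximated by \u221a\u03b2_t (illustrative names and simplifications, not the authors' implementation):

```python
import numpy as np

def replacing_baseline_step(z_hat_t, z0_frame, t, betas, mu_theta, rng):
    """One reverse step of the replacing-based baseline.
    z_hat_t: current noisy latent video, shape (K+1, ...);
    z0_frame: clean latent of the provided first frame."""
    abar_t = np.prod(1.0 - betas[:t])
    # (6a): diffuse the clean first-frame latent to step t in one shot
    z_t0 = np.sqrt(abar_t) * z0_frame + np.sqrt(1.0 - abar_t) * rng.standard_normal(z0_frame.shape)
    # (6b): overwrite the first generated latent with the noisy image latent
    z_hat_t = z_hat_t.copy()
    z_hat_t[0] = z_t0
    # (6c): one denoising step toward z_hat_{t-1}
    return mu_theta(z_hat_t, t) + np.sqrt(betas[t - 1]) * rng.standard_normal(z_hat_t.shape)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)
z_hat = rng.standard_normal((5, 2, 2))   # K+1 = 5 noisy frame latents
z_prev = replacing_baseline_step(z_hat, np.zeros((2, 2)), 10, betas, lambda z, t: 0.9 * z, rng)
```

Note that only frame 0 is constrained; frames 1..K are still produced entirely by the denoiser, which is exactly why this baseline drifts away from the provided image.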
To analyze possible reasons for failure, we apply this baseline to a simpler video infilling task, where every other frame is provided and the model needs to predict the interspersed frames. In this case, the baseline replaces the generated frame latents at positions corresponding to real frames with noisy provided-frame latents in each reverse step. The resulting video, in Fig. 3, row 3, looks like a combination of two independent videos: the generated (even) frames are consistent with each other but not with the provided (odd) frames. We speculate that this may result from the intrinsic dissimilarity between frame latents derived from the given real images and those sampled from \u03f5\u03b8. Thus, the temporal attention values between frame latents sampled in the same way (both from the given images or both from \u03f5\u03b8) will be higher, while the attention values between frame latents sampled in different ways (one from the given image and the other from \u03f5\u03b8) will be lower. Therefore, the temporal attention layers of M tend to utilize the information from latents produced by \u03f5\u03b8 to synthesize new frames at each reverse step, ignoring the provided frames.

[Figure 4. Qualitative ablation study comparing different sampling strategies for our TI2V-Zero on MUG, using the prompt \u201cA woman with the expression of slight sadness on her face.\u201d Rows compare the ground truth with w/o Inversion (DDIM=10, Resample=0), w/ Inversion (DDIM=10, Resample=0), w/ Inversion (DDIM=50, Resample=0), w/ Inversion (DDIM=10, Resample=2), and w/ Inversion (DDIM=10, Resample=4). The first image \u02c6x0 is highlighted with the red box and the text y is shown under the block. The 1st, 6th, 11th, and 16th frames of the videos are shown in each column. The terms Inversion, DDIM, and Resample denote the application of DDPM inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively.]
We further simplify the task to single-frame prediction, where the model only needs to predict a single frame when all the other frames in the video are given. In this setting, all the frame latents except for the final frame are replaced by noisy provided-frame latents in each reverse step. Thus, the temporal attention layers can only use information from the real frames. In this case, Fig. 3, row 4, shows that the baseline can now generate a final frame that is consistent with the previous frames.

Repeat-and-Slide Strategy. Inspired by the observation in Fig. 3, to guarantee that the temporal attention layers of M depend solely on the given image, we make two major changes to the proposed replacing-based baseline: (1) instead of using M to directly synthesize the entire (K + 1)-frame video, we switch to a frame-by-frame generation approach, i.e., we generate only one new frame latent in each complete DM sampling process; (2) for each sampling process generating the new frame latent, we ensure that only one frame latent is produced from \u03f5\u03b8, while the other K frame latents are derived from the given real image and previously synthesized frames, thereby forcing the temporal attention layers to only use the information from these frame latents. Specifically, we construct a queue of K frame latents, denoted as s_0 = \u27e8s_0^0, s_0^1, \u00b7\u00b7\u00b7, s_0^{K-1}\u27e9. We also define s_t = \u27e8s_t^0, s_t^1, \u00b7\u00b7\u00b7, s_t^{K-1}\u27e9, which is obtained by adding t steps of Gaussian noise to the clean s_0. Similar to our replacing-based baseline in the single-frame prediction task, in each reverse step from \u02c6z_t to \u02c6z_{t-1}, we replace the first K frame latents in \u02c6z_t by s_t. Consequently, the temporal attention layers have to utilize information from s_0 to synthesize the new frame\u2019s latent, \u02c6z_0^K.
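The conditioning queue itself is simple to realize; a minimal sketch using a fixed-length deque (helper names are ours, not from the paper's code):

```python
from collections import deque

import numpy as np

def init_queue(z0, K):
    # "repeat": fill the queue s_0 with K copies of the starting latent z0
    return deque((z0.copy() for _ in range(K)), maxlen=K)

def slide(queue, new_latent):
    # "slide": appending with maxlen=K dequeues s_0^0 and enqueues z_hat_0^K
    queue.append(new_latent)
    return queue

s0 = init_queue(np.zeros((2, 2)), K=4)
slide(s0, np.ones((2, 2)))   # after one pass, the newest latent joins the queue
```

Because `maxlen=K` is fixed, every completed sampling pass discards the oldest conditioning latent automatically, which is exactly the dequeue/enqueue behavior described above.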
Considering that only one starting image latent z0 is provided, we propose a \u201crepeat-and-slide\u201d strategy to construct s_0. At the beginning of video generation, we repeat z0 for K frames to form s_0, and gradually perform a sliding operation within the queue s_0 by dequeuing the first frame latent s_0^0 and enqueuing the newly generated latent \u02c6z_0^K after each complete DM sampling process. Note that though the initial s_0 is created by repeating z0, the noise added to get s_t is different for each frame\u2019s latent in s_t, thus ensuring diversity. The following expressions define one reverse step in the DM sampling process:

s_t \u223c N(\u221a\u00af\u03b1_t s_0, (1 \u2212 \u00af\u03b1_t) I) ,  (7a)
\u27e8\u02c6z_t^0, \u02c6z_t^1, \u00b7\u00b7\u00b7, \u02c6z_t^{K-1}\u27e9 \u2190 s_t ,  (7b)
\u02c6z_{t-1} \u223c N(\u00b5\u03b8(\u02c6z_t, y), \u03c3_t^2 I) .  (7c)

Specifically, in each reverse denoising step from \u02c6z_t to \u02c6z_{t-1}, we first add t steps of Gaussian noise to the queue s_0 to yield s_t in Eq. (7a). Subsequently, we replace the first K frames of \u02c6z_t with s_t in Eq. (7b) and input \u02c6z_t to the denoising network to produce the less noisy latent \u02c6z_{t-1} (Eq. (7c)). With the repeat-and-slide strategy, the model M is tasked with predicting only one new frame, while the preceding K frames are incorporated into the reverse process to ensure that the temporal attention layers depend solely on information derived from the provided image.

DDPM-based Inversion. Though the DM sampling process starting with randomly sampled Gaussian noise produces matching semantics, the generated video is often temporally inconsistent (Fig. 4, row 2). To provide initial noise that can produce more temporally consistent results, we introduce an inversion strategy based on the DDPM [20] forward process when generating each new frame latent.
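This inversion-based initialization (Algorithm 1, lines 5\u20137) amounts to diffusing the queue for T full steps and seeding the new frame from the noised last queue entry; a NumPy sketch under the same illustrative naming:

```python
import numpy as np

def ddpm_inversion_init(s0, abar_T, rng):
    """Build z_hat_T from the clean queue s0 (K frames) instead of pure noise.
    abar_T is alpha_bar at the final step T; with abar_T near 0 this approaches
    N(0, I) while staying correlated with the conditioning frames."""
    def diffuse(x):  # Eq. (3) at t = T
        return np.sqrt(abar_T) * x + np.sqrt(1.0 - abar_T) * rng.standard_normal(x.shape)
    sT = diffuse(s0)            # noised queue -> frames 0..K-1 of z_hat_T
    zK_T = diffuse(s0[-1:])     # frame K seeded from frame K-1, the closest one
    return np.concatenate([sT, zK_T])

rng = np.random.default_rng(0)
z_hat_T = ddpm_inversion_init(np.zeros((4, 2, 2)), abar_T=4e-5, rng=rng)
```

The result has K + 1 frames, matching the shape expected by the reverse process, while the last frame inherits its initialization from the most recent conditioning latent.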
Specifically, at the beginning of each DM sampling process to synthesize the new frame latent \u02c6 zK 0 , instead of starting with the \u02c6 zT randomly sampled from N(0, I), we add T full steps of Gaussian noise to s0 to obtain sT using Eq. (3). Note that \u02c6 z has K + 1 frames, while s has K frames. We then use sT to initialize the first K frames of \u02c6 zT . We copy the last frame sK\u22121 T of sT to initialize the final frame \u02c6 zK T , as the (K \u22121)th frame is the closest to the Kth frame. Resampling. Similar to [24, 33], we further apply a resampling technique, which was initially designed for the image inpainting task, to the video DM to enhance motion coherence. Particularly, after performing a one-step denoising operation in the reversed process, we add one-step noise again to revert the latent. This procedure is repeated mulInversion DDIM Resample FVD\u2193 sFVD\u2193 tFVD\u2193 \u2717 10 0 1656.37 2074.77\u00b1411.74 1798.05\u00b1235.34 \u2713 10 0 339.89 443.97\u00b1139.10 405.22\u00b161.58 \u2713 50 0 463.55 581.32\u00b1234.09 535.06\u00b185.27 \u2713 10 2 207.62 299.14\u00b187.24 278.73\u00b147.84 \u2713 10 4 180.09 267.17\u00b174.72 252.77\u00b139.02 Table 1. Quantitative ablation study comparing different sampling strategies for proposed TI2V-Zero on the MUG dataset. Inversion, DDIM, and Resample denote the application of DDPM-based inversion, the steps using DDIM sampling, and the iteration number using resampling, respectively. Distributions for Comparison FVD\u2193 tFVD\u2193 TI2V-Zero-Fake vs. ModelScopeT2V 366.41 921.31\u00b1251.85 TI2V-Zero-Real vs. Real Videos 477.19 1306.75\u00b1271.82 ModelScopeT2V vs. Real Videos 985.82 2264.08\u00b1501.28 TI2V-Zero-Fake vs. Real Videos 937.11 2177.70\u00b1436.71 Table 2. Result analysis of TI2V-Zero starting from the real (i.e., TI2V-Zero-Real) or synthesized frames (i.e., TI2V-Zero-Fake) on the UCF101 dataset. 
tiple times for each diffusion step, ensuring harmonization between the predicted and conditioning frame latents (see Algorithm 1 for details). 4. Experiments 4.1. Datasets and Metrics We conduct comprehensive experiments on three datasets. More details about datasets, such as selected subjects and text prompts, can be found in our Supplementary Materials. MUG facial expression dataset [1] contains 1,009 videos of 52 subjects performing 7 different expressions. We include this dataset to evaluate the performance of models in scenarios with small motion and a simple, unchanged background. To simplify the experiments, we randomly select 5 male and 5 female subjects, and 4 expressions. We use the text prompt templates like \u201cA woman with the expression of slight {label} on her face.\u201d to change the expression class label to be text input. Since the expressions shown in the videos of MUG are often not obvious, we add \u201cslight\u201d in the text input to avoid large motion. UCF101 action recognition dataset [56] contains 13,320 videos from 101 human action classes. We include this dataset to measure performance under complicated motion and complex, changing backgrounds. To simplify the experiments, we select 10 action classes and the first 10 subjects within each class. We use text prompt templates such as \u201cA person is performing {label}.\u201d to change the class label to text input. In addition to the above two datasets, we create an OPEN dataset to assess the model\u2019s performance in opendomain TI2V generation. We first utilize ChatGPT [43] to generate 10 text prompts. Subsequently, we employ Stable \f\u201cA woman with the expression of slight anger on her face.\u201d (MUG) Ground Truth TI2V-Zero w/o Resample (Ours) DynamiCrafter \u201cA person is kayaking.\u201d (UCF101) \u201cA romantic gondola ride through the canals of Venice at sunset.\u201d (OPEN) TI2V-Zero w/ Resample (Ours) Figure 5. 
Qualitative comparison among different methods on multiple datasets for TI2V generation. Columns in each block display the 1st, 6th, 11th, and 16th frames of the output videos, respectively. Each video has 16 frames at a resolution of 256 × 256. The given image x_0 is highlighted with a red box, and the text prompt y is shown under each block.

Model                           MUG FVD↓   MUG sFVD↓        MUG tFVD↓        UCF101 FVD↓   UCF101 tFVD↓
DynamiCrafter [67]              1094.72    1359.86±257.73   1223.89±105.94   589.59        1540.02±199.59
TI2V-Zero w/o Resample (Ours)   339.89     443.97±139.10    405.22±61.58     493.19        1319.77±283.87
TI2V-Zero w/ Resample (Ours)    180.09     267.17±74.72     252.77±39.02     477.19        1306.75±271.82

Table 3. Quantitative comparison among different methods on multiple datasets for TI2V generation.

Diffusion 1.5 [47] to synthesize 100 images from each text prompt, generating a total of 1,000 starting images and 10 text prompts for evaluating TI2V models.

Data Preprocessing. We resize all videos/images to 256 × 256 resolution. For UCF101, since most video frames are not square, we crop the central part of each frame. To obtain ground-truth videos for computing metrics, we uniformly sample 16 frames from each video in the datasets to generate video clips of fixed length.

Metrics. Following prior work [21, 22, 25], we assess the visual quality, temporal coherence, and sample diversity of the generated videos using the Fréchet Video Distance (FVD) [57]. Similar to the Fréchet Inception Distance (FID) [18] used for image quality evaluation, FVD utilizes an I3D video classification network [6] pretrained on the Kinetics-400 dataset [26] to extract feature representations of real and synthesized videos. It then calculates the Fréchet distance between the distributions of the real and synthesized video features.
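The FVD computation described above reduces to the Fréchet distance between two Gaussians fitted to feature sets. As a minimal illustration (this is not the authors' evaluation code), assuming per-video features have already been extracted, e.g., by a pretrained I3D network, the distance can be sketched in numpy as:

```python
import numpy as np

def _sqrtm_psd(mat):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two (N, D) feature sets."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Tr((cov_r cov_f)^(1/2)) computed through the symmetric PSD product
    # cov_r^(1/2) cov_f cov_r^(1/2), which has the same trace of square root.
    sqrt_r = _sqrtm_psd(cov_r)
    cross = _sqrtm_psd(sqrt_r @ cov_f @ sqrt_r)
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_f)
                 - 2.0 * np.trace(cross))
```

tFVD and sFVD then follow by restricting the two feature sets to videos sharing the same text prompt y or the same subject image x_0, computing the distance per group, and reporting the mean and variance over groups.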
To measure how well a generated video aligns with the text prompt y (condition accuracy) and the given image x_0 (subject relevance), following [39], we design two variants of FVD, namely text-conditioned FVD (tFVD) and subject-conditioned FVD (sFVD). tFVD and sFVD compare the distance between real and synthesized video feature distributions under the same text y or the same subject image x_0, respectively. We first compute tFVD and sFVD for each condition y and image x_0, then report their mean and variance as the final results. In our experiments, we generate 1,000 videos for each model to estimate the feature distributions. We compute both tFVD and sFVD on the MUG dataset; for UCF101, we only consider tFVD, since it does not contain videos of different actions for the same subject. For the OPEN dataset, we only present qualitative results due to the lack of ground-truth videos. Unless otherwise specified, all generated videos have 16 frames (i.e., M = 15) at a resolution of 256 × 256.

4.2. Implementation Details

Model Implementation. We take ModelScopeT2V 1.4.2 [60] as our basis and implement our modifications on top of it. For text-conditioned generation, we employ classifier-free guidance with g = 9.0 in Eq. (5). Based on our preliminary experiments, we choose 10-step DDIM and 4-iteration resampling as the default setting for MUG and OPEN, and 50-step DDIM and 2-iteration resampling for UCF101.

Implementation of SOTA Model. We compare our TI2V-Zero with a state-of-the-art (SOTA) model, DynamiCrafter, a recent open-domain TI2V framework [67]. DynamiCrafter is based on VideoCrafter1 [16], a large-scale pretrained T2V foundation model. It introduces a learnable projection network to enable image-conditioned generation and then fine-tunes the entire framework. We run DynamiCrafter using the code provided by its authors with their default settings. For a fair comparison, all the generated videos are
Figure 6. Example of long video generation using our TI2V-Zero on the OPEN dataset, with the text prompt "A mesmerizing display of the northern lights in the Arctic." The given image x_0 is highlighted with a red box, and the text prompt y is shown under the set of frames. There are 128 video frames in total (M = 127), and the synthesized results are presented for every 14 frames.

centrally-cropped and resized to 256 × 256.

4.3. Result Analysis

Ablation Study. We conduct an ablation study of different sampling strategies on MUG. As shown in Tab. 1 and Fig. 4, compared with generating from randomly sampled Gaussian noise, initializing the input noise with DDPM inversion is essential for generating temporally continuous videos, improving all of the metrics dramatically. For MUG, increasing the number of DDIM sampling steps from 10 to 50 does not enhance video quality but requires more inference time. Thus, we choose 10-step DDIM as the default setting on MUG. As shown in Fig. 4 and Tab. 1, adding resampling helps preserve identity details (e.g., hairstyle and facial appearance), resulting in lower FVD scores. Increasing the number of resampling iterations from 2 to 4 further improves the FVD scores.

Effect of Real/Synthesized Starting Frames. We also explore the effect of starting video generation from real or synthesized frames on UCF101. We first use the first frame of each real video to generate videos with our TI2V-Zero, termed TI2V-Zero-Real. Additionally, we utilize the backbone model ModelScopeT2V [60] to generate synthetic videos from the text inputs of UCF101. We then employ TI2V-Zero to create videos from the first frame of these generated fake videos, denoted TI2V-Zero-Fake. As shown in Tab. 2, [TI2V-Zero-Fake vs. ModelScopeT2V] achieves better FVD scores than [TI2V-Zero-Real vs. Real Videos].
The reason may be that frames generated by ModelScopeT2V can be considered in-distribution data, since TI2V-Zero is built upon that model. We also compare the output video distributions of TI2V-Zero-Fake and ModelScopeT2V against real videos in Tab. 2. Although starting from the same synthesized frames, TI2V-Zero-Fake generates more realistic videos than the backbone model.

Comparison with SOTA Model. We compare our proposed TI2V-Zero with DynamiCrafter [67] in Tab. 3 and Fig. 5. From Fig. 5, one can see that DynamiCrafter struggles to preserve details from the given image, and the motion in its generated videos is also less diverse. Note that DynamiCrafter requires additional fine-tuning to enable TI2V generation. In contrast, without any fine-tuning or external modules, our proposed TI2V-Zero can start precisely from the given image and produce more visually pleasing results, thus achieving much better FVD scores on both the MUG and UCF101 datasets in Tab. 3. The comparison between our TI2V-Zero models with and without resampling in Fig. 5 and Tab. 3 also demonstrates the effectiveness of resampling, which helps maintain identity and background details.

Extension to Other Applications. TI2V-Zero can also be extended to other tasks, as long as we can construct s_0 from K images at the beginning. These images can be obtained either from ground-truth videos or by applying the repeating operation. We can then slide s_0 when generating the subsequent frames. We have applied TI2V-Zero to video infilling (see the last row of Fig. 3), video prediction (see Supplementary Materials), and long video generation (see Fig. 6). As shown in Fig. 6, when generating a 128-frame video on the OPEN dataset, our method preserves the mountain shape in the background even at the 71st frame (frame x̂_70). Generated video examples and additional experimental results are in our Supplementary Materials.

5.
Conclusion

In this paper, we propose TI2V-Zero, a zero-shot text-conditioned image-to-video framework that generates videos by modulating the sampling process of a pretrained video diffusion model without any optimization or fine-tuning. Comprehensive experiments show that TI2V-Zero achieves promising performance on multiple datasets. While showing impressive potential, our proposed TI2V-Zero still has some limitations. First, as TI2V-Zero relies on a pretrained T2V diffusion model, its generation quality is constrained by the capabilities and limitations of the pretrained T2V model. We plan to extend our method to more powerful video diffusion foundation models in the future. Second, our method sometimes generates videos that are blurry or contain flickering artifacts. One possible solution is to apply post-processing methods, such as blind video deflickering [30] or image/video deblurring [51], to enhance the quality of the final output videos or of the newly synthesized frame in each generation step. Finally, compared with GANs and standard video diffusion models, our approach is considerably slower because it requires running the entire diffusion process to generate each frame. We will investigate faster sampling methods [29, 32] to reduce generation time.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.06674v2", |
| "title": "VoiceShop: A Unified Speech-to-Speech Framework for Identity-Preserving Zero-Shot Voice Editing", |
| "abstract": "We present VoiceShop, a novel speech-to-speech framework that can modify\nmultiple attributes of speech, such as age, gender, accent, and speech style,\nin a single forward pass while preserving the input speaker's timbre. Previous\nworks have been constrained to specialized models that can only edit these\nattributes individually and suffer from the following pitfalls: the magnitude\nof the conversion effect is weak, there is no zero-shot capability for\nout-of-distribution speakers, or the synthesized outputs exhibit undesirable\ntimbre leakage. Our work proposes solutions for each of these issues in a\nsimple modular framework based on a conditional diffusion backbone model with\noptional normalizing flow-based and sequence-to-sequence speaker\nattribute-editing modules, whose components can be combined or removed during\ninference to meet a wide array of tasks without additional model finetuning.\nAudio samples are available at \\url{https://voiceshopai.github.io}.", |
| "authors": "Philip Anastassiou, Zhenyu Tang, Kainan Peng, Dongya Jia, Jiaxin Li, Ming Tu, Yuping Wang, Yuxuan Wang, Mingbo Ma", |
| "published": "2024-04-10", |
| "updated": "2024-04-11", |
| "primary_cat": "cs.SD", |
| "cats": [ |
| "cs.SD", |
| "cs.AI", |
| "eess.AS" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "VoiceShop: A Unified Speech-to-Speech Framework for Identity-Preserving Zero-Shot Voice Editing", |
| "main_content": "Introduction

Research efforts in deep generative modeling have historically been restricted to training specialized models for each task within a given domain, limiting their versatility and requiring expertise in the idiosyncrasies of each sub-specialty [Baevski et al., 2022, Kaiser et al., 2017, Jaegle et al., 2021]. Aided by the steady progress of large-scale pre-training, expanding datasets, and self-supervised learning paradigms, attention has turned to developing task-agnostic "foundation models" built on learning universal representations of data, often combining modalities [Li et al., 2023]. The primary advantage of such models is the ability to perform several tasks within a single, unified framework, usually enabled by finetuning on a collection of downstream tasks or by incorporating multitask learning objectives. In the speech, audio, and music domains, this trend has materialized in a series of recent works, such as VALL-E [Wang et al., 2023a], Voicebox [Le et al., 2023], UniAudio [Yang et al., 2023a], VioLA [Wang et al., 2023b], AudioLDM2 [Liu et al., 2023b], SpeechT5 [Ao et al., 2022], WavLM [Chen et al., 2022], and MuLan [Huang et al., 2022], all of which aim to support multiple audio-related use cases simultaneously, signaling a paradigm shift in the study and use of deep learning systems at large. Even as the quality of proposed speech synthesis models improves following advances in diffusion [Kong et al., 2020b] and neural codec modeling [Défossez et al., 2022, Zeghidour et al., 2021], disentangled representation learning, whereby attributes such as a speaker's timbre, prosody, age, gender, and accent are extracted and separated from speech signals, remains an open problem [Peyser et al., 2022, Polyak et al., 2021, Wang et al., 2023c].
Disentangling these characteristics is an essential component of any generative model that aims to allow users to flexibly modify certain attributes of their speech while keeping others constant. Due to the non-trivial nature of this task, several works focus on editing a single attribute at a time and frequently require that synthesized speech map to a set of in-domain target speakers seen during training, leaving the development of systems capable of editing multiple attributes at once under-explored. The prevailing limitation of existing models is therefore that fine-grained control of generated speech in a user's own voice remains limited. To this end, we present VoiceShop, a novel speech-to-speech foundation model capable of a wide assortment of zero-shot synthesis tasks. Within a unified framework, VoiceShop is capable of monolingual and cross-lingual voice conversion, identity-preserving many-to-many accent and speech style conversion, and age and gender editing, where all tasks may be performed in a zero-shot setting on arbitrary out-of-domain speakers not seen during training. Furthermore, our framework enables users to perform any combination of these tasks simultaneously in a single forward pass while preserving the original speaker's unedited attributes, i.e., one may edit the age, gender, accent, and speech style of input speech at once. Thus, VoiceShop's capabilities go beyond the scope of traditional voice conversion (VC), which aims to synthesize speech that retains the linguistic information of an utterance by a source speaker while applying the timbre of a target speaker. Instead, our work addresses voice editing (VE), which we define as the modification of disentangled speech attributes or the creation of new voices based solely on the source speaker, without requiring a target speaker.

(*Equal contribution. Preprint, arXiv:2404.06674v2 [cs.SD], 11 Apr 2024.)
This behavior is achieved through the use of a diffusion backbone model, which accepts global speaker embeddings and time-varying content features as conditioning signals, enabling robust zero-shot VC. We additionally train two separate task-specific editing modules: a continuous normalizing flow model [Rezende and Mohamed, 2016] that operates on the global speaker embedding to achieve age and gender editing, and a sequence-to-sequence model [Sutskever et al., 2014] that operates on the local content features to achieve accent and speech style conversion. Together, these components make up VoiceShop's modular framework and enable its flexible VE capabilities without finetuning. Our main contributions are as follows:

• Multi-attribute speech editing via a unified, scalable framework: VoiceShop supports both conventional VC and VE in a single speech-to-speech framework. Editing modules are trained separately from the backbone synthesis module and may be used in a modular plug-and-play fashion, making them scalable to novel editing tasks and datasets without additional finetuning.

• Zero-shot generalization: Our model achieves strong zero-shot VC and VE performance on arbitrary unseen speakers, matching or outperforming existing state-of-the-art (SOTA) baselines specialized for different sub-tasks.

• Disentangled attribute control: We achieve strong disentanglement of multiple speaker attributes, which in the VE case allows simultaneous identity-preserving transformations of users' voices without modifying their timbre.

2 Related Work

Zero-Shot Voice Conversion. While fully supervised VC is considered a canonical problem in the speech domain, AutoVC [Qian et al., 2019] is among the earliest works to successfully extend VC to the zero-shot setting, demonstrating that promising results could be achieved using a simple autoencoder architecture and a carefully designed information bottleneck.
Since this work, a variety of methods have been proposed to improve the generalization abilities of zero-shot any-to-any VC systems, such as end-to-end normalizing flow models that maximize the variational lower bound of the marginal log-likelihood of the data [Casanova et al., 2022, Kim et al., 2021, Li et al., 2022a, Pankov et al., 2023], unsupervised mutual-information-based strategies for speaker-content disentanglement [Wang et al., 2021], and diffusion-based approaches [Choi et al., 2023, Kim et al., 2023, Popov et al., 2021]. While many of these proposals achieve high-quality outputs, the cross-lingual case [Casanova et al., 2022], whereby source and target utterances are spoken in different languages, is less explored. In this work, we show that it is possible to achieve strong zero-shot VC performance using a relatively simple diffusion modeling approach while supporting both monolingual and cross-lingual conversion.

Accent and Speech Style Conversion. Style transfer refers to the task of disentangling the stylistic and semantic information of samples from a given domain in order to synthesize new samples that

Figure 1: The architecture of VoiceShop. The overall pipeline follows an analysis-synthesis approach. During the analysis step, the ECAPA-TDNN speaker encoder and a pre-trained ASR module respectively decompose input speech into speaker identity, represented by a global speaker embedding (SE), and local content, represented by a sequence of bottleneck (BN) features. During the synthesis step, both the SE and BN features condition the diffusion backbone model to reconstruct mel-spectrograms of the input speech, followed by a vocoder to obtain time-domain waveforms. Voice editing: Editing is an optional step between the analysis and synthesis steps, reserved for inference.
As highlighted in the two dashed boxes, an attribute-conditional flow module is used to globally edit the speaker embedding (e.g., to change age and gender), whereas a BN2BN module is used to edit the content embeddings (e.g., to change prosody or speech style). These voice editing modules are trained separately from the generative backbone module and are used in a modular plug-and-play manner.

impose the style of a reference sample while retaining the content of another [Gatys et al., 2015]. Accent conversion (AC) or speech style conversion refines this definition to focus on converting the accent or "speech style" of an utterance, i.e., modifying a speaker's pronunciation or prosody while preserving the spoken content and timbre. Several works have been proposed to achieve accent or speech style conversion, but they are often limited by some combination of the following factors. The models may lack zero-shot capabilities, requiring that input timbres be mapped to a set of in-domain target timbres seen during training, thereby altering the identity of the source speaker [Zhang et al., 2022b,a], or they may rely on adversarial training strategies, such as domain adaptation via gradient reversal introduced by Ganin et al. [2016], or on empirically designed data augmentation methods to encourage disentanglement [Badlani et al., 2023, Chan et al., 2022, Choi et al., 2022, Jia et al., 2023, Li et al., 2022b,c, Zhang et al., 2023a].
Many frameworks are designed for text-to-speech (TTS) but do not address the speech-to-speech (STS) case [Guan et al., 2023, Tinchev et al., 2023, Zhou et al., 2023], require ground-truth alignments between input and output features acquired through third-party forced-alignment tools [Ezzerg et al., 2022, Karlapati et al., 2022], lack explicit control of specific attributes [Wang et al., 2018], or do not support many-to-many conversion, i.e., one must train a separate model for each desired target accent rather than encompassing all conversion paths in a single model [Zhao et al., 2018, 2019]. To our knowledge, no works have been proposed for cross-lingual AC, whereby accents extracted from speech in one language are transferred to speech in another language.

Age and Gender Editing. Within the context of VE, NANSY++ [Choi et al., 2022] is the most relevant work to support age and gender editing in a zero-shot manner, achieved through a unified framework for synthesizing and manipulating speech signals from analysis features. In their voice-designing pipeline NANSY-VOD, the authors employ three normalizing flow networks in a cascaded manner, where the outputs of earlier models are passed as conditions to subsequent models, to predict F0 statistics, a global timbre embedding, and a fixed number of timbre tokens used to edit the age and gender of the output speech. We show that VoiceShop achieves age and gender editing capabilities using a simpler design consisting of a single normalizing flow model that allows simultaneous attribute editing.

3 VoiceShop

3.1 Method Overview

The architecture of VoiceShop is depicted in Figure 1. The core modules are trained separately and then frozen in subsequent stages, as follows:

1. Train an automatic speech or phoneme recognition (ASR/APR) model to extract intermediate feature maps as time-varying content representations of speech.

2.
Jointly train a speaker encoder and a diffusion backbone model (optionally including a neural vocoder), conditioned on the time-varying content features and the global utterance-level embeddings produced by the speaker encoder, to predict mel-spectrograms of speech.

3. Train individual attribute-editing modules to be used during inference for individual or combined multi-attribute editing of the source speaker's voice.

The first two stages consist of large-scale pre-training of the diffusion backbone model to achieve robust zero-shot VC ability. The third stage focuses on separately preparing lightweight modules that respectively operate on the diffusion model's two conditioning signals to modify one or more speech attributes, such as age, gender, accent, or speaking style, during inference. By tackling VE in this modular way, we remove the need for transfer learning via finetuning on downstream tasks later on.

3.2 Large-Scale Pre-Training

We describe the technical details of the three pre-trained models vital to our proposed framework: the conformer-based ASR model, the conditional diffusion backbone model, and the vocoder. During the large-scale pre-training stage of these models, it is crucial to train on large amounts of diverse speech data from many speakers in various recording conditions. Doing so ensures that the models learn sufficiently generalized distributions, which is necessary for zero-shot inference, while indirectly improving the performance of the attribute-editing modules, whose ground-truth targets are extracted from these pre-trained models. To fully utilize all available data, which often lacks textual transcriptions, we adopt fully self-supervised training schemes for all the aforementioned models, with the exception of the ASR model. We provide their details in the following sections.
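The three-stage recipe above implies a simple modular inference path: analysis (content features plus speaker embedding), optional editing of either conditioning signal, then synthesis. A schematic sketch, where every callable name is a hypothetical stand-in for the corresponding trained module:

```python
def voiceshop_infer(wave, asr_content, speaker_enc, diffusion, vocoder,
                    edit_speaker=None, edit_content=None):
    """Analysis -> optional editing -> synthesis, with modules passed as callables."""
    content = asr_content(wave)        # stage 1: time-varying content (BN) features
    spk = speaker_enc(wave)            # stage 2: global utterance-level embedding
    if edit_speaker is not None:       # stage 3 plug-in (e.g., an age/gender flow)
        spk = edit_speaker(spk)
    if edit_content is not None:       # stage 3 plug-in (e.g., a BN2BN accent module)
        content = edit_content(content)
    mel = diffusion(content, spk)      # conditional diffusion backbone
    return vocoder(mel)                # neural vocoder produces the waveform
```

Because the editing modules only transform the two conditioning signals, they can be combined, swapped, or omitted at inference without retraining the backbone.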
3.2.1 Conformer-based ASR Model

In principle, any parametric model that produces time-varying content representations, whether extracted from models optimized using traditional ASR criteria [Baevski et al., 2020, Chan et al., 2021, Majumdar et al., 2021] or from more recent universal speech frameworks based on vector quantization methods that enable language modeling objectives [Wang et al., 2023b, Yang et al., 2023a,c, Zhang et al., 2023c], may be used in our framework. To enable cross-lingual synthesis capabilities, we train our own monolingual and bilingual conformer-based ASR models [Gulati et al., 2020] from scratch, referred to as ASR-EN and ASR-EN-CN respectively, such that the former transcribes only English speech and the latter transcribes both English and Mandarin speech, based on the open-source ESPnet¹ [Watanabe et al., 2018] library. For implementation details, please refer to §A.1.

3.2.2 Conditional Diffusion Backbone Model for Zero-Shot Voice Conversion

Denoising diffusion probabilistic models (DDPMs) [Ho et al., 2020] are a class of latent variable models that employ two Markovian chains, referred to as the "forward" and "reverse" processes. In the forward process q(x_{1:T} | x_0), data samples x_0 ∼ q(x_0) are iteratively injected with Gaussian noise for time steps t ∈ [1, T] according to a deterministic variance schedule until corrupted. In the learnable reverse process p_θ(x_{0:T}), a neural network is tasked with approximating the intractable q(x_{t−1} | x_t) by predicting and removing the noise added in the forward process at each time step t until the original sample is retrieved, thereby learning a mapping between the original data distribution and a tractable prior, such as a standard isotropic Gaussian distribution [Yang et al., 2024].
¹ https://github.com/espnet/espnet

We propose a conditional diffusion model that predicts mel-spectrogram representations of speech, serving as the backbone of our unified framework. Specifically, we consider a reverse process p_θ(x_{0:T} | E_S, E_C), where x_0 ∈ R^{F×L} denotes a mel-spectrogram extracted from raw audio, such that F is the number of frequency bands and L is the duration; S(x_0) = E_S ∈ R^{D_S} denotes an utterance-level global speaker embedding (i.e., lacking temporal information) produced by the speaker encoder S; and C(x_0) = E_C ∈ R^{D_C × L/4} denotes a time-varying content embedding produced by a pre-trained ASR or APR model C. We train the speaker encoder jointly with the diffusion model from scratch, adopting the same configuration as the ECAPA-TDNN speaker verification model [Desplanques et al., 2020] to extract timbre information as a 512-dimensional vector. As with Moûsai [Schneider et al., 2023], we employ a one-dimensional U-Net [Ronneberger et al., 2015] as the architecture of the diffusion model and modify their open-source implementation based on the A-UNet toolkit². We follow their design for each U-Net block with some modifications, which are discussed in §A.2. We also adopt the velocity-based formulation of the diffusion training objective and the angular reparametrization of DDIM [Song et al., 2022] proposed by Salimans and Ho [2022], defining our forward process as:

q(x_t | x_0) = N(x_t; α_t x_0, β_t² I)   (1)

where α_t := cos(tπ/2), β_t := sin(tπ/2), x_t = α_t x_0 + β_t ε denotes the noisy data at time step t ∼ U[0, 1], and ε ∼ N(0, I).
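Under this angular parametrization, the forward process is a one-line interpolation between data and noise. A minimal numpy sketch (illustrative only, not the training code):

```python
import numpy as np

def alpha_beta(t):
    # alpha_t = cos(t*pi/2), beta_t = sin(t*pi/2); alpha^2 + beta^2 = 1 for all t.
    return np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)

def forward_diffuse(x0, t, rng):
    # Sample x_t ~ q(x_t | x_0): x_t = alpha_t * x0 + beta_t * eps, eps ~ N(0, I).
    a, b = alpha_beta(t)
    eps = rng.standard_normal(x0.shape)
    return a * x0 + b * eps, eps
```

At t = 0 the sample is the clean spectrogram; at t = 1 it is pure Gaussian noise.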
Under this setup, the model minimizes the mean squared error (MSE) between the ground-truth and predicted velocity terms, as follows:

L(θ) := E_{t∼U[0,1]} [ ‖v_t − f_θ(x_t; t, E_S, E_C)‖²₂ ]   (2)

v_t = α_t ε − β_t x_0   (3)

where θ denotes the parameters of the speaker encoder and the diffusion model. During inference, we generate samples from noise with a DDIM sampler by applying the following over uniformly sampled time steps t ∈ [0, 1] for T steps:

v̂_t = f_θ(x_t; t, E_S, E_C)   (4)
x̂_0 = α_t x_t − β_t v̂_t   (5)
ε̂_t = β_t x_t + α_t v̂_t   (6)
x̂_{t−1} = α_{t−1} x̂_0 + β_{t−1} ε̂_t   (7)

When deciding which content features to use for conditioning the diffusion backbone model, we consider the observations of Yang et al. [2023b], who find that the choice of layer from which content representations are extracted from pre-trained ASR or APR models has a measurable impact on the magnitude of the various information sources encoded in the latent sequence. Our work validates this finding: we note that the activation maps of shallower layers of such models (e.g., the 10th layer) contain far greater amounts of prosodic and accent information than those of deeper layers (e.g., the 18th layer). Intuitively, this phenomenon may be explained by a layer's proximity to the final loss calculation: deeper layers are in some sense "closer" to pure textual transcriptions of the input speech, while shallower layers still contain significant amounts of timbre and prosody leakage, which is desirable for accent and speech style conversion.
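Equations (4)-(7) amount to: predict the velocity, reconstruct x̂_0 and ε̂, then re-noise to the previous time step. A compact numpy sketch of this sampler loop, where the conditioning on (E_S, E_C) is folded into the callable for brevity (a hypothetical interface, not the released code):

```python
import numpy as np

def ddim_sample(f_theta, shape, n_steps, rng):
    """Deterministic DDIM sampling under the angular parametrization (Eqs. 4-7).

    f_theta(x_t, t) returns the predicted velocity v_hat at time t."""
    x = rng.standard_normal(shape)              # start from pure noise at t = 1
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        a, b = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)
        v_hat = f_theta(x, t)                   # Eq. (4): predict velocity
        x0_hat = a * x - b * v_hat              # Eq. (5): reconstruct clean sample
        eps_hat = b * x + a * v_hat             # Eq. (6): reconstruct noise
        a_p, b_p = np.cos(t_prev * np.pi / 2), np.sin(t_prev * np.pi / 2)
        x = a_p * x0_hat + b_p * eps_hat        # Eq. (7): step to t_prev
    return x
```

With a perfect velocity predictor, the loop recovers the clean sample exactly; in practice f_theta is the trained U-Net conditioned on the speaker and content embeddings.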
For this reason, we train four versions of the diffusion backbone model: two accept the outputs of the 10th and 18th layers of the monolingual ASR-EN model, denoted VS-EN-L10 and VS-EN-L18, respectively, and an additional two accept those of the 10th and 18th layers of the bilingual ASR-EN-CN model, denoted VS-EN-CN-L10 and VS-EN-CN-L18, respectively. The training datasets are listed in Table 1, and all backbone model configurations are summarized in Table 2. We train all diffusion backbone models using the AdamW optimizer [Loshchilov and Hutter, 2019] on 4 A100 GPUs with a learning rate of 1 × 10⁻⁴ and a batch size of 88 samples for 250K iterations. We apply a learning rate decay of 0.85 every 20 epochs. To leverage large amounts of speech data, we train with unlabeled audio in a self-supervised manner. After convergence, the diffusion backbone model becomes capable of robust zero-shot VC.

3.2.3 Neural Mel-Spectrogram Vocoder

As the final step of our inference pipeline, we convert the mel-spectrogram predicted by the diffusion model from its time-frequency representation to a time-domain audio signal using a neural vocoder.

² https://github.com/archinetai/audio-diffusion-pytorch

Table 1: Datasets used to train the diffusion backbone models. Monolingual models use only the English data, whereas bilingual models use all listed data.

Corpus                       Language   Hours
Common Voice 13.0            English    3,209
LibriTTS                     English    585
L2-ARCTIC                    English    20
Proprietary ASR/TTS corpus   English    1,400
AISHELL-3                    Mandarin   85
Proprietary TTS corpus       Mandarin   84

Table 2: Training configurations of each diffusion backbone model. Layer numbers (10 and 18) are contained in the model names.

Backbone Model   ASR Model   Bilingual
VS-EN-L10        ASR-EN      ✗
VS-EN-L18        ASR-EN      ✗
VS-EN-CN-L10     ASR-EN-CN   ✓
VS-EN-CN-L18     ASR-EN-CN   ✓

Like the ASR model, while any SOTA vocoder could be used for this task, we choose to train our own model based on HiFi-GAN [Kong et al., 2020a] from scratch for improved robustness and uncompromised audio quality when scaling up. For implementation details, please refer to §A.3.

3.3 Task-Specific Voice Editing Modules

Rather than finetuning the backbone model obtained in §3.2.2, we develop individual voice-attribute-editing modules as flexible, modular plug-ins to the generative pipeline.

3.3.1 Attribute-Conditional Normalizing Flow for Age and Gender Editing

We observe that the speaker encoder jointly trained with our diffusion backbone model achieves strong speech-attribute disentanglement, indicating that many attributes, such as age and gender, are encoded in the global 512-dimensional speaker embedding vector. The manipulation of these attributes can therefore be seen as re-sampling from the learned latent space of speaker embeddings. To achieve fully controllable generation and editing of specific speaker attributes while leaving other attributes unaffected, we take inspiration from StyleFlow [Abdal et al., 2021] in the image-editing domain and employ a similar attribute-conditional continuous normalizing flow (CNF) that operates on this latent space. Continuous normalizing flows use a neural ordinary differential equation (ODE) formulation [Chen et al., 2018] to model a bidirectional mapping between two distributions:

dz/dt = φ_θ(z(t), t)   (8)

z(t₁) = z(t₀) + ∫_{t₀}^{t₁} φ_θ(z(t), t) dt   (9)

where t is the (virtual) time variable, z(t) is the variable of a given distribution, and φ_θ is an arbitrary neural network, parameterized by θ, whose outputs have the same dimensionality as its inputs.
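Equation (9) is simply numerical integration of the learned vector field. A minimal fixed-step Euler sketch (the paper uses the adaptive dopri5 solver; Euler is shown here only for clarity):

```python
import numpy as np

def integrate_flow(phi, z0, t0, t1, n_steps=100):
    """Euler integration of dz/dt = phi(z, t) from t0 to t1 (Eq. 9).

    phi stands in for the flow network; any callable of (z, t) works here.
    Calling with t0 > t1 integrates in reverse, mapping data back to the prior."""
    z = np.asarray(z0, dtype=float).copy()
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        z = z + dt * phi(z, t)   # one Euler step along the ODE trajectory
        t += dt
    return z
```

Because the same vector field is integrated in either direction, the flow acts as a bijective map between the speaker-embedding distribution and the Gaussian prior.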
We denote our target distribution of speaker embeddings as z(t_1) = w ∈ R^512 in W space, and variables from a prior distribution as z(t_0) ∈ R^512 in Z space, which we conveniently set to a zero-mean multi-dimensional Gaussian distribution with identity variance, N(0, I). By applying the change-of-variables rule, we can formulate the change in the log density as:

\log p(w) = \log p(z(t_0)) - \int_{t_0}^{t_1} \mathrm{Tr}\left( \frac{\partial \phi}{\partial z(t)} \right) dt \quad (10)

The training objective of the CNF is to maximize the likelihood of the data w. In our attribute-conditional scenario, given the speaker attribute vector a (e.g., for age and gender, a ∈ R^2) associated with sample w, we update the ODE network to be \phi_\theta(z(t), a, t), which is conditioned on both t and a. Consequently, our new objective becomes \max_\theta \sum_{w,a} \log p(w \mid a, t). During training, we also employ trajectory polynomial regularization [Huang and Yeh, 2021], which we find stabilizes training. Note that there is no attribute editing during the training process. A well-trained CNF model behaves as a bijective mapping between the data distribution and the prior distribution with the help of an ODE solver. In our experiments, we use the dopri5 ODE solver [Hairer et al., 2008] and adopt the same CNF implementation as StyleFlow [3]. We use only one CNF block with hidden dimension 512 rather than stacking multiple blocks, which leads to lower validation loss. Our lightweight CNF model has 0.5M learnable parameters and is trained using Adam [Kingma and Ba, 2014] on a single V100 GPU for 24 hours. The attribute editing procedure at inference time is depicted in Figure 2.

[3] https://github.com/RameenAbdal/StyleFlow

Figure 2: Attribute-conditional flow editing module: Starting with a speaker's voice sample, we use a pre-trained attribute predictor to obtain age and gender labels, which we denote as the original attribute vector a, and use the speaker encoder jointly trained with the diffusion backbone to extract the speaker embedding w. The three steps of editing during inference proceed as follows: 1. An ODE solver utilizes the pre-trained CNF model, conditioned on a and t, to reverse-integrate w from t_1 to t_0 into z_0, the encoded latent in the prior space. 2. Modify any or all attributes of the original speaker to obtain the new attribute vector a′. 3. Use the ODE solver again for forward integration from t_0 to t_1 using the CNF model with z_0 conditioned on a′. The output is a new speaker embedding w′, which embeds the edited attributes. When using w′ with our diffusion backbone model, the generated voice should retain the unedited attributes of the original input voice (i.e., editing gender should not affect age and vice versa).

Attributes Dataset. The attribute labels for training our attribute-conditional CNF are obtained from the Common Voice 13.0 dataset [Ardila et al., 2019], which contains multiple audio recordings of more than 51K human-validated anonymous speakers, totaling 2,429 hours. Among these validated speakers, about 20K provide age (from twenties to nineties) and gender (male or female) labels. To fully utilize the dataset, we first train a naive ECAPA-TDNN-based age and gender prediction model by appending a projection layer that predicts age as a numeric value and gender as a logit to its original output layer. We combine mean absolute error (MAE) and cross-entropy (CE) losses for age and gender prediction, respectively. This prediction model serves two purposes: 1. We use it to weakly label the remainder of the Common Voice English dataset for CNF training. 2. For gender labels, we use the predicted logits instead of binary labels, which facilitates continuous gender editing, analogous to the continuous age labels.
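The three-step editing procedure above can be illustrated with a fixed-step Euler integrator in place of the adaptive dopri5 solver; `euler_integrate`, `edit_speaker_embedding`, and the toy dynamics `toy_phi` are hypothetical stand-ins for the trained conditional ODE network, not the paper's implementation:

```python
import numpy as np

def euler_integrate(phi, z, a, t_start, t_end, steps=100):
    """Fixed-step Euler integration of dz/dt = phi(z, a, t) from t_start
    to t_end. (The paper uses the adaptive dopri5 solver; Euler keeps the
    sketch simple.)"""
    t, dt = t_start, (t_end - t_start) / steps
    for _ in range(steps):
        z = z + dt * phi(z, a, t)
        t += dt
    return z

def edit_speaker_embedding(phi, w, a, a_new, t0=0.0, t1=1.0):
    """Three-step CNF editing at inference time:
    1. reverse-integrate w from t1 to t0 under the original attributes a -> z0;
    2. swap in the edited attribute vector a_new;
    3. forward-integrate z0 from t0 to t1 under a_new -> edited embedding w'."""
    z0 = euler_integrate(phi, w, a, t1, t0)          # step 1
    return euler_integrate(phi, z0, a_new, t0, t1)   # steps 2-3

# Toy linear dynamics standing in for the trained network phi_theta(z, a, t).
toy_phi = lambda z, a, t: -z
w = np.ones(512)                                     # mock speaker embedding
w_edited = edit_speaker_embedding(toy_phi, w, a=np.zeros(2), a_new=np.zeros(2))
```

With unchanged attributes, the reverse and forward passes approximately invert each other, so the round trip recovers the original embedding up to discretization error; swapping in an edited attribute vector a′ before the forward pass instead yields the edited embedding w′.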
Without extensive parameter tuning, we obtain a model with a mean absolute age error of 4.39 years and 99.1% gender accuracy on a holdout test set consisting of 10% of the labeled speakers, after training on 8 V100 GPUs for 48 hours using the AdamW optimizer [Loshchilov and Hutter, 2019] with a learning rate of 1 × 10^-3 and weight decay of 1 × 10^-6. We then use the speaker encoder to extract utterance-level speaker embeddings and use the predicted age and gender labels for these 51K speakers to train the CNF.

3.3.2 Bottleneck-to-Bottleneck (BN2BN) Modeling for Many-to-Many Accent and Speech Style Conversion

We propose a bottleneck-to-bottleneck (BN2BN) model capable of many-to-many accent and speech style conversion. Using a multi-decoder architecture based on encoder-decoder sequence-to-sequence modeling with cross-attention, it maps the time-varying "bottleneck" content features of utterances from an arbitrary number of source accents to those of an arbitrary number of target accents in a single model, effectively reducing accent conversion to a machine translation problem.

Figure 3: Bottleneck-to-bottleneck (BN2BN) modeling: Our BN2BN design maps the time-varying content features of utterances from an arbitrary number of source accents to those of an arbitrary number of target accents in a single model using a multi-decoder architecture.

Beyond typical accent conversion, the BN2BN model is also capable of generalized speech style transfer. There are no specific requirements for what constitutes a "speech style," which may be as broad as emotional speech or the speaking styles of iconic personalities from popular culture. To train in this fashion, we propose a simple method for augmenting non-parallel speech datasets into parallel multi-speaker, multi-accent "timbre-matched" datasets by leveraging TTS and VC modeling, requiring only a text corpus at minimum, which we describe in §A.4.
In our experiments, we use publicly available TTS models provided by the Microsoft Azure AI platform [4] to synthesize speech in a variety of accents in English and Mandarin for training. Our approach is rooted in the observation that these latent sequences encode not only the semantics of a speech signal (i.e., what is said), but also pronunciation and prosody (i.e., how it is said). During training, each batch X ∈ R^{B×T×D_1} consists of the local content features of utterances, such that the source accent, content, and speaker of inputs are sampled uniformly at random. The utterances are processed by a universal encoder E that learns accent-agnostic latent representations Z = E(X) ∈ R^{B×T×D_2}, which are passed as input to autoregressive domain-specific decoders D_j and post nets P_j for each supported target accent j ∈ [1, M] to reconstruct the content features in the respective target accents regardless of the source accent, i.e., P_j(D_j(Z)) = X̂_j ∈ R^{B×T×D_1}, as depicted in Figure 3. We use the 10th-layer activation maps from our ASR-EN and ASR-EN-CN models to achieve AC. We optionally train a final conformer-based predictor module, which generates the corresponding 18th-layer activation maps conditioned on the predicted 10th-layer representations, to enable combined multi-attribute conversion alongside the flow-based age and gender editing module, which operates most successfully on speaker embeddings produced by our 18th-layer-based diffusion backbone models, VS-EN-L18 and VS-EN-CN-L18. We find that this is due to the abundant timbre and prosody leakage of shallow content representations, which proves beneficial for AC but in turn limits the editing capabilities of the speaker embedding. In the absence of such information when training on sparse content features, the speaker embedding of our backbone model necessarily assumes more control over various speech attributes, which is preferable for age and gender editing.
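The data flow above, a shared encoder feeding M accent-specific decoder/post-net branches, can be sketched with toy linear maps standing in for the real conformer and LSTM modules (all dimensions and names here are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear(d_in, d_out):
    """Toy linear layer standing in for a conformer/LSTM module."""
    W = rng.standard_normal((d_in, d_out)) * 0.01
    return lambda x: x @ W

B, T, D1, D2, M = 2, 50, 768, 256, 3   # batch, frames, content dim, latent dim, target accents
E = make_linear(D1, D2)                 # universal encoder
decoders = [make_linear(D2, D2) for _ in range(M)]
post_nets = [make_linear(D2, D1) for _ in range(M)]

X = rng.standard_normal((B, T, D1))     # content features of source utterances
Z = E(X)                                # accent-agnostic latents, shape (B, T, D2)
# Each target-accent branch reconstructs the content features in its own accent:
X_hat = [P(D(Z)) for D, P in zip(decoders, post_nets)]
```

The key property mirrored here is that a single Z feeds every branch, so the encoder is shared across all source accents while each decoder/post-net pair specializes to one target accent.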
We adopt a conformer encoder and LSTM decoders with the same configuration proposed in Tacotron 2 [Shen et al., 2018], using additive energy-based Dynamic Convolution Attention (DCA) [Battenberg et al., 2020] as the cross-attention mechanism between the encoder and decoders. We jointly train a "stop" gate projection layer for each decoder using binary CE; it produces a scalar between 0 and 1 indicating when to stop generation (L_{Gate}). We apply three MAE reconstruction loss terms to the outputs of the LSTM decoders (L^{Pre}_{L10}), post nets (L^{Post}_{L10}), and universal predictor (L_{L18}). The final loss is formulated as:

L_{BN2BN} = L^{Pre}_{L10} + L^{Post}_{L10} + L_{L18} + L_{Gate} \quad (11)

We train each BN2BN model using the AdamW optimizer [Loshchilov and Hutter, 2019] on 8 A100 GPUs with a batch size of 32 samples and an initial learning rate of 1 × 10^-4, to which we apply a decay rate of 0.85 every 100 epochs. The rate of convergence typically depends on the size of the dataset and varies between approximately 75 and 200 epochs.

[4] https://azure.microsoft.com/en-us/products/ai-services/text-to-speech

4 Experiments and Analysis

We demonstrate the versatile capabilities of VoiceShop on various synthesis-related tasks, specifying each model configuration used [5]. For all generated samples, we use 5 time steps in the reverse diffusion process and replace Equation 6 with ϵ_t ∼ N(0, I) in our DDIM sampler, which we empirically find causes no perceptual difference in output quality.

Evaluation Metrics. We evaluate the performance of each synthesis task using a variety of subjective and objective metrics, defined as follows. For subjective evaluation, we conduct Mean Opinion Score (MOS) and Comparative Mean Opinion Score (CMOS) studies with anonymous human participants, who are tasked with judging the performance of VoiceShop on pre-defined metrics such as perceived speaker similarity, conversion strength, and naturalness.
For MOS studies, participants rate individual samples on a scale from 1 to 5, where 1 indicates poor performance and 5 indicates strong performance. CMOS studies instead ask participants to consider pairs of samples labeled A and B and rate their preference on a scale from -3 to 3, where -3 indicates a strong preference for sample A, 3 indicates a strong preference for sample B, and 0 indicates no preference. To objectively evaluate the speaker similarity between ground truth input and synthesized output pairs at scale, we use the Automatic Speaker Verification (ASV) metric, whereby we compute the cosine similarity of fixed-length embeddings extracted from a pre-trained speaker verification model. The ASV metric therefore falls within the range of -1 to 1, where greater values indicate higher similarity within the learned latent space of the speaker verification model, which is assumed to effectively discriminate timbre. We follow VALL-E [Wang et al., 2023a] and Voicebox [Le et al., 2023] and adopt WavLM-TDNN [6] [Chen et al., 2022] to extract speaker embeddings for this purpose. Additional task-specific metrics are defined in subsequent sections as needed. For tasks where no obvious third-party baselines exist for direct comparison, such as style conversion and cross-lingual AC, we report the scores of intentionally incorrect pairs (i.e., for ASV, we compute the similarity of non-matching speaker pairs). Placed alongside the scores of ground truth inputs, these establish upper and lower bounds against which we can more meaningfully interpret the values produced by VoiceShop's outputs. For all applicable metrics, we summarize results with 95% confidence intervals.

4.1 Zero-Shot Voice Conversion

We demonstrate VoiceShop's performance on monolingual and cross-lingual zero-shot VC, which refers to the task of modifying the timbre of a spoken utterance while preserving its content for arbitrary speakers not seen during training.
In the monolingual case, both the source content and target timbre are spoken in the same language, whereas in the cross-lingual case, they come from different languages (e.g., applying the timbre of a Mandarin utterance to English content). Since the global speaker embedding used in our diffusion backbone model collapses temporal information, we additionally find that conversion can be achieved for out-of-domain languages not seen by the diffusion backbone model during training, enabling anyone to speak fluent English and Mandarin in their own voice regardless of their native language.

[5] Audio samples are available at https://voiceshopai.github.io.
[6] https://github.com/microsoft/UniSpeech/tree/main/downstreams/speaker_verification

Subjective MOS Evaluation. We evaluate our framework on zero-shot VC against YourTTS [Casanova et al., 2022] and DiffVC [Popov et al., 2021], two recent works regarded as SOTA for this task. Following the experimental setups of these works, we use the VCTK corpus [Veaux et al., 2016] as the test set. We identify 8 VCTK speakers (4 male and 4 female) that are held out by both works, while our model is not trained on any VCTK data. We select one speech sample for each speaker as their reference; each VC model then converts the voice of each speaker to the voices of all other speakers, yielding 56 samples per model. As ground truth for all test subsets, we randomly select 7 additional recordings for each of the test speakers, creating another 56 samples. In preparing test samples, we directly curated samples from YourTTS's official demo page [7] and generated the same set of samples using DiffVC's official implementation [8] with their best-performing Diff-LibriTTS-wodyn checkpoint for a stronger baseline. For this study, we use our VS-EN-L18 backbone model to perform inference, as the evaluation is restricted to English speech.
All test samples are downsampled to 16 kHz to match the test conditions of YourTTS, and then normalized for loudness.

Table 3: Results of subjective MOS evaluation of zero-shot VC for speaker similarity (sMOS) and naturalness (nMOS).

Model             sMOS (↑)    nMOS (↑)
YourTTS           2.89±0.07   3.04±0.07
DiffVC            3.14±0.07   3.42±0.07
VoiceShop (Ours)  3.76±0.07   3.56±0.06
Ground Truth      4.24±0.06   4.24±0.05

We conduct a MOS study with third-party vendors, recruiting 20 native English speakers (12 male, 8 female; mean age 32.0 years with a standard deviation of 6.5 years), in which each voice-converted or ground truth sample is presented together with the reference sample. We ask participants to rate the sample's speech naturalness independently, then rate the sample's speaker similarity with respect to the reference sample, disregarding its naturalness. We adopt a five-point Likert scale for both questions. The MOS study results are summarized in Table 3. In both dimensions, our VoiceShop model outperforms the baseline models by clear margins.

4.2 Zero-Shot Identity-Preserving Many-to-Many Accent Conversion

We demonstrate VoiceShop's capabilities on monolingual and cross-lingual identity-preserving many-to-many AC in English and Mandarin, which refers to the task of modifying the accent of an utterance while preserving the original speaker's timbre for arbitrary speakers not seen during training. In the monolingual case, AC occurs within the same language, whereas in the cross-lingual case, we convert to accents of a different language in the absence of parallel data (e.g., applying a British accent to Mandarin speech using only British-accented English speech, or applying a Sichuan accent to English speech using only Sichuan-accented Mandarin speech, as recordings of British-accented Mandarin speech or Sichuan-accented English speech are not available).
We depict the modifications to our BN2BN design used only for cross-lingual conversion in Figure 4, whereby we add a gradient reversal module, with λ = −1, to promote language-agnostic representations. In both cases, AC is achieved in a many-to-many manner, i.e., a user can convert an arbitrary number of source accents to an arbitrary number of target accents in one BN2BN model. All samples are generated using our VS-EN-CN-L10 backbone model. To our knowledge, VoiceShop is the first framework that achieves cross-lingual AC.

Subjective CMOS Evaluation. We evaluate our model against Jin et al. [2023], a recent work that also achieves zero-shot identity-preserving many-to-many accent conversion, curating 16 kHz samples directly from their official demo page [9]. As our diffusion backbone model requires 24 kHz input, we use the official pre-trained checkpoint of AudioSR-Speech [10], a versatile audio super-resolution model, to predict the corresponding high-fidelity 48 kHz speech, which we then downsample to 24 kHz prior to performing inference with our framework. Since naive upsampling of the 16 kHz audio cannot recover the missing spectral information between 8 and 12 kHz in the frequency domain, a generative model is necessary to properly extend the bandwidth. After inference, all samples are downsampled to 16 kHz for a fair comparison.

[7] https://edresson.github.io/YourTTS/
[8] https://github.com/trinhtuanvubk/Diff-VC
[9] https://accent-conversion.github.io/
[10] https://github.com/haoheliu/versatile_audio_super_resolution

Figure 4: Training configuration of cross-lingual AC using BN2BN modeling, featuring adversarial domain adaptation via gradient reversal to promote language-agnostic content representations in the learned latent space of the universal encoder. We use the same reference encoder proposed in Wang et al. [2018].

Table 4: Results of subjective CMOS evaluation of monolingual AC for accent strength (aCMOS) and speaker similarity (sCMOS).
Model                                   aCMOS (↑)   sCMOS (↑)
VoiceShop (Ours) vs. Jin et al. [2023]  0.89±0.15   0.11±0.13

We conduct two CMOS studies to respectively measure the accent strength (aCMOS) and speaker similarity (sCMOS) of our systems using 30 pairs of samples (specifically, we evaluate on their British-to-American, British-to-Indian, and Indian-to-American subsets). Both studies recruit 20 anonymous native English speakers balanced across gender using the Prolific crowd-sourcing platform [11]. Participants are asked to rate pairs of samples on a scale from -3 to 3, where -3 indicates a strong preference for the baseline model, 3 indicates a strong preference for VoiceShop, and 0 indicates no preference. Each comparison is made against provided reference clips to standardize the comparisons between listeners, who may have different expectations of the target accents' characteristics. The results are summarized in Table 4. We find that participants prefer the accent strength of our system by a clear margin while maintaining relatively equal performance on speaker similarity.

Objective Evaluation. We conduct additional objective evaluations to judge VoiceShop's monolingual and cross-lingual AC capabilities. Specifically, we train three accent classifiers, each consisting of an ECAPA-TDNN encoder [Desplanques et al., 2020] and a multilayer perceptron, on the same datasets used to train our BN2BN models for monolingual English AC, monolingual Mandarin AC, and cross-lingual AC. All classifiers are trained using AdamW [Loshchilov and Hutter, 2019] for 24 hours on 8 V100 GPUs with a batch size of 64, a learning rate of 1 × 10^-4, and a decay rate of 0.85 applied every 100 epochs, achieving validation accuracies of 99.3% in English, 99.8% in Mandarin, and 99.9% in the cross-lingual case on their respective holdout sets.
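The gradient reversal module used for cross-lingual training (Figure 4) acts as the identity in the forward pass and multiplies gradients by λ = −1 in the backward pass, so the language classifier's training signal pushes the universal encoder toward language-agnostic representations. A minimal PyTorch sketch (class and variable names are ours, not the paper's):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales incoming gradients by lam
    (= -1 in the cross-lingual BN2BN setup) in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return ctx.lam * grad_out, None  # no gradient w.r.t. lam

# Gradients flowing back through this op are flipped in sign, so minimizing
# the language classifier's loss maximizes the encoder's language confusion.
x = torch.ones(3, requires_grad=True)
GradReverse.apply(x, -1.0).sum().backward()
```

After the backward call, `x.grad` holds −1 in every position, whereas an ordinary identity would leave +1.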
For each conversion type, we synthesize input speech in a variety of accents using the Microsoft Azure TTS service and convert each source accent to each target accent using our models. We use the learned latent space of each classifier to investigate the accent similarity of samples generated by our system, first computing the centroids of embeddings extracted from the last layer of each classifier using reference samples for each target accent. We then extract embeddings of samples and calculate their cosine similarity against the averaged reference embeddings for three categories: a separate set of ground truth samples, the accent-converted outputs of our models, and non-matching accents to establish a performance lower bound. We report overall similarities in Table 6 and provide a detailed breakdown of accent-wise similarities in Table A5.

[11] https://www.prolific.com/

Table 6: Results of monolingual and cross-lingual accent conversion objective evaluation, measuring conversion strength. For each target accent, we provide the averaged cosine similarity of embeddings extracted from accent classifiers.

Conversion Type       Ground Truth   Model Output   Non-Matching Accent
English Monolingual   0.996±0.001    0.798±0.016    0.018±0.004
Mandarin Monolingual  0.987±0.003    0.915±0.011    0.621±0.018
Cross-Lingual         0.984±0.002    0.828±0.015    0.550±0.018

Table 5: Results of monolingual and cross-lingual accent conversion objective evaluation, measuring speaker similarity. We denote the number of samples used to calculate each value in parentheses.
Conversion Type       ASV (↑)
English Monolingual   0.508±0.006 (2,000)
Mandarin Monolingual  0.809±0.003 (1,280)
Cross-Lingual         0.625±0.010 (1,280)
Overall               0.625±0.005 (4,560)
Same Speaker          0.925±0.001 (1,900)
Non-Matching Speaker  0.175±0.005 (3,420)

To visualize these results, we generate t-SNE plots [van der Maaten and Hinton, 2008] of the predicted embeddings of input speech in each source accent and output speech in each target accent, as depicted in Figure 5. We find that the accent classifiers effectively cluster input speech according to source accent and further observe that output speech generally preserves these clusters after accent conversion regardless of the source accent. We evaluate the speaker similarity for each conversion type of our BN2BN method alongside additional reference values using the ASV metric. We find that the timbre of the original speaker is largely preserved in both monolingual and cross-lingual scenarios after accent conversion, as summarized in Table 5.

4.3 Zero-Shot Identity-Preserving Speech Style Conversion

We evaluate VoiceShop's speech style conversion capabilities using three styles ("Sarcastic Youth," "Formal British," and "Cartoon Character"). In these examples, the target styles are acquired from in-house datasets in which actors perform highly stylized speech, amounting to approximately one hour of data per style. Conversion occurs in a many-to-one manner, such that English speakers of arbitrary source accents are converted to the target style while preserving their timbre in a zero-shot manner. We use our VS-EN-L10 backbone model to generate samples.

Subjective MOS Evaluation. To measure the perceived conversion strength of the aforementioned speech styles, we conduct a MOS study consisting of 20 anonymous native English speakers balanced across gender using the Prolific crowd-sourcing platform.
We use the Microsoft Azure TTS service to generate speech in 7 timbres and convert each to the target speaking styles using our BN2BN models. We ask participants to rate the perceived similarity of a sample's speaking style against a provided reference clip on a scale from 1 to 5, where 1 indicates low similarity and 5 indicates high similarity, for three categories: ground truth samples extracted from the training corpus, neutral speech inputs, and model outputs with converted speaking styles (21 samples each). We summarize the results in Table 7 and find that the outputs of our style conversion models perform comparably to ground truth samples, as opposed to the neutral inputs, indicating that the target speaking styles are reliably transferred to unseen speakers.

Objective Evaluation. We again use ASV to measure the speaker similarity of our style conversion models under the same configuration and provide results in Table 8. We note that a fair comparison would calculate the ASV between the same speaker's stylized and non-stylized speech. However, such paired samples do not exist as ground truth. Therefore, the ASV score for same speaker and style in Table 8 does not expose the ASV drop due to the stylization of the same person's speech, which is reflected in the score for our model output.

Figure 5: Visualizing accent transfer: (a) Monolingual English accent conversion; (b) Monolingual Mandarin accent conversion; (c) Cross-lingual accent conversion. By using the latent space of accent classifiers, we observe that input speech is clustered by source accent and that accent-converted speech predicted by our BN2BN models largely preserves these structures according to the target accents.
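The ASV metric used throughout these comparisons reduces to a cosine similarity between fixed-length speaker embeddings; a minimal sketch, with arbitrary vectors standing in for the WavLM-TDNN embeddings:

```python
import numpy as np

def asv_score(emb_a, emb_b):
    """Cosine similarity of two fixed-length speaker embeddings, in [-1, 1];
    higher values indicate closer timbre in the verification model's
    learned latent space."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical directions score 1, orthogonal embeddings score 0, and opposite directions score -1, matching the upper/lower bounds reported via same-speaker and non-matching-speaker pairs.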
4.4 Zero-Shot Combined Multi-Attribute Editing

Rather than performing serial editing of individual attributes, we showcase our framework's performance on combined multi-attribute editing, whereby arbitrary unseen users can modify their accent, age, and gender simultaneously in a single forward synthesis pass while preserving their timbre. This capability is enabled by our plug-in modular design, which allows concurrent editing of both the speaker embedding and the content features of our VS-EN-CN-L18 backbone model.

Subjective CMOS Evaluation. We design a modified CMOS study to measure two key qualities of VoiceShop's multi-attribute editing capabilities: overall editing capability, i.e., participants agree that attributes we claim to edit are in fact edited according to their best judgment, and attribute disentanglement, i.e., the modification of any combination of attributes does not alter participants' perception of other attributes we claim are kept constant.

Table 7: Results of subjective MOS evaluation of style conversion measuring conversion strength of three styles ("Sarcastic Youth," "Formal British," and "Cartoon Character").

Speech Type             Style MOS (↑)
Non-Stylized Input      1.57±0.10
Model Output            3.83±0.13
Same Speaker and Style  4.05±0.12

Table 8: Results of style conversion objective evaluation, measuring speaker similarity. We denote the number of individual samples used to calculate each value in parentheses.

Speech Type             ASV (↑)
Non-Matching Speaker    0.168±0.012 (420)
Model Output            0.492±0.015 (210)
Same Speaker and Style  0.903±0.003 (210)

Table 9: Results of multi-attribute editing subjective CMOS evaluation. The accuracy indicates the proportion of participants who correctly completed each task. We denote the number of responses for each task in parentheses. Some tasks have overlapping samples.
Category: Correctly observe when an attribute has been edited
  Correctly observe accent editing                                            89.38% (160)
  Correctly observe age editing                                               92.50% (160)
  Correctly observe gender editing                                            96.88% (160)
  Correctly observe that any attribute has changed, regardless of which
  attributes are edited                                                       92.92% (480)
Category: Correctly observe when an attribute has not been edited
  Correctly observe other attributes have not changed when editing accent     80.62% (160)
  Correctly observe other attributes have not changed when editing age        99.38% (160)
  Correctly observe other attributes have not changed when editing gender     86.25% (160)
  Correctly observe other attributes have not changed when editing any
  attribute                                                                   87.78% (360)
Category: Overall accuracy
  Correctly observe when any attribute has or has not been edited             90.71% (840)

To this end, we generate samples using two source speakers and perform all permutations of multi-attribute editing for one, two, and three attributes (i.e., we edit accent, age, and gender individually; accent and age, accent and gender, and age and gender simultaneously; and all three attributes simultaneously). For each transformation, participants are provided with input and output speech pairs and asked to identify which of the samples is more accurately described by a text description of each attribute, where they may select "neither" if they believe neither sample matches the description (e.g., given a sample of a male speaker where we only perform accent conversion, we would like participants to correctly select "neither" when asked which sample sounds like it was more likely spoken by a female speaker). An additional advantage of including "neither" as a response is that listeners are not obliged to give answers favorable to our system if the editing strength is not strong enough to warrant a clear preference.
This design ensures that participants not only agree with correct descriptions of the provided samples, but also can confidently identify when descriptions are incorrect. We recruit 20 anonymous native English speakers balanced across gender using the Prolific crowd-sourcing platform and summarize our findings across 42 input-output pairs in Table 9. We observe that the majority of participants can correctly identify when any attribute has or has not been edited, suggesting that our approach is capable of strong, disentangled, combined multi-attribute editing in a single forward pass.

Objective Evaluation. We first experiment with editing age and gender using out-of-domain speech samples from VCTK, randomly selecting three speech samples for each of the 109 speakers in this dataset, which are also used by NANSY++ [Choi et al., 2022]. We use the attribute predictor in §3.3.1 to analyze the age and gender distributions of VCTK samples before and after our attribute editing, with results shown in Figure 6. We demonstrate that our framework achieves competitive age and gender editing strength, even though our CNF is trained only on weakly-supervised public attribute datasets. For this comparison, we use our VS-EN-L18 backbone model.

5 Conclusion

In this work, we presented VoiceShop, a novel speech-to-speech framework that enables the modification of multiple speech attributes while preserving the input speaker's timbre in both monolingual and cross-lingual settings. It overcomes the limitations of previous models by enabling simultaneous editing of attributes, providing zero-shot capability for out-of-domain speakers, and avoiding the timbre leakage that alters a speaker's unedited attributes in voice editing tasks. It additionally provides a new assortment of methods to tackle the under-explored voice editing task.
While VoiceShop offers new capabilities for fine-grained speaker attribute editing, an apparent limitation of our work is that downstream tasks are still bounded by the quality of supervised data. For example, due to the highly imbalanced age distribution of our CNF attribute dataset, editing performance becomes limited as we reach under-represented age ranges, and the perceived naturalness of our BN2BN output is capped by that of the synthetic speech used for training. Furthermore, while we showcase cross-lingual synthesis capabilities, we are still constrained to English and Mandarin content, motivating the exploration of universal speech representations that generalize to all languages, which we leave to future work.

6 Ethical Considerations

As with all generative artificial intelligence systems, the real-world impact and potential for unintended misuse of models like VoiceShop must be considered. While there are beneficial use cases of our framework, such as providing entertainment value or lowering cross-cultural communication barriers by allowing users to speak other languages or accents in their own voice, its zero-shot capabilities could enable a user to generate misleading content with relative ease, such as synthesizing speech in the voice of an individual without their knowledge, presenting a risk of misinformation. This concern is not unique to VoiceShop and motivates continued efforts towards developing robust audio deepfake detection models deployed in tandem with generative speech models [Guo et al., 2024, Zang et al., 2024, Zhang et al., 2023b] or the incorporation of imperceptible watermarks in synthesized audio signals to classify genuine or generated speech [Cao et al., 2023, Chen et al., 2024, Juvela and Wang, 2024, Liu et al., 2023a, Roman et al., 2024], although this subject is outside the scope of this work.
In an effort to balance the need for transparent, reproducible, and socially responsible research practices, and due to the proprietary nature of portions of the data used in this work, we share the details of our findings here, but do not plan to publicly release the model checkpoints or implementation at this time. The authors do not condone the use of this technology for illegal or malicious purposes. Figure 6: Predicted age and gender distributions after: (a) editing age only; (b) editing gender only. We show that not only does the edited attribute shift in the desired direction with varying editing signal strength, but also that the unedited attribute remains similar." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.10614v1", |
| "title": "Emergent intelligence of buckling-driven elasto-active structures", |
| "abstract": "Active systems of self-propelled agents, e.g., birds, fish, and bacteria, can\norganize their collective motion into myriad autonomous behaviors. Ubiquitous\nin nature and across length scales, such phenomena are also amenable to\nartificial settings, e.g., where brainless self-propelled robots orchestrate\ntheir movements into spatio-temportal patterns via the application of external\ncues or when confined within flexible boundaries. Very much like their natural\ncounterparts, these approaches typically require many units to initiate\ncollective motion such that controlling the ensuing dynamics is challenging.\nHere, we demonstrate a novel yet simple mechanism that leverages nonlinear\nelasticity to tame near-diffusive motile particles in forming structures\ncapable of directed motion and other emergent intelligent behaviors. Our\nelasto-active system comprises two centimeter-sized self-propelled microbots\nconnected with elastic beams. These microbots exert forces that suffice to\nbuckle the beam and set the structure in motion. We first rationalize the\nphysics of the interaction between the beam and the microbots. Then we use\nreduced order models to predict the interactions of our elasto-active structure\nwith boundaries, e.g., walls and constrictions, and demonstrate how they can\nexhibit intelligent behaviors such as maze navigation. The findings are\nrelevant to designing intelligent materials or soft robots capable of\nautonomous space exploration, adaptation, and interaction with the surrounding\nenvironment.", |
| "authors": "Yuchen Xi, Trevor J. Jones, Richard Huang, Tom Marzin, P. -T. Brun", |
| "published": "2024-04-16", |
| "updated": "2024-04-16", |
| "primary_cat": "cond-mat.soft", |
| "cats": [ |
| "cond-mat.soft", |
| "nlin.AO" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "Emergent intelligence of buckling-driven elasto-active structures", |
| "main_content": "INTRODUCTION The study of active matter, living or inert, focuses on understanding the mechanical and statistical properties of systems comprising elements capable of converting energy into movement. The field is particularly interested in identifying the principles governing the emergence of self-organized spatio-temporal patterns on scales larger than individual motile units. Examples range from liquid-crystalline order in bacterial flocks to polar order in a school of fish[1]. While common in nature, active matter systems are also amenable to artificial laboratory systems[2]. Exploring model experimental systems allows a careful investigation of the inner workings of active matter, particularly identifying the onset of collective behaviors and rationalizing pattern formation within bulk ensembles of active particles. Historically, the field has focused heavily on fluids and fluid-like systems[1], making active elastic systems comparatively less explored[11]. In recent years, self-propelled microbots, e.g., Hexbug Nano\u00ae[12], have been identified as a tunable and reliable means for developing active structures, e.g., oscillatory tails[13] and active elastic solids[14]. The motion of individual microbots is understood as that of vibrating masses whose frictional contacts cause propulsion[15\u201317]; they can be modeled as self-propelled particles that follow Langevin dynamics on timescales much longer than the vibration period of their body. This approach allows for the modeling of microbot dynamics in confined geometries[3, 18] or in a harmonic trap[19]. The collective behavior of such microbot systems has received particular attention [3\u20136]: in bounded and crowded environments, these microbots can display a gas-like behavior[9, 10] or cluster around the edges of boundaries[3\u20135]. 
In addition, external cues such as light and magnets can be used to control such robotic swarms, e.g., to form clusters or direct movements[7, 8]. However, such methodologies are still in their infancy, so finding means to effectively and efficiently control such microbot systems remains an ongoing effort essential to developing robotic matter capable of achieving autonomous, predictive, and tunable motions. Here, we introduce a new form of autonomous physical behavior by coupling active particles with nonlinear elasticity. Fig. 1(a) illustrates our approach involving two self-propelled microbots connected by an elastic polyester beam. We operate in a regime where the active force exerted by the microbots is sufficient to buckle the connecting beam, thereby aligning the microbots and allowing this contraption, coined the bucklebot, to move across a flat substrate. While individual microbots remain trapped in a confined space for prolonged periods (Fig. 1(b)), a bucklebot manages to solve a maze efficiently, as evident in Fig. 1(c) and Movie S1. Combining experiments and theory, we elucidate the physics governing the dynamics of these bucklebots. We then explore the interaction of bucklebots with physical boundaries, e.g., plane walls and narrow constrictions. Finally, we leverage these quantitative results to elucidate how bucklebots can develop emergent intelligent behaviors such as solving a maze, probing a path, or organizing dispersed particles. FIG. 1. From mindless particles to emergent intelligence: (a) Photograph of the bucklebot, showing two microbots connected by a thin polyester beam. (b) Individual microbot trajectory in a confined space. (c) A bucklebot efficiently navigates a maze within 25 seconds. The dashed area in (c) matches the space shown in (b). (All scale bars are 50 mm in length, and trajectories are color-coded by time.) II. RESULTS A. 
Bucklebot characterization FIG. 2. Dynamics and characterization of bucklebots: (a)(i) Timelapse of a bucklebot with \u2206t = 0.1 s (scale bar=50 mm); (ii) bucklebot dynamics obtained by integrating our model (SI Sections A-C). (b) Rescaled velocity V/Vf and (c) bending angle \u03c8 versus rescaled force F\u21132/B. The black line represents the predicted steady-state solution with \u03bb \u22430 (SI Section D). Insets: bucklebot velocity V and bending angle \u03c8 plotted against beam length \u2113 for three beam thicknesses (0.102, 0.191, and 0.254 mm). Lines represent the steady-state solutions of Eqns. 3a-3b in the SI. (d) Log-log plot of the mean squared displacement (MSD) versus time for a single microbot (blue), a bucklebot with F\u21132/B \u224340 (orange), and a bucklebot with F\u21132/B \u22431000 (black). Inset: MSD exponents for bucklebots versus F\u21132/B. Figure 2 summarizes the main results pertaining to bucklebots evolving in free space. In Fig. 2(a), we show the onset of their motion. Namely, when released, the microbots progressively bend the beam that connects them before assuming a final steady-state configuration characterized by a bending angle \u03c8 and a steady-state velocity V, reached after nearly a second. In Fig. 2(b)-(c), we show the variation of these observables when the length and thickness of the beam are varied. For relatively short and thick (thus stiff) beams, the angle \u03c8 remains close to zero, and the structure barely moves. For longer and thinner (thus soft) beams, the force exerted by the microbots is sufficient to buckle the beam, increasing \u03c8 until the limit value of \u03c0/2 is approached. 
At this point, the microbots are parallel, facing the same direction and moving at a speed close to their free velocity Vf . The value of Vf is typically related to the force exerted by the microbots and the friction between the structure and the substrate, Vf = F/\u03b3 where F is the microbot force and \u03b3 is the effective drag coefficient acting on the microbots[20]. We recast our experimental data in dimensionless form using Vf as our speed gauge and B/\u21132 as the force gauge that captures the beam resistance to bending, where B is the bending stiffness and \u2113is the length of the beam. In Fig. 2(b)-(c), we show that our data collapse to a single master curve, confirming the relevance of rescaled force F\u21132/B in predicting the system behavior. Our experiments show a non-zero velocity and bending angle even for small values of F\u21132/B. While the microbots cannot buckle the beam, the bucklebot slides or rotates slowly due to the vibrations from the motor. As F\u21132/B increases, both \u03c8 and V increase until they reach a plateau around F\u21132/B \u224350. Overall, this transition and the overall variation in geometry and speed are favorably recovered by our model, which is obtained by combining the Kirchhoff equations for elastic beams with a force and moment balance for microbots (See SI Eqn. 1-2). The difference between experiment and theory is attributed to a finite size effect: the microbots are not point masses, so a third dimensionless number \u03bb = L/\u2113is introduced to describe their length relative to that of the beam. In the limit case where \u03bb \u22430, the transition between static and translation occurs at F\u21132/B \u224310, in agreement with Euler\u2019s critical load for column ends with hinge-hinge boundary conditions[21]. In contrast, for larger values of \u03bb, the microbots exert higher lever-arm torques onto the beam, thereby diminishing the critical buckling load (See SI Section E). 
Having understood the shape and instantaneous velocity of our bucklebots, we move on to describe their long-term behavior. In Fig. 2(d), we calculate their mean squared displacement MSD = \u27e8|r(t) \u2212 r(0)|^2\u27e9, where r(t) is the position vector at time t and \u27e8\u00b7\u27e9 denotes the average over all recorded trajectories. In Fig. 2(d), we plot the MSD of two bucklebots with rescaled forces of 40 and 1000 together with that of single microbots. Single microbots show diffusive-like behavior resembling a noisy walker[22] with a reorientation time \u03c4 \u22431.3 s and long-term MSD \u221d t^{1.4}. In contrast, the bucklebots with F\u21132/B \u224340 translate ballistically (MSD \u221d t^2) over the range of times (>5\u03c4) we probed. Bucklebots achieve persistent directed motion despite the direction changes typically observed in each unit and their inevitable differences. This result remains true for 10 < F\u21132/B < 600, where similar behaviors are observed (see inset). However, past this upper limit, bucklebots demonstrate slower movement and cover two orders of magnitude smaller areas throughout the measurement, as evident from the black line. In such high-force regimes, the beam\u2019s internal resistance to bending is negligible and thus insufficient to align the motions of the microbots. The microbots tend to buckle the beam into its second (and higher) buckling modes, so the bucklebot rotates while slowly translating (see Movie S2). In the following, we focus on bucklebots with F\u21132/B within the range from 10 to 600 and probe their interactions with boundaries. FIG. 3. Bucklebots interacting with boundaries: (a) Overlaid photographs of a bucklebot with F\u21132/B \u224360 approaching a flat wall with angle \u03b1, following the wall for some time \u03c4r, and bouncing off with a reflection angle \u03b2 (scale bar=50 mm). (b) Residence time \u03c4r versus F\u21132/B for three sets of \u03b1. 
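The noisy-walker picture above can be made concrete with a minimal active-particle simulation (our own sketch, not the authors' code; the speed, reorientation time \u03c4, and trial counts below are illustrative): a particle moving at constant speed with a diffusing orientation is ballistic for t \u226a \u03c4 and near-diffusive for t \u226b \u03c4, whereas a rigidly aligned pair keeps MSD \u221d t^2 for longer.

```python
import math
import random

def simulate_msd(v, tau, dt, n_steps, n_trials, seed=0):
    """Noisy walker: constant speed v, orientation undergoing rotational
    diffusion with reorientation time tau (D_r = 1/tau). Returns the
    mean squared displacement at each time step, averaged over trials."""
    rng = random.Random(seed)
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_trials):
        x = y = 0.0
        theta = rng.uniform(0.0, 2.0 * math.pi)
        for k in range(1, n_steps + 1):
            # orientation random walk with variance 2*D_r*dt
            theta += math.sqrt(2.0 * dt / tau) * rng.gauss(0.0, 1.0)
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            msd[k] += (x * x + y * y) / n_trials
    return msd
```

Estimating the local MSD exponent at short and long times (via log-ratios) shows the crossover from ballistic (exponent near 2) to near-diffusive scaling, mirroring the single-microbot curve in Fig. 2(d).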
Markers represent experiments (triangles: \u03b1 = \u03c0/6, squares: \u03b1 = \u03c0/3, diamonds: \u03b1 = 4\u03c0/9). Lines are the predictions from the self-oscillation model (see SI Section G). The error bars represent the standard deviation of \u03c4r for each bucklebot. The inset shows \u03b2 versus \u03b1. (c) Snapshots of passage through a slit of width \u03b4 = 6 cm: (i) a bucklebot with F\u21132/B \u2243140 and (ii) one with F\u21132/B \u224313 (scale bar=50 mm). (d) Success of passage through the slits over ten launches, shown as a function of the rescaled gap size \u03b4/\u2113 and F\u21132/B. The experimental data are color-coded by the success rate of passage, as shown by the color bar on the right. The dashed line indicates the equilibrium width of the free bucklebot (see SI Section D), and the solid line corresponds to our model (SI Eqn. 27). The shaded gray area is our prediction for the region where the bucklebots are expected to bounce off the slit. We first turn our attention to the interaction of a bucklebot with a plane boundary (see Fig. 3(a)). The bucklebot approaches the wall with an angle \u03b1 and is found to follow the wall for some residence time \u03c4r before reflecting off with an angle \u03b2. In Fig. 3(b), we find that the reflection angles \u03b2 are consistently around \u03c0/2, irrespective of the value of \u03b1. However, the residence time \u03c4r increases as \u03b1 decreases: shallower approaches stay longer along the wall than a direct hit. Additionally, we find that \u03c4r \u221d \u221a(F\u21132/B). To rationalize this scaling law, we observe that the microbot in contact with the wall is typically slower than the other one, presumably because of the added friction. As such, the faster outer microbot overtakes its slower counterpart and forces the beam to snap (see Movie S3). Inspired by this behavior, we introduce the limit-case scenario in which a single microbot is attached to an elastic beam clamped at one end. 
We model the ensuing oscillatory dynamics (see SI Eqns. 28-29) and recover the scaling law observed in experiments, as indicated by the solid lines in Fig. 3(b). Our model underpredicts our data since, in our experiment, the bucklebot at the wall is not clamped but instead slides, thereby delaying the beam\u2019s oscillation. Next, we turn to study the passage of a bucklebot through constrictions. Figure 3(c)(i) illustrates the bucklebot\u2019s ability to deform and pass through a tight slit with opening \u03b4 < w, with w the bucklebot width. If the beam is too stiff or the slit is too small, the bucklebot will bounce off the constriction (see Figure 3(c)(ii)). These results are formalized in Figure 3(d), where we report the probability of successful passage as a function of the gap size rescaled by the beam length, \u03b4/\u2113, and the rescaled force, F\u21132/B. As evident from the figure, larger slits and larger forces correlate with a higher probability of successful passage. In red, we show the bucklebot equilibrium width w/\u2113. The region below (resp. above) w/\u2113 corresponds to slits smaller (resp. larger) than the equilibrium width. All the trials above this curve have a 100% chance of passing (we send our robots straight onto the slit). However, a sizable region below the curve also sees significant success. We rationalize the boundary of this success region by considering the minimal length over which the microbots can bend the structure, i.e., \u03c0\u221a(B/F), which coincides with the width of the smallest slits that bucklebots can pass. III. DISCUSSION To summarize, our bucklebots, consisting of two self-propelled microbots coupled by a soft elastic beam, achieve persistent ballistic motion, follow walls, and squeeze their deformable structures through narrow constrictions. The combination of these unique capabilities allows them to perform tasks that individual microbots cannot achieve, such as solving a maze (Fig. 1(c)). 
In the following, we leverage these emergent abilities and demonstrate that the bucklebots can accomplish a broad range of tasks. When sent into a closed path, a bucklebot will navigate to the closed end, bounce back, and reappear at the starting point (see Movie S4). In Fig. 4(a), we show the length traveled by the robots rescaled by the length of the path. While individual microbots travel on average nearly 4 times more than necessary (with nearly 100% variability between trials), we find that our bucklebots converge to the optimal path as F\u21132/B increases (while dramatically reducing variability). In this limit, our bucklebots can be used to probe and classify simple structures (see Fig. 4(b), where the identification is achieved by recording the entry and exit times). FIG. 4. Bucklebots probing a closed path and \u201cstoring\u201d a room: (a) Traveled length over actual path length, plotted for bucklebots with a wide range of F\u21132/B. Error bars show the standard deviation of the bucklebots\u2019 traveled lengths. The solid blue line shows the benchmark for a single microbot, and the shaded blue area is its error range. (b) The left snapshot shows a probing experiment: a bucklebot with F\u21132/B \u2243560 is sent into a covered closed path. The schematic drawings show two paths (longer/shorter) that the bucklebots can probe and differentiate. In 14 and 25 seconds, the bucklebot reappears at the starting point of the shorter and longer path, respectively. (c) Snapshots of the evolution of a confined room stored by the same bucklebot. The black circles denote isolated obstacles, and the green boundaries correspond to formed clusters. (d) The number of elements representing single or connected obstacles, plotted against time for two single microbots and for a bucklebot with F\u21132/B \u2243380. Each shaded area denotes the standard deviation over 5 trials. The two black lines are fits derived from coagulation theory. Likewise, bucklebots differ from individual microbots when interacting with obstacles they can displace. In Fig. 4(c), we report a few snapshots of a bucklebot confined with initially dispersed cylindrical obstacles (N0 = 50). We find that the bucklebot (F\u21132/B \u2243380) pushes the light obstacles and assembles them into clusters. The number of elements saturates in about a minute. In Fig. 4(d), we report the dynamics of cluster formation for this bucklebot and contrast it with the situation where two microbots freely travel in a similar enclosure. In both cases, we observe an initial decrease in dispersed elements, N, before reaching saturation. Bucklebots store nearly 76% of the obstacles into clusters, while two microbots store only 38% of them. Further differences arise when fitting the data with a Smoluchowski-like equation for coagulation[23], N(t) = N0/(1 + t/\u03c4). The corresponding coagulation time scale \u03c4 indicates a faster decay for the bucklebot (\u03c4 = 23.3 s) than for single agents (\u03c4 = 49 s). Additionally, the bucklebot interacts more gently with the clusters than single microbots do, preventing damage and thus facilitating the formation of larger clusters. The distance between these assemblies is about w, the bucklebot width (Fig. 4(c)). We have shown that stochastic self-propelled active particles coupled with nonlinear elasticity can be tamed and forced into ballistic motion, and that they display various emergent abilities as they interact with different boundaries. These autonomous elasto-active structures carry out all these tasks without directed control. Instead, our elastic model can rationalize and capture these behaviors. 
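The Smoluchowski-like decay quoted above, N(t) = N0/(1 + t/\u03c4), is simple enough to sketch directly; the linearized least-squares estimator for \u03c4 below is our own choice of fitting procedure, not necessarily the authors':

```python
def smoluchowski(t, n0, tau):
    """Cluster-count decay N(t) = N0 / (1 + t/tau) from coagulation theory."""
    return n0 / (1.0 + t / tau)

def fit_tau(times, counts, n0):
    """Estimate tau by least squares on the linearized model:
    n0/N(t) - 1 = t/tau, a line through the origin with slope 1/tau."""
    num = sum(t * (n0 / n - 1.0) for t, n in zip(times, counts))
    den = sum(t * t for t in times)
    return den / num
```

Fitting both the bucklebot and single-microbot curves this way would yield the two coagulation timescales contrasted in the text (\u03c4 = 23.3 s versus \u03c4 = 49 s).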
We have demonstrated that this newly gained understanding can be leveraged to achieve and control complex tasks, such as maze navigation, probing the length of a path, and collecting cylinders. Our work on these elasto-active structures thus opens a new pathway for designing soft robotic systems that can adjust and adapt to their surroundings without human intervention. ACKNOWLEDGMENTS It is a pleasure to acknowledge Antoine Deblais and Thomas Barois for helpful discussions, as well as funding from NSF through grants NSF CMMI 2343539 and CMMI FMRG 2037097, and financial support from the Princeton High Meadows Environmental Institute. IV. MATERIALS AND METHODS A. Bucklebot design and manufacturing Our active agents are commercially available battery-powered vibrating microbots (Hexbug Nano). Each microbot has a length of 45 mm, a width of 15 mm, a height of 15 mm, and a mass of 7.5 g. Its motion is generated from the internal vibration of a rotating motor transmitted to 12 soft rubber legs, achieving a speed of approximately 154 \u00b1 15 mm/s. The beams are cut from shim stock using a laser cutter (Epilog Helix-60 Laser engraver). The shim stock is made of polyester with an elastic modulus of 2 GPa. The thickness and length of the elastic beams are calibrated to span the range of bending stiffnesses used in the experiments. The collar used to connect the microbots to the elastic beams is designed in Rhino and 3D printed on a Prusa i3 printer using poly-lactic acid (PLA) (density \u03c1 = 1.2 g/cm3 and elastic modulus E = 5 GPa). The beams are clamped to the collars using Dodge 0-80 .115 inch length inserts and corresponding screws. B. Experimental setups and bucklebot tracking The active force exerted by the microbot is estimated by measuring its pushing force via an Instron 10 N load cell. The active force is measured to be 20 \u00b1 3 mN. 
We choose the microbot pairs with approximately the same free velocity and active force to ensure experimental consistency. It is worth noting that a microbot\u2019s manufacturing defects and component variabilities give rise to biased motion. Experimentally, a biased microbot performs a circular motion whose radius is given by R = vf/\u03c9b, where \u03c9b is the angular rotation rate. We adopt the criteria of Baconnier et al. [24] and choose microbots that are not noticeably biased. All experiments are carried out on an acrylic surface. For bucklebots, we change the two microbots\u2019 batteries simultaneously to maintain the same relative battery level throughout the experiments. To capture the motion of the microbots and the bucklebots, a Canon EOS 80D camera is held by a frame looking down at a large white cast acrylic sheet (from McMaster-Carr) on top of the lab table. To track these robots while effectively differentiating each individual from the others, we use binary square fiducial markers, known as ArUco markers, which are synthetic square markers composed of a wide black border and an inner binary matrix that determines the identifier (id). We print out markers with different IDs and attach them to each microbot present in the experiments. With Python\u2019s Open Source Computer Vision (OpenCV) package[25], we post-process the recorded videos by tracking the attached markers\u2019 position data (x, y, t) over time. For example, our code detects the positions (x, y) of each marker\u2019s four corners. We calculate the midpoint positions of opposite edges on each marker, which allows us to obtain the orientation vector of the microbot. In addition, the velocity of a single microbot is measured by multiplying its displacement between consecutive frames by the frame rate (fps), which allows us to further calculate the mean velocity by averaging the marker\u2019s velocities over time. 
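The corner-midpoint geometry described above can be sketched in pure Python (an illustrative reconstruction with an assumed corner ordering; in practice the corner coordinates come from OpenCV's ArUco detector):

```python
import math

def marker_pose(corners):
    """Given the four corner positions [(x, y), ...] of a square fiducial
    marker, ordered around the square, return the marker center and the
    unit orientation vector joining the midpoints of two opposite edges."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    # midpoints of the "front" edge (corners 0-1) and "back" edge (2-3);
    # which edge is the front depends on how the marker is glued on
    mx_f, my_f = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    mx_b, my_b = (x2 + x3) / 2.0, (y2 + y3) / 2.0
    dx, dy = mx_f - mx_b, my_f - my_b
    norm = math.hypot(dx, dy)
    return (cx, cy), (dx / norm, dy / norm)

def velocity(p_prev, p_next, fps):
    """Velocity as displacement between consecutive frames times the frame rate."""
    return ((p_next[0] - p_prev[0]) * fps, (p_next[1] - p_prev[1]) * fps)
```

Averaging `velocity` over all frames of a trajectory gives the mean speed quoted for single microbots.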
We estimate a bucklebot\u2019s center-of-mass position as the midpoint of the line connecting the two marker centers. V. BUCKLEBOT MODEL FIG. 5. Bucklebot model schematic: (a) bucklebot traveling in the y-direction; (b) schematic of a beam, with inceptive and terminal ends; (c) schematic of a microbot of length L with a collar that clamps at the front. We describe an analytical model of the bucklebot that couples the beam dynamics and the microbots\u2019 self-propelled motion. We begin by introducing the beam equations and the self-propelled microbot equations, rescaling them by common length, time, and force scales, and discussing the four dimensionless groups that describe the general dynamics. We then discuss the mathematical constraints imposed by the collars that clamp the beam to the microbots in the bucklebot configuration. Next, we justify and introduce the time-stepping algorithm used to simulate the bucklebots. Finally, we derive the analytic results that provide predictions for the bucklebot velocity, shape, and onset of buckling. A. Equations and rescaling The beam dynamics can be described by the 2D Kirchhoff equations $\partial \mathbf{n}/\partial s = \rho_b\, \partial^2 \mathbf{r}/\partial t^2$ (1a) and $\partial m/\partial s + \mathbf{e}_t \times \mathbf{n} = 0$ (1b), with the constitutive equation $m = B\, \partial\theta/\partial s$ (1c). Here $\mathbf{n}(s,t)$ is the internal force, $m(s,t)$ is the internal moment, $s \in [0, \ell]$ is the arc-length position along the beam, $t$ is time, $\mathbf{r}(s,t)$ is the center-line position, and $\partial \mathbf{r}/\partial s = \mathbf{e}_t(s,t) = \{\cos\theta, \sin\theta\}$ is the unit tangent, with $\theta(s,t)$ the angle between the beam tangent $\mathbf{e}_t$ and the x-axis $\mathbf{e}_x$. $\rho_b$ and $B$ are material parameters representing the beam linear density and bending stiffness, respectively. 
The self-propelled motion of the microbots obeys the force and moment balance: $M\, d^2\mathbf{x}/dt^2 + \gamma\, d\mathbf{x}/dt = F\mathbf{e}_\parallel + \mathbf{R}$ (2a) and $I\, d^2\psi/dt^2 + \Gamma\, d\psi/dt = \tfrac{1}{2}L\,(\mathbf{e}_\parallel \times \mathbf{R}) + Q$ (2b), where $\mathbf{x}(t)$ is the center of mass, $\mathbf{e}_\parallel(t) = \{\cos\psi, \sin\psi\}$ is the unit orientation vector aligned along the long axis of the microbot, and $\psi(t)$ is the angle of the orientation vector $\mathbf{e}_\parallel$ with the x-axis $\mathbf{e}_x$. $M$, $I$, $F$, $\gamma$, $\Gamma$, and $L$ are material and geometric parameters of the microbot representing the mass, moment of inertia, driving force, translational damping coefficient, rotational damping coefficient, and length, respectively. $\mathbf{R}(t)$ and $Q(t)$ are the reaction force and reaction torque acting on the microbots, with $\tfrac{1}{2}L\,(\mathbf{e}_\parallel \times \mathbf{R})$ being the moment produced by the reaction force applied at the collar, away from the microbot center of mass. The bending stiffness $B$, mass $M$, driving force $F$, translational friction coefficient $\gamma$, beam length $\ell$, and microbot (plus collar) length $L$ are measured in the lab as described in the Materials and Methods section. The linear beam density $\rho_b$ is taken from manufacturer data and beam geometry. The moment of inertia $I = \iint_{\mathcal{R}} \rho_H r^2\, dA$ and the rotational damping coefficient $\Gamma = \iint_{\mathcal{R}} \rho_{fl} r^2\, dA$ are derived from the microbot's area mass density $\rho_H$ and damping density $\rho_{fl}$, with $r$ the distance from the center of mass, integrated over the microbot body $\mathcal{R}$. For simplicity, we assume a rod-shaped body with $W \ll L$ and uniform densities $\rho_H = M/(LW)$ and $\rho_{fl} = \gamma/(LW)$, such that $I \approx \tfrac{1}{12}ML^2$ and $\Gamma \approx \tfrac{1}{12}\gamma L^2$. 
Rescaling length by the beam length, $\{\bar{\mathbf{r}}, \bar{\mathbf{x}}, \bar{s}\} = \{\mathbf{r}/\ell, \mathbf{x}/\ell, s/\ell\}$, time by $\bar{t} = t/\sqrt{M\ell^3/B}$, force by $\{\bar{F}, \bar{\mathbf{R}}\} = \{F\ell^2/B, \mathbf{R}\ell^2/B\}$, moment by $\{\bar{m}, \bar{Q}\} = \{m\ell/B, Q\ell/B\}$, and taking $I = \tfrac{1}{12}ML^2$ and $\Gamma = \tfrac{1}{12}\gamma L^2$, we arrive at the dimensionless equations for the beam: $\partial \mathbf{n}/\partial s = \mathcal{M}\, \partial^2 \mathbf{r}/\partial t^2$ (3a), $\partial m/\partial s + \mathbf{e}_t \times \mathbf{n} = 0$ (3b), $m = \partial\theta/\partial s$ (3c), and for the microbots: $d^2\mathbf{x}/dt^2 + \zeta\, d\mathbf{x}/dt = \mathcal{F}\mathbf{e}_\parallel + \mathbf{R}$ (4a), $\tfrac{\mathcal{L}^2}{12}\left(d^2\psi/dt^2 + \zeta\, d\psi/dt\right) = \tfrac{\mathcal{L}}{2}(\mathbf{e}_\parallel \times \mathbf{R}) + Q$ (4b). The rescaled and rearranged Eqs. 3-4 introduce four dimensionless groups: $\mathcal{M} = \rho_b \ell/M$, $\mathcal{L} = L/\ell$, $\mathcal{F} = F\ell^2/B$, and $\zeta = \gamma/\sqrt{MB/\ell^3}$. $\mathcal{M}$ and $\mathcal{L}$ are self-explanatory as they compare the masses and lengths, respectively. In our system, the microbot is always much heavier than the beam, such that $\mathcal{M} \ll 1$; consequently, the beam dynamics are quasi-static. If the microbot were to have zero length, $\mathcal{L} = 0$, the torque balance (Eqn. 4b) would simplify to $Q = 0$, such that the microbots would not resist moments. The microbots, in this case, could be considered point particles with an orientation-dependent driving force. $\mathcal{F}$ represents an elasto-active number that compares the driving force of the microbot to the beam\u2019s resistance to deformation. When $\mathcal{F}$ is large, the microbot easily deforms the beam; when $\mathcal{F}$ is small, the beam remains undeformed. The dimensionless group $\zeta$ is the damping ratio of the microbot with a spring constant $B/\ell^3$. In the actual range of parameters tested, $\zeta \gg 1$, so that we may assume overdamped microbot dynamics. B. Bucklebot constraints The governing equations for the beam (Eqs. 1) are coupled to the governing equations for the microbots (Eqs. 2) through the clamping between the beam ends and the microbot collars. The beam is clamped to two microbots at its inceptive (s = 0) and terminal (s = \u2113) ends, as shown in Fig. 
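As a sanity check on the magnitude of the elasto-active number F\u21132/B: the text reports E = 2 GPa, F \u2243 20 mN, and thicknesses down to 0.191 mm, while the beam width and length used below are illustrative assumptions, not values from the text.

```python
def bending_stiffness(E, t, w):
    """Bending stiffness B = E*t^3*w/12 of a rectangular cross-section
    (Euler-Bernoulli beam theory)."""
    return E * t**3 * w / 12.0

def elasto_active_number(F, ell, B):
    """Dimensionless group F*ell^2/B: microbot driving force versus the
    beam's resistance to bending."""
    return F * ell**2 / B

# E = 2 GPa and F = 20 mN come from the text; w = 5 mm and ell = 10 cm
# are assumed for illustration.
B = bending_stiffness(E=2e9, t=0.191e-3, w=5e-3)       # N*m^2
f_num = elasto_active_number(F=0.02, ell=0.10, B=B)    # ~34, dimensionless
```

For these assumed dimensions the number lands inside the 10-600 window where persistent directed motion is reported.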
5. To distinguish the two microbots, we use the subscripts $(\cdot)_0$ and $(\cdot)_\ell$ to represent the inceptive-end and terminal-end microbots, respectively. The coupling is summarized by the relations: $\mathbf{r}(s{=}0) = \mathbf{x}_0 + \tfrac{L}{2}(\mathbf{e}_\parallel)_0$, $\mathbf{r}(s{=}\ell) = \mathbf{x}_\ell + \tfrac{L}{2}(\mathbf{e}_\parallel)_\ell$ (5a); $\mathbf{e}_t(s{=}0) = (\mathbf{e}_\parallel)_0$ (i.e., $\theta = \psi_0$), $\mathbf{e}_t(s{=}\ell) = -(\mathbf{e}_\parallel)_\ell$ (i.e., $\theta = \psi_\ell - \pi$) (5b); $\mathbf{n}(s{=}0) = \mathbf{R}_0$, $\mathbf{n}(s{=}\ell) = -\mathbf{R}_\ell$ (5c); $m(s{=}0) = Q_0$, $m(s{=}\ell) = -Q_\ell$ (5d). Or, recasting Eqs. 5 in a dimensionless form consistent with Eqs. 3-4, where the terminal-end microbot variables are denoted with the subscript $(\cdot)_1$: $\mathbf{r}(s{=}0) = \mathbf{x}_0 + \tfrac{\lambda}{2}(\mathbf{e}_\parallel)_0$, $\mathbf{r}(s{=}1) = \mathbf{x}_1 + \tfrac{\lambda}{2}(\mathbf{e}_\parallel)_1$ (6a); $\mathbf{e}_t(s{=}0) = (\mathbf{e}_\parallel)_0$ (i.e., $\theta = \psi_0$), $\mathbf{e}_t(s{=}1) = -(\mathbf{e}_\parallel)_1$ (i.e., $\theta = \psi_1 - \pi$) (6b); $\mathbf{n}(s{=}0) = \mathbf{R}_0$, $\mathbf{n}(s{=}1) = -\mathbf{R}_1$ (6c); $m(s{=}0) = Q_0$, $m(s{=}1) = -Q_1$ (6d). C. Time-stepping algorithm Eqs. 3a-4b are solved using a time-stepping algorithm with initial and boundary conditions in Mathematica to generate Fig. 2(a) and compare with experiments. Specifically, we start with two microbots at particular orientations whose fronts face each other. We take these as boundary conditions and use a numerical shooting method to solve for the beam\u2019s force and moment, n and m, respectively. With the beam force and moment matching R and Q, we calculate the translational and rotational accelerations using Eqs. 4a-4b. We then multiply the accelerations by \u2206t to get new velocities, and the velocities by \u2206t to get new positions and orientations. This brings us to the next time step, with new positions and orientations from which to solve for the beam shape, moment, and force. D. 
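The microbot half of this time step (Eqs. 4a-4b) can be sketched as an explicit-Euler update; this is our own schematic, not the authors' Mathematica implementation, and the reaction (R, Q) would be supplied by the shooting-method beam solve at each step.

```python
import math

def euler_step(state, R, Q, F, zeta, lam, dt):
    """One explicit-Euler update of the dimensionless microbot equations.
    state = (x, y, vx, vy, psi, omega); R = (Rx, Ry) and Q are the beam
    reaction force and torque from the beam solve (here passed in)."""
    x, y, vx, vy, psi, omega = state
    ex, ey = math.cos(psi), math.sin(psi)
    # Eq. 4a: acceleration = -zeta*v + F*e_par + R
    ax = -zeta * vx + F * ex + R[0]
    ay = -zeta * vy + F * ey + R[1]
    # Eq. 4b: (lam^2/12)*(domega/dt + zeta*omega) = (lam/2)*(e_par x R) + Q
    cross = ex * R[1] - ey * R[0]
    alpha = (0.5 * lam * cross + Q) * 12.0 / lam**2 - zeta * omega
    vx, vy, omega = vx + ax * dt, vy + ay * dt, omega + alpha * dt
    return (x + vx * dt, y + vy * dt, vx, vy, psi + omega * dt, omega)
```

With zero reaction, the update reduces to a free microbot relaxing to its terminal speed F/\u03b6 along its orientation, which is a convenient consistency check.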
Steady-state solutions for bucklebots To capture the bucklebots\u2019 final shape and velocity, we solve the beam shape and the steady-state translation of the microbots simultaneously. For the bucklebot motion, we assume purely transverse motion in the $\mathbf{e}_y$ direction, so that the traveling velocity is $\mathbf{v} = v_y \mathbf{e}_y$. Here we assume the microbots have equal velocity, $\mathbf{v} = \mathbf{v}_1 = \mathbf{v}_2$, and zero angular velocity, $\omega = 0$. We assume the beam is quasi-static, $\partial \mathbf{n}/\partial s = 0$, such that the reaction forces are equal, $\mathbf{R} = \mathbf{R}_0 = \mathbf{R}_1$. Further, we assume reflective symmetry about the transverse velocity direction. Therefore, the orientation angles of the microbots may be described as $\psi = \psi_0 = \pi - \psi_1$. As a result, the force and moment balances (Eqs. 2) for the two attached microbots of a bucklebot at steady state simplify to: $0 = \mathcal{F}\cos\psi + R_x$ (7a), $v_y = \mathcal{F}\sin\psi + R_y$ (7b), $0 = \tfrac{1}{2}\lambda(R_y \cos\psi - R_x \sin\psi) + Q_0$ (7c); and $0 = -\mathcal{F}\cos\psi - R_x$ (8a), $v_y = \mathcal{F}\sin\psi - R_y$ (8b), $0 = \tfrac{1}{2}\lambda(R_y \cos\psi + R_x \sin\psi) - Q_1$ (8c), where Eqs. 7 describe the microbot attached at the inceptive end and Eqs. 8 the microbot at the terminal end of the beam. Note that the sign differences result from the way we define the hexbug angle symmetry $\psi$: $\cos(\pi - \psi) = -\cos\psi$. Solving Eqs. 7-8, we get: $v_y = \mathcal{F}\sin\psi$ (9a), $R_x = -\mathcal{F}\cos\psi$ (9b), $R_y = 0$ (9c), $Q_0 = -\tfrac{1}{2}\mathcal{F}\lambda\cos\psi\sin\psi$ (9d), $Q_1 = -\tfrac{1}{2}\mathcal{F}\lambda\cos\psi\sin\psi$ (9e). Finally, we match the beam boundary conditions: $\mathbf{n}(0) = \mathbf{n}(1) = -\mathcal{F}\cos\psi\, \mathbf{e}_x$ (10), $m(0) = m(1) = -\tfrac{1}{2}\mathcal{F}\lambda\cos\psi\sin\psi$ (11), $y(0) = y(1) = 0$ (12), $\theta(0) = -\theta(1) = \psi$ (13), and solve Eqs. 3a-3b to retrieve the steady-state shape $(\theta(s), x(s), y(s))$ of the beam. Plugging $\psi = \theta(0)$ into Eqn. 9a, we get the steady-state velocity of the bucklebot. The equilibrium width of the bucklebot, which appears in Fig. 
3(d), is defined as $x(1) - x(0)$.

E. The relative length of the microbot to the beam affects the onset of buckling

In the theory curves of Fig. 2(b)-(c), we plot the limit case $\lambda \simeq 0$, in which the microbots are regarded as point masses. In experiments, however, this is not the most accurate description, and we introduced the third dimensionless number $\lambda = L/\ell$. Here, we further discuss how $\lambda$ affects the behavior of bucklebots at the onset of buckling. Eqn. 3b can be expanded as:

$m' + n_x\sin\theta - n_y\cos\theta = 0$  (14)

At the onset of buckling, we take $n_y \simeq 0$ and $n_x = F\cos\theta$, so that the above equation can be rewritten as:

$m' + \mu^2 y' = 0 \;\Rightarrow\; m + \mu^2 y = m_0$  (15)

where $\mu = \sqrt{F\cos\theta}$ and $m_0 = -\frac{1}{2}F\lambda\cos\theta\sin\theta$. Taking $m = \theta'$ and linearizing about small $\theta \ll 1$, such that $\theta \simeq dy/dx$, we get an ordinary second-order differential equation with boundary conditions:

$y'' + \mu^2 y = m_0$,  (16)

$y(0) = 0$ and $y(1) = 0$.  (17)

FIG. 6. Bucklebot state diagram. The beam buckles and causes a directed velocity (top) and bending angle $\psi$ (bottom) according to the rescaled force and the relative microbot-beam length $\lambda$. The red lines correspond to Eqn. 21, and the areas above the line are color-coded by the steady-state solutions described in Section D.
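The red threshold lines in Fig. 6 follow from the implicit relation of Eqn. 21, $\lambda = 2\cot(\sqrt{f_c}/2)/\sqrt{f_c}$. A minimal numerical sketch (function and variable names are ours) that inverts this relation for the critical rescaled force $f_c = F_c\ell^2/B$:

```python
import math

def critical_force(lam):
    """Solve Eq. (21), lam = 2*cot(sqrt(fc)/2)/sqrt(fc), for fc by bisection.

    g(f) = lam - 2/(tan(sqrt(f)/2)*sqrt(f)) goes from -inf as f -> 0+
    to +lam as f -> pi^2 from below, so the root is bracketed in (0, pi^2).
    """
    def g(f):
        s = math.sqrt(f)
        return lam - 2.0 / (math.tan(s / 2.0) * s)

    lo, hi = 1e-9, math.pi**2 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As $\lambda \to 0$ the critical force approaches Euler's value $\pi^2$ (Section F), and for $\lambda = 4/\pi$ the relation gives $f_c = \pi^2/4$ exactly (since $\cot(\pi/4) = 1$).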
with the solution

$y(x) = -\frac{m_0\left(-1 + \cos(\mu x) + \sin(\mu x)\tan(\mu/2)\right)}{\mu^2}$  (18)

We seek solutions in which the microbots rotate at a very small angle $\psi = \theta_0$:

$\theta_0 \simeq y'(0) = -\frac{m_0\tan(\mu/2)}{\mu}$  (19)

Plugging in the definitions of $\mu$ and $m_0$ and linearizing about small $\theta_0$ with a Taylor expansion, we get

$\theta_0 = \frac{1}{2}\lambda\sqrt{F}\tan(\sqrt{F}/2)\,\theta_0 + O(\theta_0^2)$  (20)

Therefore, we have an implicit equation for the critical force of buckling, $F = f_c$, which we plot as the red line in Fig. 6:

$\lambda = \frac{2\cot(\sqrt{f_c}/2)}{\sqrt{f_c}}$  (21)

F. Euler's critical load

According to our theory curves in Fig. 2(b)-(c) and the prediction of Eqn. 21, in the limit case of $\lambda \simeq 0$ the critical force of buckling is $F_c\ell^2/B \simeq \pi^2$. Here, we verify this result mathematically. Consider a slender column of length $\ell$, hinged at each end, with axial forces $F$ applied at each end. A summation of moments about a point $x$ along the curve yields:

$\sum M = 0 \;\Rightarrow\; M(x) + Fw = 0$  (22)

where $w$ is the lateral deflection. According to Euler-Bernoulli beam theory, the deflection of the beam is related to its bending moment by $M = -B\frac{d^2w}{dx^2}$, so that:

$\frac{d^2w}{dx^2} + \mu^2 w = 0$  (23)

where $\mu^2 = F/B$. Solving this ordinary differential equation with boundary conditions $w(0) = w(\ell) = 0$ yields $\mu_n = \frac{n\pi}{\ell}$, for $n \in \mathbb{N}$. Therefore,

$F_n = \frac{n^2\pi^2 B}{\ell^2}$, for $n \in \mathbb{N}$  (24)

The column is most prone to buckle into its first mode, which has the lowest energy [21], so that

$F_c = \frac{\pi^2 B}{\ell^2} \;\Rightarrow\; F_c\ell^2/B = \pi^2$  (25)

To generate the solid black line in Fig. 3(d), we use this derivation to relate the bucklebot rescaled force to the slit width $\delta$. From Eqn. 25, the minimal length of beam that the microbots can bend is $\ell_{\min} = \pi\sqrt{B/F}$, which should match the width of the smallest slit that the microbot can pass.
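As a quick consistency check of Eqn. 25, the minimal bendable length just stated can be sketched as follows (function names are ours):

```python
import math

def min_bendable_length(F, B):
    """Smallest hinged-beam length that an axial force F can buckle,
    l_min = pi * sqrt(B / F), i.e. the first Euler mode of Eq. (24)."""
    return math.pi * math.sqrt(B / F)
```

At $\ell = \ell_{\min}$ the rescaled force $F\ell^2/B$ equals $\pi^2$ by construction, which is the $\lambda \simeq 0$ limit quoted above.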
Therefore,

$\delta = \pi\sqrt{B/F} \;\Rightarrow\; F\delta^2/B = \pi^2$  (26)

Dividing Eqn. 26 by the rescaled force $F\ell^2/B$ and rearranging, we arrive at

$(\delta/\ell)^2 = \frac{\pi^2 B}{F\ell^2}$  (27)

G. Dynamics of a single tail

Analogous to the behavior of the outer microbot that we observe in Fig. 3(a), we refer to the self-oscillation of a simple configuration in which a single microbot is paired with an elastic beam clamped at one end. This configuration can be modeled dynamically using the same Eqs. 3a-4b, but with modified boundary conditions:

$R = -n(s=\ell)$ and $Q = -m(s=\ell)\cdot e_z$ for the microbot,  (28)

$r(0) = \theta(0) = 0$ for the beam at all times.  (29)

We solve Eqs. 3a-4b using the time-stepping algorithm described in Section C for different $F\ell^2/B$ and extract the period of oscillation to generate the solid lines in Fig. 3(b). For example, in the case of $\alpha = \pi/6$, we extract from the modeled dynamics the time it takes for the microbot to oscillate from an initial position $\psi = \pi/6$ to $\psi = \pi$ for different rescaled forces. We repeat the same procedure for the cases $\alpha = \pi/3$ and $\alpha = 4\pi/9$.

[1] M. C. Marchetti, J.-F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Reviews of Modern Physics 85, 1143 (2013).
[2] S. Michelin, Self-propulsion of chemically active droplets, Annual Review of Fluid Mechanics 55, 77 (2023).
[3] L. Giomi, N. Hawley-Weld, and L. Mahadevan, Swarming, swirling and stasis in sequestered bristlebots, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, 20120637 (2013).
[4] J. F. Boudet, J. Lintuvuori, C. Lacouture, T. Barois, A. Deblais, K. Xie, S. Cassagnere, B. Tregon, D. B. Brückner, J. C. Baret, and H.
Kellay, From collections of independent, mindless robots to flexible, mobile, and directional superstructures, Science Robotics 6, eabd0272 (2021).
[5] A. Deblais, T. Barois, T. Guerin, P. Delville, R. Vaudaine, J. Lintuvuori, J. Boudet, J. Baret, and H. Kellay, Boundaries control collective dynamics of inertial self-propelled robots, Physical Review Letters 120, 188002 (2018).
[6] C. Scholz, M. Engel, and T. Pöschel, Rotating robots move collectively and self-organize, Nature Communications 9, 931 (2018).
[7] N. Sepúlveda, F. Guzmán-Lastra, M. Carrasco, B. González, E. Hamm, and A. Concha, Bioinspired magnetic active matter and the physical limits of magnetotaxis (2021), arXiv:2111.04889 [cond-mat, physics:physics].
[8] S. Li, B. Dutta, S. Cannon, J. J. Daymude, R. Avinery, E. Aydin, A. W. Richa, D. I. Goldman, and D. Randall, Programming active cohesive granular matter with mechanically induced phase changes, Science Advances 7, eabe8494 (2021).
[9] J. F. Boudet, J. Jagielka, T. Guerin, T. Barois, F. Pistolesi, and H. Kellay, Effective temperature and dissipation of a gas of active particles probed by the vibrations of a flexible membrane, Physical Review Research 4, L042006 (2022).
[10] G. DiBari, L. Valle, R. T. Bua, L. Cunningham, E. Hort, T. Venenciano, and J. Hudgings, Using Hexbugs™ to model gas pressure and electrical conduction: A pandemic-inspired distance lab, American Journal of Physics 90, 817 (2022).
[11] M. Fruchart, C. Scheibner, and V. Vitelli, Odd viscosity and odd elasticity, Annual Review of Condensed Matter Physics 14, 471 (2023).
[12] Hexbug is a toy automaton brand developed and distributed by Innovation First, http://www.hexbug.com.
[13] E. Zheng, M.
Brandenbourger, L. Robinet, P. Schall, E. Lerner, and C. Coulais, Self-oscillation and synchronization transitions in elastoactive structures, Physical Review Letters 130, 178202 (2023).
[14] P. Baconnier, D. Shohat, C. Hernandèz, C. Coulais, V. Démery, G. Düring, and O. Dauchot, Selective and collective actuation in active solids, Nature Physics 18, 1234 (2022).
[15] G. Cicconofri and A. DeSimone, Motility of a model bristle-bot: A theoretical analysis, International Journal of Non-Linear Mechanics 76, 233 (2015).
[16] M. Y. Ben Zion, J. Fersula, N. Bredeche, and O. Dauchot, Morphological computation and decentralized learning in a swarm of sterically interacting robots, Science Robotics 8, eabo6140 (2023).
[17] D. Kim, Z. Hao, A. R. Mohazab, and A. Ansari, On the forward and backward motion of milli-bristle-bots, International Journal of Non-Linear Mechanics 127, 103551 (2020), arXiv:2002.10344 [cs, eess].
[18] M. Leoni, M. Paoluzzi, S. Eldeen, A. Estrada, L. Nguyen, M. Alexandrescu, K. Sherb, and W. W. Ahmed, Surfing and crawling macroscopic active particles under strong confinement: Inertial dynamics, Physical Review Research 2, 043299 (2020).
[19] O. Dauchot and V. Démery, Dynamics of a self-propelled particle in a harmonic trap, Physical Review Letters 122, 068002 (2019).
[20] C. A. Weber, T. Hanke, J. Deseigne, S. Léonard, O. Dauchot, E. Frey, and H. Chaté, Long-range ordering of vibrated polar disks, Phys. Rev. Lett. 110, 208001 (2013).
[21] B. Audoly and Y. Pomeau, Elasticity and geometry, in Peyresq Lectures on Nonlinear Phenomena (World Scientific, 2000) pp. 1-35.
[22] É. Fodor and M.
C. Marchetti, The statistical physics of active matter: From self-catalytic colloids to living cells, Physica A: Statistical Mechanics and its Applications 504, 106 (2018), Lecture Notes of the 14th International Summer School on Fundamental Problems in Statistical Physics.
[23] S. K. Friedlander et al., Smoke, dust, and haze, Vol. 198 (Oxford University Press, New York, 2000).
[24] P. Baconnier, Active elastic solids: collective motion, collective actuation & polarization, PhD thesis, Université Paris Sciences et Lettres (2023).
[25] G. Bradski, The OpenCV Library, Dr. Dobb's Journal of Software Tools (2000)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13011v1", |
| "title": "A multigrain-multilayer astrochemical model with variable desorption energy for surface species", |
| "abstract": "Context. Interstellar surface chemistry is a complex process that occurs in\nicy layers accumulated onto grains of different sizes. Efficiency of surface\nprocesses often depends on the immediate environment of adsorbed molecules.\nAims. We investigate how gas-grain chemistry changes when surface molecule\ndesorption is made explicitly dependent on the molecular binding energy, which\nis modified depending on the properties of the surface. Methods. Molecular\nbinding energy changes gradually for three different environments: bare grain,\npolar water-dominated ices, and non-polar carbon monoxide-dominated\nices. In addition to diffusion, evaporation and chemical desorption,\nphotodesorption was also made binding energy-dependent, in line with\nexperimental results. These phenomena occur in a collapsing prestellar core\nmodel that considers five grain sizes with ices arranged into four layers.\nResults. Efficient chemical desorption from bare grains significantly delays\nice accumulation. Easier surface diffusion of molecules on non-polar ices\npromotes the production of carbon dioxide and other species. Conclusions. The\ncomposition of interstellar ices is regulated by several binding-energy\ndependent desorption mechanisms. Their actions overlap in time and space, which\nexplains the ubiquitous proportions of major ice components (water and carbon\noxides), observed to be similar in all directions.", |
| "authors": "Juris Kalvans, Aija Kalnina, Kristaps Veitners", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "astro-ph.GA", |
| "cats": [ |
| "astro-ph.GA", |
| "physics.chem-ph" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "A multigrain-multilayer astrochemical model with variable desorption energy for surface species", |
| "main_content": "Introduction During the last decade, theoretical astrochemists have expanded gas-grain models with additional phases of solid matter. These include reactive icy molecules residing either in different layers on interstellar grains or on grains of different sizes. In conjunction with the dynamical evolution of dense cores and an improved understanding of microscopic phenomena, the new phases allow one to paint a detailed picture of chemical processes in interstellar and circumstellar ices. The microscopic phenomena most notably include efficient chemical desorption and a molecular desorption (binding) energy that varies depending on the surrounding environment. The above ingredients have never been combined in a single model, which means that current astrochemical models may be missing key processes that regulate ice formation and distribution between solid phases. Multigrain models consider simultaneous surface chemistry on grains with an assortment of sizes. Acharyya et al. (2011) found that the smallest grains, having the largest overall surface, accumulate most of the ice (by default, here we consider the MRN grain size distribution of Mathis et al. 1977, used in most studies). Ice accumulation onto small grains is amplified by the increase of surface area with grain growth, which is most pronounced for the small grains. Pauly & Garrod (2016); Ge et al. (2016); Iqbal & Wakelam (2018), and Chen et al. (2018) considered basic aspects of multigrain models, such as the effect of different numbers of grain size bins, the applicability of the rate-equation method, effects of differential grain temperature, and ice accumulation. A number of papers have focused on the efficiency of cosmic-ray induced desorption for grains of different sizes (Zhao et al. 2018; Sipilä et al. 2020; Kalvāns & Kalnin 2022). Some of these indicate that ices cannot efficiently accumulate on the smallest grains (Silsbee et al.
2021; Rawlings 2022). Some of the multigrain models have been applied in further astrochemical studies (Pauly & Garrod 2018; Gavino et al. 2021). Compared to multigrain models, multilayer models have undergone significant evolution. They consider at least two layers of the icy mantle that covers grain surfaces (Hasegawa & Herbst 1993b), both of which may be chemically active (Kalvāns & Shmeld 2010). Current models resolve separate monolayers (MLs; Taquet et al. 2012) or up to six chemically active ice layers (Furuya et al. 2017), with limited-diffusion approaches for binary bulk-ice reactions (Chang & Herbst 2014). The mere existence of subsurface ice that is isolated from most of the desorption mechanisms acting on the exposed surface molecules is a significant development. While the chemical activity of the bulk-ice layers and their susceptibility to dissociating radiation is a matter of question, active bulk ice allows for different molecule synthesis paths, for example, in CO- and H2O-dominated environments (Chang & Herbst 2016). Moreover, multilayer models allow regulating evaporation from ices in protostellar envelopes, either via layer-by-layer removal (Taquet et al. 2014) or by allowing hyper-volatiles to diffuse out of the mantle first (Garrod 2013a; by hyper-volatiles we here understand icy species with desorption energies ED below about 1300 K, such as H2, N2, O2, CO, CH4).
Article number, page 1 of 15. A&A proofs: manuscript no. Edese
The astrophysical importance of a variable molecular desorption energy ED for icy surfaces with polar (H2O) and non-polar (CO) composition was noted early by Tielens & Hagen (1982); Sandford & Allamandola (1988); Leger (1983), and Bergin et al. (1995). The latter included this effect in an astrochemical model, albeit not in a self-consistent manner (Bergin & Langer 1997).
Compared to non-polar ices, a surface covered with H2O allows binding via dipole-dipole and dipole-induced dipole interactions, as well as strong hydrogen bonds. Besides desorption, such bonding also affects the mobility of surface species and, thus, their reactivity. Further exploration of the idea of variable ED has been limited (He et al. 2016; Garrod et al. 2022). An aspect of the variable-ED approach is a different ED on bare grains and on ice-covered surfaces. The binding energies to materials similar to interstellar grains are known for a limited number of species (e.g. Vidali et al. 1991). Dual (bare grain and ice) ED have been used by Chang et al. (2007); Taquet et al. (2014), and Hocuk & Cazaux (2015). Unlike multilayer and multigrain models, variable-ED effects are much less understood, with no dedicated studies. The aim of this study is to combine the above phenomena within a single model that produces reasonable results, namely, ice composition, and to deduce whether variable ED has astrochemically significant effects. The necessary tasks include: - developing an integrated multigrain-multilayer array system for chemical species, grain, and ice parameters; - adapting or creating descriptions of the microscopic processes, notably variable ED, for such a model; the descriptions should allow simple inclusion in other astrochemical codes; - investigating the significance of variable ED in modelling results; - exploring effects that arise from phenomena that have not yet been combined together in astrochemical models. Reproducing the proportions of major species observed in interstellar ices has been possible with simpler codes (Ruffle & Herbst 2001; Garrod & Pauly 2011), which may have deterred the advancement of, or need for, more complex models.
The latter two tasks involve a limited parameter-space analysis and will allow us to understand the basic gas-grain physico-chemical interplay in dense cloud cores with the multigrain-multilayer code. This is essential before further exploration of ice chemistry can be made with the new code. 2. Methodology The model was developed on the basis of the modified rate-equation code with multilayer ice chemistry Alchemic-Venta from Kalvāns (2021), which is the default reference. Some multigrain aspects have been tested in Kalvāns & Silsbee (2022). The chemical model is set in a gas parcel in a low-mass contracting prestellar dark molecular core. Below, we describe the complete model with all functionalities enabled, referred to as Model full in the Results section 3. 2.1. Chemical model Table 1 lists the initial chemical abundances used at the start of the simulation. The cosmic-ray ionization rate ζ was calculated following Padovani et al. (2009), with model "High" spectra from Ivlev et al. (2015), and depends on hydrogen column density NH. The ζ value obtained this way is rather high compared with the 10^-17...10^-16 values typically applied in astrochemistry; hence we divide it by 4π with the justification that our typical low-mass cloud core is located far from the Galactic centre and is shielded by a parent giant molecular complex. In other words, it can be said that spatially one steradian of the core is exposed to the full interstellar cosmic-ray intensity.
Table 1. Initial chemical abundances relative to total hydrogen.
Species  X/H
H2   0.50
He   0.090
C+   1.4 × 10^-4
N    7.5 × 10^-5
O    3.2 × 10^-4
F    6.7 × 10^-9
Na+  2.0 × 10^-9
Mg+  7.0 × 10^-9
Si+  8.0 × 10^-9
P+   3.0 × 10^-9
S+   8.0 × 10^-8
Cl   4.0 × 10^-9
Fe+  3.0 × 10^-9
The intensity of cosmic-ray induced photons depends on ζ and was calculated with Equation (2) of Kalvāns & Kalnin (2019). The cloud is irradiated by the normal interstellar radiation intensity, with G0 of 1.7 × 10^8 s^-1 cm^-2, attenuated by the cloud's matter with NH/AV = 2.2 × 10^21 cm^-2 (Zuo et al. 2021). Gas temperature Tgas was calculated according to Equation (2) of Kalvāns (2021). Because this equation works only when the interstellar extinction AV is below or similar to 40 mag, Tgas was coupled to the dust temperature at higher extinctions. Neutral molecules adsorb onto grain surfaces, forming an ice layer. The sticking coefficient was taken to be unity for heavy species and calculated according to Thi et al. (2010) for the light species H and H2. The size of a "cubic average" molecule was assumed to be 0.32 nm. When the ice thickness b exceeds 1 ML, excess icy molecules are moved to the bulk ice and are sequentially ordered in three subsurface ice layers. All layers are chemically active. Icy species can be destroyed via photodissociation by interstellar and cosmic-ray induced UV photons at a rate equal to 0.3 times their gas-phase photodissociation rate (Kalvāns 2018; Terwisscha van Scheltinga et al. 2022). The surface diffusion energy Ediff was taken to be 0.50 ED. Reactions with activation barriers proceed either by hopping across the barrier or via quantum tunneling, which is possible for H and H2 (Hasegawa & Herbst 1993a). Chemical reaction rate coefficients were adjusted for reaction-diffusion competition (Garrod & Pauly 2011). Bulk-ice molecules react with other molecules in the same layer with an approach that assumes they are frozen in place, with a bulk-ice binding (absorption) energy equal to 3ED (Kalvāns 2015). Similar methods for bulk-ice chemistry have recently gained traction (Shingledecker et al. 2019; Jin & Garrod 2020).
The model considers several desorption mechanisms, with the simplest being evaporation, which is most important for H2. Pantaleone et al. (2021) have presented credible evidence that the reaction heat of the common H+H reaction on grains may induce desorption of an adjacent hyper-volatile icy molecule. This indirect reactive desorption mechanism was included in our model with the help of Equation (16) of Kalvāns (2015) and an efficiency parameter of ϵ = 0.001 desorbed molecules per H+H reaction act (Duley & Williams 1993; Willacy et al. 1994; see also Takahashi & Williams 2000).
Table 2. Additions to the reaction network.
Gas-phase reactions          k(10 K)^a, cm3 s^-1   Ref.^b
CH + CH3OH -> CH3CHO + H     2.5E-10               1,2
C + H2CO -> CO + CH2         6.2E-10               3
CH3 + HCO -> CH3CHO          5.0E-11               3
Surface reactions            EA, K                 Ref.
CO + H -> HCO                2500                  4^c
H2CO + H -> HCO + H2         415                   5
CH3O + H -> H2CO + H2        0                     5
CH2OH + H -> H2CO + H2       0                     5
(a) Rate coefficient at 10 K. (b) 1 - NIST (http://kinetics.nist.gov/kinetics/index.jsp), 2 - Johnson et al. (2000), 3 - Vasyunin et al. (2017), 4 - Garrod & Herbst (2006, OSU reaction network), 5 - Minissale et al. (2016b). (c) Reaction not added, only its EA changed.
For desorption via cosmic-ray induced whole-grain heating (Hasegawa & Herbst 1993a), a law that assumes similar heating frequencies for grains of different sizes was used by Kalvāns & Silsbee (2022). Here we improve this law into one based on the exhaustive new data from Kalvāns & Kalnin (2022). Namely, the cosmic-ray induced heating frequency fCRD is now proportional to the inverse square root of the grain radius a, in addition to its dependence on NH:

$f_{\rm CRD}(54\,{\rm K}) = \frac{1.93 \times 10^{-11}\sqrt{0.05/a}}{A_V^{1.35}\, 4\pi}$,  (1)

where a is expressed in µm.
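Equation (1), as read here (with the A_V attenuation factor and the 4π division placed in the denominator, per the surrounding text), can be evaluated as follows; the function name is ours, and the exponent placement should be checked against Kalvāns & Kalnin (2022):

```python
import math

def f_crd_54k(a_um, A_V):
    """Cosmic-ray whole-grain heating frequency to 54 K, Eq. (1) as
    reconstructed here: f = 1.93e-11 * sqrt(0.05 / a) / (A_V**1.35 * 4*pi),
    with the grain radius a in micrometres."""
    return 1.93e-11 * math.sqrt(0.05 / a_um) / (A_V ** 1.35 * 4.0 * math.pi)
```

The $a^{-1/2}$ scaling means the smallest grains of Table 3 (a = 0.037 µm) are heated about 2.5 times more often than the largest (a = 0.232 µm).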
The cooling time for the grains was taken to be similar to the characteristic sublimation time of CO (Hasegawa & Herbst 1993a), which is 0.002 s for a temperature of 54 K in our model. Like ζ, fCRD was also divided by 4π. Photodesorption and desorption of chemical reaction products are described separately in Sections 2.6 and 2.7. The chemistry in ice layers is explicitly considered, which means that the model calculates molecular abundances for each ice layer on each grain type. The actual reaction network includes multiple similar lists of surface molecular processes for each grain type and ice layer. 2.2. Reaction network We employ the UDfA Rate12 chemical network (McElroy et al. 2013) for the gas phase and a reduced OSU database for surface reactions (Garrod et al. 2008). Following Vasyunin et al. (2017), we added gas-phase COM reactions to balance the alcohol-aldehyde chemistry (Table 2). The variable-ED approach results in generally lower molecular binding energies and more rapid diffusion of surface species, which may overproduce CO hydrogenation products. To address this, the higher, original OSU-database activation energy barrier EA of 2500 K was restored for the CO + H surface reaction. Additionally, three "unproductive", H2-producing hydrogenation reactions of intermediate CO hydrogenation products were added, all with branching probabilities of 0.5, as suggested by Minissale et al. (2016b). The latter reactions supplement similar additions to Alchemic-Venta in Tables 4 and 6 of Kalvāns (2015). Table 2 summarizes these changes to the network. To reduce the overall number of species and reactions, phosphorus compounds with two or more C atoms were removed. Table 3. Dust grain radius a and number density relative to H atoms.
a, µm   ng/nH
0.037   5.46E-12
0.058   1.73E-12
0.092   5.46E-13
0.146   1.73E-13
0.232   5.46E-14
These species are irrelevant to the overall chemistry because of the low abundance of P and their own low abundance, even relative to simpler P species. Network reduction ensured smooth operation of the code and reduced computing time at little cost to the scientific output, since we do not study the chemistry of phosphorus. 2.3. Grain physics We divide the MRN grain size distribution into five bins with logarithmic spacing. This is a compromise that allows us to model multigrain surface chemistry in significant detail while not making the model overly complex. The number of grain size bins has a limited effect on modelling results (Iqbal & Wakelam 2018), and five bins have also been used by other authors (Acharyya et al. 2011; Pauly & Garrod 2016; Sipilä et al. 2020). Table 3 shows the assumed grain sizes and abundances. Moreover, here we assume grains that have already undergone processing in a star-forming region, i.e., the grains have a carbonaceous coating, not unlike interplanetary dust particles (Flynn 2020). While such a choice may be physically justified, in our model it has the benefit that the bare surface is significantly different from water-dominated ices. This means that the effects of an ED that differs between bare and ice-covered grains can be more pronounced (Section 2.5). A second consequence of the processed-grain assumption is that the smallest grains must have been depleted by sticking to larger grains (Silsbee et al. 2020); thus we adopt the sizes and relative abundances of grains from Sipilä et al. (2020). Importantly, the exclusion of the small grains reduces the average temperature and reactivity of surface species, which results, for example, in lower abundances of CO2 ice (see also Iqbal & Wakelam 2018).
Therefore, the grain size distribution is another aspect that regulates the calculated composition of ices, in addition to the existence of active or passive bulk ice, the Ediff/ED ratio, reaction activation barriers, and selective desorption mechanisms. A benefit for our model is that the exclusion of the small grains allows adequate operation of the modified rate-equation procedure of the ALCHEMIC code (Semenov et al. 2010). For calculating the cosmic-ray-induced whole-grain heating rate (see above), Kalvāns & Kalnin (2022) considered refractory grains consisting of 40% amorphous carbon and 60% silicates by mass. The resulting grain density was 2.6 g cm^-3. The grain mass obtained with this density constitutes 0.4% of the gas mass, in contrast to 0.5% for distributions that include smaller grains, such as that of Acharyya et al. (2011). We did not consider loss of grain mass; instead, the small grains are stuck onto the large grains, in effect increasing their abundance. To account for the mass gap, the Sipilä et al. (2020) grain abundances were multiplied by a factor of 1.25. An additional 0.8% of the cloud mass is in the elements heavier than He ("metals") that constitute icy mantles in freeze-out conditions (Table 1). Grains grow as the icy mantles accumulate and increase in thickness. The temperature of the dust grains Td was calculated with the method given by Hocuk et al. (2017, for a = 0.1 µm grains) and attributed to different grain sizes following Pauly & Garrod (2016), i.e., Td ∝ a^{1/6}. 2.4. Variable desorption energy Over the years, ED measurements and calculations have been done for various species relevant to interstellar ices.
Our task here is to devise a relation that describes how ED changes for a molecule embedded in an environment rich in polar species, such as H2O (ED,pol), relative to the same molecule in non-polar ices, such as CO (ED,np). Thus, we are interested in measurements that directly correlate molecular ED in polar and non-polar environments. Such measurements are possible only for volatile molecules, which evaporate first. Probably the most relevant molecule is CO itself, which is sufficiently abundant to make up the non-polar parts of interstellar ices (Sandford & Allamandola 1988; Tielens et al. 1991). While it is clear that the actual ED spans a range of values, depending on the properties of individual adsorption sites and the orientation of the molecule (He et al. 2018; Grassi et al. 2020), here we employ a single-value ED, which is standard practice in astrochemistry. The desorption energy of CO in watery ices has been considered to be 1150 K (Collings et al. 2004; Noble et al. 2012; Penteado et al. 2017) or 1300 K (Wakelam et al. 2017; Das et al. 2018). In a pure CO ice matrix, ED,np,CO of CO has been measured to be 954 K by Shinoda (1969), while more recent measurements give values of 855, 858, 866, and 899 K (Öberg et al. 2005; Acharyya et al. 2007; Fayolle et al. 2016; Martín-Doménech et al. 2014, respectively). Another molecule for which data are available that help evaluate the ratio ED,np/ED,pol ≡ np/pol is molecular nitrogen. In a H2O matrix, ED,pol,N2 has been determined to be in the range of 810-1400 K (Wakelam et al. 2017; Das et al. 2018; Penteado et al. 2017). For non-polar environments, ED,np,N2 measurements have been made in N2 matrices, obtaining values of 779 and 790 K (Fayolle et al. 2016; Öberg et al. 2005, respectively). These values yield a wide-ranging np/pol ratio of 0.56...0.98. The procedure in our model for calculating ED for species in ices is as follows.
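The procedure described next reduces to Equations (2) and (3); a minimal sketch with the model's constants (ED,pol,H2O = 5600 K, Xdes = 4.00; function names are ours):

```python
E_D_POL_H2O = 5600.0  # K, desorption energy of H2O on a pure-water surface
X_DES = 4.00          # the single external parameter of the approach

def np_over_pol(mean_e_pol):
    """Eq. (2): np/pol ratio for an ice phase whose abundance-weighted
    mean polar-surface desorption energy is mean_e_pol (in K)."""
    return (mean_e_pol + (X_DES - 1.0) * E_D_POL_H2O) / (X_DES * E_D_POL_H2O)

def effective_e_d(e_pol_a, mean_e_pol):
    """Eq. (3): environment-adjusted desorption energy of species A."""
    return e_pol_a * np_over_pol(mean_e_pol)
```

A pure-water environment (mean 5600 K) leaves ED unchanged (np/pol = 1), while a pure CO matrix (mean 1150 K) gives np/pol ≈ 0.80, lowering ED,CO from 1150 K to about 922 K.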
The original, or default, ED values are the ED,pol from the OSU surface network, replaced by those of Wakelam et al. (2017) where possible. An exception was made for the volatile CO, N2, and CH4 molecules, whose ED,pol were adopted from Penteado et al. (2017). The values of ED,pol correspond to a matrix (surface) consisting of pure water, with ED,pol,H2O = 5600 K. First, the model obtains the weighted average ĒD,pol for the whole ice phase in consideration (one of four layers on one of five types of grains), taking into account all icy species in that phase. Then we calculate np/pol with

$[np/pol] = \frac{\bar{E}_{D,pol} + (X_{des} - 1)E_{D,pol,H2O}}{X_{des}E_{D,pol,H2O}}$,  (2)

and the final desorption energy for species A, used for calculating desorption and surface diffusion rates, is

$E_{D,A} = E_{D,pol,A} \times [np/pol]$.  (3)

This approach adjusts a species' ED in accordance with the environment it resides in and depends only on a single external parameter, Xdes = 4.00. This value corresponds to np/pol ≈ 0.8. For example, ED,CO decreases from 1150 K in a H2O matrix environment to 922 K in a pure CO matrix. In modelled multilayer ices, where Equation (3) operates, the extreme values are never reached, and np/pol lies between 0.8 and 1. Equation (3)
Table 4. Derivation of the H-bond rule: selected molecular ED on carbonaceous and icy surfaces.^a
Molecule   ED,pol, K   ED,bare, K   Ref.^b   Difference pol-bare, K
H          650         658          1        -8
H2         440         542          1        -102
O          1400        1500         2        -100
OH         3500        1360         1        2140
H2O        5640        2000         1        3640
O2         1000        1440         1        -440
O2H        4300        2160         1        2140
H2O2       4950        2240         1,3      2710
CO         1300        1100         4        200
CH3OH      3100        1100         4        2000
(a) The values given here were not used in the model; they illustrate the reasoning for estimating the pol-bare values provided in Table 5. (b) 1 - Cuppen & Herbst (2007), 2 - Minissale et al. (2016a), 3 - Cazaux et al. (2010), 4 - Hocuk & Cazaux (2015).
Table 5.
The H-bond rule: assumed ED,pol \u2212ED,bare for surface species in the model, or the additional binding energy introduced by H-bonds of molecules on ices, compared to molecules on bare carbonaceous grains. Molecules or functional groups ED,pol \u2212ED,bare, K HF; NH3; H2O and H2O2 3000 HCl; -OH 2000 -NH; HCN and related 1500 other 0 was not applied in cases when a molecule\u2019s ED,pol,A was already lower than the average ED,pol in the ice layer. To avoid discontinuities in modelled abundances, variable ED was applied proportionally to ice thickness, from 0 MLs with no e\ufb00ect, when icy molecules barely interact, to full e\ufb00ect at 2 MLs and above, when most icy molecules primarily interact with neighbouring adsorbed species, instead of the refractory grain surface. Summarizing, Equations (2) and (3) present a simple approach for calculating ED for a molecule in ice with changing composition and, thus, changing average desorption energy of species in this icy phase. In practice, np/pol is determined by a few species that dominate a given ice phase at a given time step. Most often these are H2O, CO, CO2, N2, and CH3OH. Variable ice ED, calculated from the abundance of H2, has been employed also by Garrod et al. (2022). They do not apparently base their approach on experimental or theoretical data. Within our model, H2 is among the species considered in \u00af ED,pol and the abundance of surface H2 itself is regulated with the help of encounter desorption (Hincelin et al. 2015). 2.5. Desorption energy on bare grains As stated in Section 2.3, our bare grains have a carbonaceous coating. A few other studies have struggled to obtain an assortment of ED,bare for molecules on carbon because only limited experimental data are available. Cuppen & Herbst (2007) developed such a list for water chemistry, which has been updated by Cazaux et al. (2010). Similar lists have been compiled by Hocuk & Cazaux (2015) and Minissale et al. 
(2016a); however, the latter studies stick to carbon less rigidly and have included ED values from experiments with silicate or other materials instead of using estimates for carbon.
Juris Kalvāns et al.: A multigrain-multilayer astrochemical model with variable desorption energy for surface species
Some recent experimental data have been published using highly oriented pyrolytic graphite (HOPG), which also show lower bare-surface ED for isolated H-bond-forming molecules (Minissale & Dulieu 2014; Doronin et al. 2015; Chaabouni et al. 2018). The above-mentioned authors employed small reaction networks, which means that our task here is to derive a simple approach with which the differences between ED,pol on ices and ED,bare on carbonaceous grains can be attributed to a variety of species. Table 4 summarizes such data from studies that have ED,pol and ED,bare based on similar considerations. A striking feature, apparent in the compiled data, is the low ED,bare values for species that are able to form hydrogen bonds. Another issue is the variation of desorption energies for volatile species from H to O2; no clear pattern can be seen in the latter case. Table 4 allows us to derive a general approach for devising ED,bare: subtracting the hydrogen-bond energy from ED,pol. It also gives an indication of the values that need to be subtracted, which are about 2000 K or more for molecules containing one O-H bond and ≈3000 K for molecules containing two O-H bonds. Naturally, these are lower than the often-used hydrogen-bond energy of about 2800 K because, in the absence of H-bonds, other types of molecule-surface bonding are formed instead. For consistency, we extrapolate the lack of hydrogen bonds on bare grains, or "the H-bond rule", to other molecules containing electronegative atoms associated with H capable of hydrogen bonding, such as nitrogen or halogens. Such a need is underlined by the study of Kakkenpara Suresh et al.
(2023), who emphasize the importance of H-bonds for ammonia in circumstellar ices. The qualitative task is now to choose the types of H-bonds that can make a difference at the grain-ice interface. The quantitative task is to evaluate the differences ED,pol − ED,bare (or pol-bare for short) for the chosen types of H-bonds. One has to keep in mind that the carbonaceous grain surface is likely quite irregular, seeded with heteroatoms carrying lone electron pairs and even an occasional H atom attached to an electronegative atom, such as C in sp hybridization. In other words, the carbonaceous surface itself sports some components for weak hydrogen-bond formation. From this we infer that only ice molecules that can serve as both electron and proton donors, and are thus able to form the strongest H-bonds, can make a difference in the transition from bare surface to icy mantles. The energy of the OH···O bond in water has been studied most extensively (e.g. Tomoda & Kimura 1983; Ahirwar et al. 2022), with dimer dissociation energies usually in the range of 2300 to 3300 K (Walrafen 2004; Kikuta et al. 2008; Spanu et al. 2008; Sterpone et al. 2008). Such results allow us to estimate the effective binding-energy difference caused by H-bonding, pol-bare, with an accuracy of about 500 to 1000 K. Some organic molecules, such as alcohols and carboxylic acids, are also capable of forming hydrogen bonds via the oxygen atom. Their dissociation energy is in the same range (Andersen et al. 2015). Taking into account the data from Table 4, it seems reasonable to assume pol-bare = 2000 K for OH···O bonds. It is encouraging that a 2000 K H-bond energy value is also close to the ED difference between CH2OH and CH3O obtained by Garrod et al. (2008) via completely different considerations. Cuppen & Herbst (2007) and Cazaux et al.
(2010) compiled ED values indicating that the ability of H2O and H2O2 to form double H-bonds does not translate into a pol-bare twice as high. Here we take this factor to be 1.5, i.e., pol-bare = 3000 K for water and hydrogen peroxide. In the case of ammonia, NH···N hydrogen bonds are only about half as strong; however, the OH···N bond energy is similar to or even higher than that of OH···O, with dissociation energies of about 3800 K (Yeo & Ford 1994; Kikuta et al. 2008; Ahirwar et al. 2021). Their strength decreases if functional groups are attached to the nitrogen atom (Boryskina et al. 2007; Vallet & Masella 2015). The case of ammonia is further complicated by its protonation in water, not explicitly considered here. As a first approximation, we assume that pol-bare = 3000 K for ammonia and 1500 K for other compounds containing the N-H bond. For the hydrogen halides HF and HCl, dissociation energies of H-bonds with water have been deduced to be about 4300 K and 2700 K by Alkorta & Legon (2023). For hydrogen cyanide, the same authors provide an average value of 2400 K. In line with the above considerations, we assumed pol-bare values of 3000, 2000, and 1500 K for HF, HCl, and HCN, respectively. The latter value was also attributed to the related HNC, HNO, and a few other similar molecules. The pol-bare values assumed above are only educated guesses. However, they provide a systematic approach for bare-grain ED, which leaves room for improvement as new data appear. Table 5 summarizes the hydrogen-bond rule applied in this study. The values provided in the table are exact only for completely bare grains, and their effect is reduced proportionally to the ice coverage on grains, until the H-bond rule disappears completely when the formal ice thickness reaches 2 MLs and mobility or evaporation can no longer be affected by molecular interactions with the surface of the refractory grains.
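The two prescriptions above, Equations (2)-(3) for icy environments and the Table 5 H-bond rule with its linear 0-2 ML ramp for bare grains, can be sketched together. This is a simplified illustration, not the model code; the function names and the abbreviated species-to-group mapping are ours:

```python
X_DES = 4.0
ED_POL_H2O = 5600.0  # K, desorption energy of H2O in a pure-water matrix

# Table 5: assumed E_D,pol - E_D,bare ("pol-bare") for H-bonding species
# (abbreviated; all other species get 0)
POL_MINUS_BARE = {"HF": 3000.0, "NH3": 3000.0, "H2O": 3000.0, "H2O2": 3000.0,
                  "HCl": 2000.0, "HCN": 1500.0}

def np_pol(mean_ed_pol):
    """Equation (2): environment factor from the ice-phase average E_D,pol."""
    return (mean_ed_pol + (X_DES - 1.0) * ED_POL_H2O) / (X_DES * ED_POL_H2O)

def ed_in_ice(ed_pol, ice):
    """Equation (3); `ice` maps species -> (abundance, E_D,pol)."""
    total = sum(ab for ab, _ in ice.values())
    mean = sum(ab * ed for ab, ed in ice.values()) / total
    if ed_pol < mean:          # Eq. (3) is skipped for already-low E_D
        return ed_pol
    return ed_pol * np_pol(mean)

def ed_on_bare(species, ed_pol, ice_ml):
    """H-bond rule, fading linearly from full effect at 0 ML to none at 2 ML."""
    bare_weight = max(0.0, 1.0 - ice_ml / 2.0)
    return ed_pol - POL_MINUS_BARE.get(species, 0.0) * bare_weight

# Limiting cases quoted in the text:
print(ed_in_ice(1150.0, {"CO": (1.0, 1150.0)}))  # CO in a pure CO matrix: ~922 K
print(ed_on_bare("H2O", 5600.0, 0.0))            # water on a bare grain: 2600 K
```

With a pure-water matrix the environment factor is exactly 1, so ED,pol values are recovered unchanged, while the pure-CO matrix reproduces the 922 K limit mentioned above.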
This means, for example, that at a 1 ML coverage, the pol-bare values are only half of those given in Table 5. Therefore, during the accumulation of the first two ice MLs on a grain of a given size, the H-bond rule for bare grains is gradually replaced by the variable ED approach designed for icy environments in Section 2.4. Two MLs was chosen as the final threshold for the conversion from bare-grain effects to the variable ED in ices because at 1 ML, all molecules are still affected by their attachment to the bare grain surface, and the effects of the latter cannot be discarded yet. Moreover, the higher 2 ML threshold allows us to account for some clustering of molecules on the bare grain surface (as indicated by the model of Garrod 2013b), where diffusion, reactions, and desorption on patches of bare surface are still possible even when the nominal ice thickness exceeds 1 ML.

2.6. Photodesorption

The photodesorption yield Ypd, in molecules per UV photon, has now been measured in a number of experiments. Molecular dynamics simulations reveal that it involves photon absorption at different ice-layer depths, direct desorption, photodissociation, trapping or recombination of the dissociation products with possible desorption, and the "kicking out" of neighbours by excited molecules (Andersson et al. 2006; Arasa et al. 2015; van Hemert et al. 2015). The resulting desorption of intact molecules or their fragments depends on the surface type (volatile or non-volatile ices, or bare grain; Bertin et al. 2012; Potapov et al. 2019), the composition of icy mixtures (Bertin et al. 2016; Carrascosa et al. 2019), the possibility of codesorption (Bertin et al. 2013), and the spectrum of the incident radiation (Fayolle et al. 2011). Experiments show that Ypd depends on ice temperature (Muñoz Caro et al. 2010, 2016), deposition angle (González Díaz et al. 2019), and ice thickness (Öberg et al. 2009b; Sie et al. 2022).
The possible presence of atmospheric gases also has to be addressed. Moreover, a number of experiments irradiate their astrophysical ice analogues with photons of energies below 10–11 eV. This can be a deficiency because important absorption bands may lie at higher energies (Chen et al. 2014; Martín-Doménech et al. 2015; Paardekooper et al. 2016).
A&A proofs: manuscript no. Edese
Fig. 1. ED-dependent photodesorption yield of icy molecules. The line follows Equation (4), while dots indicate experimental data with references in parentheses: 1 – Bertin et al. (2013), 2 – Fayolle et al. (2011), 3 – Fayolle et al. (2013), 4 – Muñoz Caro et al. (2016), 5 – Dupuy et al. (2017a), 6 – Dupuy et al. (2017b), 7 – Fillion et al. (2014), 8 – Féraud et al. (2019), 9 – Basalgète et al. (2021), 10 – Bertin et al. (2016), 11 – Fillion et al. (2022).
The missing wavelengths may induce more efficient desorption, as in the case of N2, or increase the proportion of dissociative desorption, thus reducing the effective Ypd for intact molecules over the full wavelength range. Several issues also become important when experimentally obtained Ypd are applied in astrochemical models. First, two types of UV radiation are present – interstellar and cosmic-ray-induced photons. Some studies differentiate between the two (Fayolle et al. 2011); the difference is usually within a factor of 2. Second, dissociated fragments may recombine or react with other surface species and undergo chemical desorption (Section 2.7), enhancing the effective yield. This effect is more pronounced for complex molecules, which are more easily dissociated. Considering the above, we compiled a set of reliable Ypd data, shown in Figure 1.
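The empirical relation fitted through this compilation (Equation (4) below), Ypd = 7076 ED^-1.906 / η, can be evaluated directly. A minimal sketch, using ED values and the two yields quoted in this section as checks (the function name is ours):

```python
def photodesorption_yield(ed_K, n_atoms):
    """Equation (4): Y_pd = 7076 * E_D^-1.906 / eta, where eta = 1 for
    simple molecules (fewer than 5 atoms) and eta = 10**(N_at - 4.2)
    for complex molecules, suppressing their intact-molecule yield."""
    eta = 1.0 if n_atoms < 5 else 10.0 ** (n_atoms - 4.2)
    return 7076.0 * ed_K ** -1.906 / eta

y_co = photodesorption_yield(1150.0, 2)        # ~1e-2, the adopted CO average
y_h2o_bare = photodesorption_yield(2600.0, 3)  # ~0.002, water on bare grains
y_ch3oh = photodesorption_yield(3100.0, 6)     # strongly suppressed COM yield
```

With these inputs the CO yield comes out near the adopted 10^-2 average, and the bare-grain water value matches the 0.002 quoted later in this section, illustrating how the variable ED feeds through to photodesorption.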
We used Ypd values for the interstellar radiation field whenever possible, because interstellar photons determine the formation epoch of ices. Moreover, we opted for sources that consider the full 7–13.6 eV range of photons in molecular clouds. Only values for the desorption of intact molecules were used, because any dissociation fragments have a considerable probability of chemical desorption. Based on the experience from Kalvāns (2015), temperature and spectral influences were considered to be of minor importance and were ignored. Photodesorption from subsurface layers was addressed by allowing photodesorption for molecules at depths of up to 4 ice MLs (Andersson & van Dishoeck 2008). Carbon monoxide CO photodesorption is perhaps the most studied, and we had the luxury of obtaining and using an average desorption yield of Ypd ≈ 10^-2 from several experimental reports. A number of papers have also studied the photodesorption of water H2O (e.g., Öberg et al. 2009a; Cruz-Diaz et al. 2018; Bulak et al. 2023) and carbon dioxide CO2, while only Fillion et al. (2014, 2022) considered desorption by photons above 11 eV. Ammonia NH3 was not included because no ice desorption data were found for photons with energies exceeding 10.9 eV (Martín-Doménech et al. 2018). Molecular oxygen O2 largely desorbs via dissociation, and its yield for intact molecules is uncertain (Fayolle et al. 2013). Importantly, usable data are available for some complex organic molecules (COMs). Their intact-molecule Ypd is low, and chemical desorption of dissociated fragments is their main ejection pathway (Cruz-Diaz et al. 2016; Bulak et al. 2020). An empirical relation that connects the selected measurements is

Ypd = 7076 ED^-1.906 / η , (4)

where η is unity for simple molecules with number of atoms Nat < 5 and η = 10^(Nat−4.2) for complex molecules with Nat of 5 or more atoms. This equation is illustrated in Figure 1. An exception, where Equation (4) was not applied, was made for the diatomic homonuclear molecules H2, N2, and O2, which have low yields from pure ices and are more efficiently removed via codesorption (Fayolle et al. 2013). A fixed value Ypd = 0.0055 was applied for these molecules (Bertin et al. 2013). We note that because the molecular ED varies, photodesorption yields typically are lower by about a factor of 0.8 for hyper-volatile molecules and higher for less-volatile ices. In the important and extreme case of water on bare grains (ED,H2O,bare = 2600 K), Ypd,H2O,bare = 0.002. Such a significantly elevated yield is nonetheless in agreement with experimental data that indicate a total (intact and dissociative) Ypd,H2O,bare of up to 0.5 (Potapov et al. 2019).

Table 6. Chemical desorption efficiency as calculated in the model for watery ices and bare grains, compared to experimental results.

                         fCD model       fCD experiment
Reaction                 ice     bare    ice     bare    Ref.(a)
N + N → N2               0.44    0.88    0.5     0.7     1
O + O → O2               0.36    0.72    ...     0.79    2
O + H → OH               0.20    0.60    0.25    0.5     1
OH + H → H2O             0.11    0.50    0.3     0.5     1
O2 + H → O2H             0.00    0.05    ...     0.1     3
CO + H → HCO             0.06    0.13    ...     0.1     4
HCO + H → CO + H2        0.18    0.36    ...     0.4     4
H2CO + H → HCO + H2      0.00    0.00    ...     0.1     1
S + H → HS               0.17    ...     ≤0.6    ...     5
HS + H → H2S             0.10    ...     ≤0.6    ...     5

(a) 1 – Minissale et al. (2016a), 2 – Minissale & Dulieu (2014), 3 – Dulieu et al. (2013), 4 – Minissale et al. (2016b), 5 – Oba et al. (2018).

2.7. Chemical desorption

Highly efficient chemical desorption of exothermic surface-reaction products has been explored experimentally during the last decade (Chaabouni et al. 2012, see also Cazaux et al. 2010). The desorption probability of this process can be up to 90 % (for the OH + H reaction on a silicate surface) and has been quantified and parameterized in further experiments and theoretical works (Dulieu et al.
2013; Minissale & Dulieu 2014; Minissale et al. 2016a; Fredon et al. 2017; Oba et al. 2018; Pantaleone et al. 2020; Molpeceres et al. 2023). This mechanism is especially important for H2O, which forms via two-step hydrogenation on grain surfaces. Thanks to hydrogenation-dehydrogenation cycles, chemical desorption is relevant also for CO (Minissale et al. 2016b).
Fig. 2. Density, extinction, temperature, and grain size evolution in the model. For the latter two graphs, thicker curves are for larger grains.
Because chemical desorption is especially effective for water-forming reactions on bare grains, it significantly affects the onset of ice-layer formation. This aspect cannot be ignored in chemical modelling focused on interstellar ices. Thus, we replace the reactive desorption of Garrod et al. (2006), previously applied in the Alchemic-Venta model, with the chemical desorption method of Minissale et al. (2016a). The variable ED approach naturally produces different chemical desorption results for different types of surfaces – bare grains, polar ices, and non-polar ices. To account for the less effective desorption from ices observed in experiments, the chemical desorption efficiency (fraction of desorbed molecules) fCD was modified by a factor of 0.5 for reactions on grains with an ice thickness exceeding 1 ML. Table 6 compares experimental fCD values to those used in our model. The application of chemical desorption in concert with our variable-ED approach allows for a realistic and chemically effective representation of this mechanism, as illustrated by the data in Table 6.
2.8. Collapsing prestellar core macrophysical model

We considered a single point located at the centre of a spherical interstellar molecular core. The cloud model is relatively simple and follows the approach of previous studies. It consists of two main parts. First, the central density n0 of the core is calculated according to a free-fall collapse scenario (Brown et al. 1988). Hydrodynamical simulations (e.g., Pavlyuchenkov & Zhilkin 2013; Pavlyuchenkov et al. 2015) indicate that an actual core collapse can take a few times longer than the free-fall time; thus we slowed the contraction rate by a factor of 0.5. The initial conditions were nH = 2000 cm^-3 and NH = 1.1 × 10^21 cm^-2, corresponding to an initial interstellar extinction AV = 0.5. Second, for each integration step, the spherical (1D) density distribution in the core was obtained with Equation (1) of Kalvāns (2021). This time, the core mass was maintained at 2 M⊙. Core collapse lasts for another 1.55 Myr until a final density of 1 × 10^7 cm^-3 is reached. At this point, freeze-out is effectively over and the ice composition does not change any more. Figure 2 shows the evolution of physical conditions at the centre of the core. In the first Results subsection (3.1) we explore variable ED effects in a pseudo-time-dependent Model const of a stable, dark molecular core, i.e., with the collapsing-core feature switched off.
Fig. 3. Grain size and temperature in the pseudo time-dependent cold core Model const. Thicker curves are for larger grains, and a is the refractory grain radius (constant), while the ice mantle thickness b varies with time.
For chemical relaxation, both models were preceded by a 1 Myr long diffuse-cloud period, with a hydrogen number density nH = 2000 cm^-3 and an interstellar extinction AV = 0.5 mag.
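The retarded free-fall contraction described above can be sketched numerically. This assumes the standard parametrization of Brown et al. (1988), dn/dt = B (n^4/n0)^(1/3) [24 π G mH n0 ((n/n0)^(1/3) − 1)]^(1/2), with the retardation factor B = 0.5; the 1D density distribution of Kalvāns (2021) is not reproduced here, so the resulting timescale is only indicative:

```python
import math

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_H = 1.67e-24      # hydrogen atom mass, g
SEC_PER_MYR = 3.156e13

def collapse_time(n0=2000.0, n_final=1e7, B=0.5, dt=1e10):
    """Myr needed for the central density to grow from n0 to n_final
    under retarded free-fall collapse (forward-Euler integration)."""
    n, t = n0 * 1.001, 0.0   # small offset leaves the singular start n = n0
    while n < n_final:
        dndt = B * (n ** 4 / n0) ** (1.0 / 3.0) * math.sqrt(
            24.0 * math.pi * G * M_H * n0 * ((n / n0) ** (1.0 / 3.0) - 1.0))
        n += dndt * dt
        t += dt
    return t / SEC_PER_MYR

t_myr = collapse_time()   # on the order of a couple of Myr for B = 0.5
```

With B = 0.5 the contraction takes roughly twice the free-fall time of the initial state (~1.15 Myr at nH = 2000 cm^-3), comparable to, though not identical with, the collapse stage of the full model.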
3. Results

Section 2 includes a number of assumptions about desorption energy on interstellar grain surfaces. These assumptions may be closer to or farther from reality; however, it is clear that molecule diffusion, evaporation, photodesorption, chemical desorption, and desorption by the H+H surface reaction all depend on ED to some extent. In turn, ED is subject to change in different surroundings. Our aim is to clarify whether this latter dependence is astrochemically significant and to deduce its overall character. To do this, we primarily explore results with a model with a complete set of simulated processes, as described in Section 2 (Model full). To illustrate the significance of one or more processes, models with limited functionality were used. Table 7 shows that four functionalities of Model full were switched off or reduced to rudimentary values: the cloud's macrophysical evolution, the variable-ED approach, chemical desorption, and photodesorption. For context with other models considering multigrain or bulk-ice chemistry, our Table 7 can be compared with Tables 1 and 3 of Pauly & Garrod (2016) and Tables 1 and 7 of Garrod et al. (2022). We start the description of results with the pseudo-time-dependent dense core Models const and const_noEd (Section 3.1), continue by describing the results of Model full and comparing them to Model noEd, where the ED variability is disabled (Section 3.2), and conclude by discussing the importance of photodesorption and chemical desorption (Section 3.3).

3.1. Cold core model

As an initial test case, we present the basic chemical results for a dense, cold, dark core with constant "classical" physical conditions of nH = 2 × 10^4 cm^-3 and AV = 10 mag. This Model const was run for an integration time t = 1.0 Myr. During the first few hundred kyr, grain growth occurs up to an ice thickness of 78 MLs on the smallest grains and 66 MLs on the largest grains.
Figure 3 shows the change in grain sizes along with the accompanying changes in their temperatures. The gas temperature is constant at Tgas = 8.9 K, while the cosmic-ray ionization rate is ζ = 3.2 × 10^-17 s^-1 (Section 2.1).

Table 7. Model functionalities and their basic ice chemistry results. The five abundance columns give final ice abundances n/nH2.

Model       Prestellar core  Variable ED  fCD    Ypd    H2O      CO       CO2      CH3OH    NH3      t of first ice ML, kyr
const       –                +            f(ED)  f(ED)  1.44E-4  3.17E-5  5.32E-5  1.89E-5  1.88E-5  46
const_noEd  –                –            f(ED)  f(ED)  1.43E-4  3.10E-5  5.37E-5  1.85E-5  2.00E-5  5.2
full        +                +            f(ED)  f(ED)  1.28E-4  8.08E-5  4.37E-5  1.26E-5  3.41E-6  1233
noEd        +                –            f(ED)  f(ED)  1.34E-4  9.13E-5  3.04E-5  1.45E-5  3.57E-6  1176
noPD        +                +            f(ED)  0.001  1.24E-4  7.52E-5  4.89E-5  1.29E-5  3.40E-6  1192
noCD        +                +            0.03   f(ED)  1.03E-4  7.54E-5  4.56E-5  1.47E-5  7.64E-6  1034

Fig. 4. Calculated gas-phase chemical abundances in the cold core model. Panel (a): relative abundances of major gas-phase species. Panel (b): gas-phase abundances of important molecules that mostly originate from grain surfaces. In both panels, solid lines are for Model const with variable ED and dotted lines are for Model const_noEd with an unchanging ED.
Figure 4 shows that the abundances of major gaseous species differ little between Models const and const_noEd, i.e., with and without the variable-ED approach, respectively. The easier desorption facilitated by the variable ED is visible as slightly higher gas-phase abundances. The changes are most pronounced for species that are formed via multiple steps on grain surfaces, such as methanol CH3OH and hydrogen peroxide H2O2. Nevertheless, the two simulations produce gas abundances that agree within a factor of 2 for most species and time periods.
Figure 5 shows the calculated abundances, relative to water ice, of major icy species for the cold core simulations with and without variable ED. In Model const, 90 % freeze-out is reached at 0.45 Myr, which is probably later by 0.1 Myr or more than in comparable multigrain models (Pauly & Garrod 2016; Sipilä et al. 2020). Differences in freeze-out are primarily caused by the consideration or non-consideration of bulk ice, which isolates the majority of icy species from most desorption mechanisms, and by the efficient chemical desorption, which acts as a delaying function. 99 % freeze-out is reached at 0.74 Myr. For Model const_noEd, without differing ED on bare grains and non-polar ices, the 90 % freeze-out is earlier by only about 0.02 Myr. A characteristic feature is the low abundance of CO2 ice for most of the time. The ice ratio CO2:H2O ≈ 20...30 % is often observed to be similar to that of CO:H2O (McClure et al. 2023) but here reaches only 8 % at 90 % freeze-out and 16 % at 99 % freeze-out. However, the ice CO2:H2O ratio continues growing, to 23 % at 1 Myr, thanks to CO2 photoproduction via bulk-ice reactions at the expense of H2O and CO. The cold core model, with its high initial nH and AV, is more of a testbed calculation, not able to closely represent real-life scenarios, and the underproduction of CO2 is a typical feature of such models (see Ruffle & Herbst 2001; Bredehöft 2020). The similarity between the abundances of major icy species in Models const and const_noEd indicates that chemistry under rapid freeze-out conditions is regulated mostly by the abundantly adsorbing surface reactants and is little affected by ED variability. For a review of similar results, albeit without chemical desorption, we refer the reader to Pauly & Garrod (2016), who also employed a model that considers bulk ice and include a breakdown of ice composition on five grain sizes in their discussion of modelling results.
Because we considered larger grains with accordingly lower temperatures, their 5G_T8 models are the most relevant. The main difference between our model and that of Pauly & Garrod (2016) is that we consider chemical processing of bulk ice, which slowly increases the abundance of CO2 ice at the expense of CO and H2O. Moreover, our model overproduces methanol CH3OH ice, while their model tends to overproduce methane CH4 ice, which apparently can be caused by differences in the reaction networks.
Fig. 5. Calculated chemical abundances of icy species in the cold core model. Panel (a): abundances, relative to those of H2O ice, of major icy molecules. The black curve is the average ice thickness b̄ on grains, expressed in MLs. Panel (b): selected other abundant ice molecules. For convenience, b̄ is here divided by 100. In both panels, solid lines are for Model const with variable ED and dotted lines are for Model const_noEd without the variable-ED approach.

3.2. General results

As context for the discussion that follows, in Figure 6 we show the general chemical results – the abundances of major species – for the prestellar core model.
Fig. 6. Overall chemical results for the prestellar core Model full with variable ED and other features enabled. Panel (a): abundances of primary gaseous species relative to that of H2. Panel (b): species that are abundant in ices; solid lines are for gas-phase and dashed lines for solid-phase abundances, relative to H2. Panel (c): percentage, relative to H2O ice, of major icy molecules. The black curve is the average ice thickness b̄ on grains.
With regard to the evolution of ices, three periods can be discerned. First is the translucent cloud, dominated by gaseous atomic species and CO, while the ice thickness remains below 1 ML. This period lasts for ≈1.2 Myr, until nH exceeds 10^4 cm^-3 and AV ≈ 1.8 mag. Second is the ice formation period, when core contraction becomes increasingly rapid and up to 99 % of metals accrete onto grains, at t = 1.5 Myr, nH = 3 × 10^5 cm^-3, and AV = 14 mag. Third, during the remaining 25 kyr, the density increases thirtyfold (with a presumed further collapse towards the first core) with little change in the composition of ices. This period is of limited interest for this study. We continue by describing the translucent and ice formation periods in more detail.

3.2.1. The sub-ML regime in the translucent cloud

Adsorbed molecules with an average ice thickness below 1 ML are likely present in diffuse and translucent molecular envelopes and thus form a part of sightlines towards prestellar cores. In Model full, this is the period when the H-bond ED rule of bare grains is in effect (Section 2.5). The lack of hydrogen bonding for surface species does not lower their ED and Ediff enough to significantly promote evaporation or diffusion across the grains (Table 5). However, it is sufficient to elevate chemical desorption and photodesorption yields (Sections 2.6 and 2.7). Surface chemistry in the translucent cloud is regulated by an interplay between accretion, photodesorption, and chemical desorption.
Accreted molecules can be desorbed directly by photodesorption or dissociated into fragments. Chemical radicals created on the surface or accreted from the gas react, and the product molecules can be desorbed via chemical desorption. Two groups of surface-origin species can be discerned whose gas abundances are regulated by desorption. First, species like H2O, NH3, CH4, CO2, and CH3OH reach high gas-phase abundances relative to H2 (n/nH2), in excess of 10^-9, thanks to photodesorption and rather high surface abundances of > 10^-8. Second, a variety of species have highly efficient chemical desorption and, in combination with surface photodissociation, reach gas-phase n/nH2 > 10^-12. These include, for example, hydrogen peroxide H2O2, which is subject to a strong H-bond rule (Table 5) and whose abundance changes by about an order of magnitude between Models full and noEd. In absolute numbers, most COMs have gas-phase n/nH2 below 10^-13 during the translucent period.
Fig. 7. Variable-ED-induced changes in the translucent cloud. Comparison between Models full (solid lines) and noEd (dotted lines) for selected species, including those with gas-phase abundances most affected by ED changes via the H-bond rule: inorganic species in panel (a), carbon chains in panel (b), and other organic species in panel (c). Panel (d) shows the growth of ice mantles on the grains, with thinner lines indicating smaller grains. The changes are caused by the lack of strong H-bonding on bare grains in Model full.
However, for methanol and a few related compounds, such as CH3O, the abundances exceed 10^-12, and thus the effect of the lack of H-bonds on bare grains could be observed. Figure 7 illustrates the above discussion and also shows formaldehyde H2CO, which likewise has effective chemical desorption. Its ED is unaffected by the H-bond rule, which means that it has similar translucent-cloud abundances in Models full and noEd. The abundance of CH3OH and CH3O is higher by a factor of ≈5 in Model full relative to Model noEd; for H2O2 and O2H this factor is 6...10. These factors become lower as time goes by because, with the accumulation of ice, the importance of the non-existence of H-bonds on bare grains decreases. The lack of H-bonds on bare grains has a significant effect on overall ice abundances in the sub-ML regime. The release of more H2O, CO2, and NH3 to the gas phase in Model full results in overall adsorbed-species abundances that are lower by a factor of 2...3 in the variable-ED model. Overall, the lack of H-bonds on bare grains delays the formation of the first ice monolayer by 40 kyr for the smallest and 67 kyr for the largest grains. This delay has an inverse effect on H2O and CO2, whose gas-phase abundances are lower by a factor of 2 because there are fewer of these molecules on the surface, available for photodesorption. In turn, and to a similar extent, a lower gas-phase H2O abundance positively affects the abundance of carbon chains, because H2O interferes with carbon-chain gas-phase production by reducing the abundance of their building blocks, such as CH, as illustrated in panel (b) of Figure 7. The increase of carbon-chain abundances by about a factor of two may not be high, but its significance lies in that virtually all unsaturated chains are affected by it.

3.2.2. Ice formation epoch

The ice formation period is when most of the ice mass is accreted onto the grains and the ice acquires its initial composition.
If the ice is not destroyed (e.g., by falling into the protostar), this composition can be further modified by heating or photoprocessing. Ice formation is characterised by the initial formation of a 1 ML thick H2O-CO2 layer at the end of the translucent cloud period. It is followed by further accumulation of H2O and CO2. When Td drops below 12 K and CO becomes immobile, CO2 surface synthesis stalls (Pauly & Garrod 2016) and CO ice accumulates more rapidly, eventually overtaking CO2 but not H2O. In Model full, a lower (typically by about 10 %, i.e., np/pol ≈ 0.9) varying ED has two counteracting effects on the abundance of major icy species, compared to Model noEd with unchanging ED. First, a lower diffusion energy for surface CO allows it to remain mobile for longer in Model full and thus produce more CO2 in reactions with O and OH. This effect becomes visible when Td drops below 14 K. Second, a lower ED,CO allows for more efficient desorption of CO, retaining it longer in the gas phase, which means that it accretes later at lower Td and produces less CO2. The balance of these two effects depends on the evolution of the modelled cloud and the choice of model parameters, such as the efficiency of various desorption mechanisms, the grain size distribution, the Td of grains of different sizes, and, in our model, also the variable-ED approach.
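The strong sensitivity of surface mobility and desorption to a modest ED change follows from the standard Arrhenius-type surface rate k = ν·exp(−E/T). A minimal sketch, assuming a typical characteristic frequency of 10^12 s^-1 and a purely illustrative CO barrier (not the model's actual values):

```python
import math

NU = 1e12  # characteristic vibration frequency, s^-1 (typical assumed value)

def thermal_rate(E_K, T_K):
    """Arrhenius-type surface rate k = nu * exp(-E / T), E and T in kelvin."""
    return NU * math.exp(-E_K / T_K)

E_CO = 600.0  # illustrative CO barrier in K; an assumption of this sketch
for Td in (14.0, 12.0):
    boost = thermal_rate(0.9 * E_CO, Td) / thermal_rate(E_CO, Td)
    print(f"Td = {Td:.0f} K: a 10 % lower ED speeds the rate up by {boost:.0f}x")
```

A 10 % reduction in the barrier thus changes rates by roughly two orders of magnitude at 12...14 K, which is why the np/pol ≈ 0.9 correction keeps CO mobile down to lower Td in Model full.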
Table 7 shows that Model full has a CO2 ice abundance higher by a factor of 1.4 than Model noEd, i.e., CO2 production at lower grain temperatures has been more significant. [Juris Kalvāns et al.: A multigrain-multilayer astrochemical model with variable desorption energy for surface species, page 10 of 15]
Fig. 8 (plot data omitted; panels for H2O, CO, CO2, CH3OH, N2, NH3, CH4 and H2O2). Variable-ED induced changes during the ice formation epoch: calculated n/nH2 for species ending up with high ice abundance. Solid and dashed lines are for Model full gas-phase and solid species, respectively. Dotted and dash-dotted lines are for gas and solids in Model noEd, respectively.
The overall effect of the addition of varying ED in the model during the freeze-out stage is higher gas-phase abundances for most species, typically elevated by a factor of 1.5...2 (abundance ratio Model full/noEd). This occurs thanks to the more efficient chemical desorption and photodesorption. Figure 8 demonstrates several general variable-ED effects that affect major icy species. First, the period between 1.19 and 1.26 Myr differs most because the first ice MLs have formed in Model noEd but not on the grains of Model full. The synthesis of H2O, NH3, CO2 and CH3OH depends on intermediate radical species with hydrogen bonds (OH, NH, NH2, CH2OH).
The non-existence of strong H bonds on the bare surface and the resulting efficient chemical desorption (Table 6) is what delays the accumulation of the ice layer in Model full. Second, the lower overall ED in ices continues to heighten fCD and Ypd for the remainder of cloud evolution, ensuring higher gas-phase abundances for CO, CO2, N2 and CH4 in Model full. An ED,CH4 lowered by about 100 K means that desorption by the H+H surface reaction heat works on methane in Model full but not in Model noEd. The third effect is chemistry in bulk ice, which is the primary place of synthesis for hydrogen peroxide H2O2 and also contributes to the formation of CO2. Bulk-ice synthesis becomes possible only when >1 MLs of ice have formed. It switches on rapidly and has an immediate effect on H2O2 abundances in ice. H2O2 appears also in the gas because our model allows photodesorption of bulk ice equal to up to 3 MLs, in addition to 1 surface ML (see Section 2.6).
Fig. 9 (plot data omitted; left panel: H2O, CO, CO2; right panel: N2, NH3, CH3OH). Evolution of relative abundances for major icy species on grain populations with different sizes.
3.2.3. Distribution of ices
During the ≈50 kyr after the ice formation epoch, residual gas molecules continue to be depleted onto grains. No equilibrium is established because the gas density continues to increase rapidly. The final ice composition significantly differs between separate grain size populations. Figure 9 shows that all grains achieve a similar ice thickness in the range of 70...75 MLs. Notably, at the end of the simulation with variable ED, the smallest grains carry 59 % of all CO2 and 40 % of CO ice, while other icy species are more evenly distributed between all grain size bins.
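Why the smallest grains carry a disproportionate share of the ice follows from their share of the total surface area. A rough sketch, assuming an MRN-like n(a) da ∝ a^-3.5 da size distribution over the paper's five bin centres (the slope and the logarithmic binning are assumptions of this illustration):

```python
# For logarithmically spaced bins the bin width scales with a, so the
# per-bin surface-area weight goes as a^2 * a^-3.5 * a = a^-0.5.
bin_centres_nm = [37, 58, 92, 146, 232]   # grain radii from Fig. 10
weights = [a ** -0.5 for a in bin_centres_nm]
total = sum(weights)
for a, w in zip(bin_centres_nm, weights):
    print(f"{a:3d} nm bin: {100.0 * w / total:4.1f} % of total grain surface area")
```

Under these assumptions the 37 nm bin alone holds roughly 30 % of the surface area, consistent in spirit with the 37 % of all icy molecules that the smallest grains end up carrying; their higher Td does the rest for CO2.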
These results can be explained primarily by the higher temperature of the small grains, which makes CO2 surface synthesis faster and possible for a longer time in cloud evolution. Because a single CO2 molecule is formed instead of two molecules of H2O and CO, the ice layer on the small grains is not the thickest (72 MLs at t = 1.55 Myr).
Fig. 10 (plot data omitted; grain size bins 37, 58, 92, 146 and 232 nm). Distribution of major icy species in different grain size bins, indicated by their size (nm), at the final time t = 1.55 Myr. The four ice layers are numbered: layer (1) is the surface, while layer (4) is adjacent to the refractory grain core. Species within a single layer are intermixed; they are shown separately here to illustrate their proportions within the layer. Although the thickness is similar for all bulk-ice layers for a given grain size, the species in the outer layers are more abundant because the grain has grown. This effect is more pronounced for the smallest grains. Most of the “other” molecules are NH3 and also CH4.
Concentration of CO2 on smaller grains occurs also in other multigrain models (Pauly & Garrod 2016; Iqbal & Wakelam 2018). Figure 10 illustrates the proportions of icy species in the grain size bins at the simulation end time t = 1.55 Myr. The smallest 0.037 µm grains carry the highest amount of ice – 37 % of all molecules, compared to only 9 % on the largest 0.232 µm grains. Unlike in other simulations, especially two-phase models without bulk-ice layers, such as Iqbal & Wakelam (2018) or Sipilä et al. (2020), cosmic-ray induced desorption has no major effect on the distribution of ices between grains of different sizes. 3.3.
Effects of ED-dependent chemical and photodesorption
Photodesorption and chemical desorption (Sections 2.6 and 2.7) are two mechanisms whose yields quantitatively depend on ED and are calculated separately for each surface molecule in each of the five grain size bins in the program. Thus, fCD and Ypd change along with ED, whose variation is described in Sections 2.4 and 2.5. In other words, chemical desorption and photodesorption are instruments that help communicate the variations in ED to the gas-ice balance. The efficiency of these mechanisms is anchored in experimental data and often is about an order of magnitude higher than the safe assumptions applied in astrochemical models during preceding decades. For comparison with Model full, we ran simulations with the following changes:
– Model noPD, where the ED-dependent photodesorption yield was replaced with a single constant value Ypd = 0.001 for all species;
– Model noCD, where the ED-dependent chemical desorption efficiency was replaced with a constant 3 % of all surface reaction products going to the gas phase (fCD = 0.03; a simplified version of the reactive desorption by Garrod et al. 2006). This fCD is significantly higher than that used by Garrod et al. (2022).
The above means that in Models noPD and noCD, photo- or chemical desorption are not disabled, only significantly reduced for simple icy molecules. The fixed fCD and Ypd values are close to previously commonly used desorption parameters. For complex molecules with high ED and a high number of atoms, these fixed desorption efficiencies are actually higher when compared to the fully ED-dependent Model full. This is why Model noCD shows a peak gas-phase methanol abundance of 1.5×10^-8 relative to H2, which is four times higher than that of Model full (Figure 11).
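The qualitative shape of an ED-dependent chemical desorption efficiency can be sketched with a Boltzmann-like factor. This is an illustration only, not the Minissale et al. (2016a) prescription used in the model, and all numbers below (binding energies, reaction heat, the efficiency parameter) are hypothetical:

```python
import math

def f_cd_schematic(E_des_K, dH_reac_K, eps=0.05):
    """Schematic chemical-desorption fraction: the deeper the binding well
    relative to the usable fraction of the reaction heat, the smaller the
    fraction of products ejected to the gas.  Illustration only."""
    return math.exp(-E_des_K / (eps * dH_reac_K))

F_CD_FIXED = 0.03   # the constant used in Model noCD
DH_REAC = 2.0e4     # hypothetical reaction heat, K

for name, E_des in [("light hydride, low E_D", 1200.0),
                    ("COM-like, high E_D", 5000.0)]:
    f = f_cd_schematic(E_des, DH_REAC)
    print(f"{name}: schematic f_CD = {f:.3f} (fixed value: {F_CD_FIXED})")
```

The crossover mirrors the behaviour described in the text: for simple species an ED-dependent efficiency sits well above the fixed 3 %, while for heavy, strongly bound molecules the fixed value can actually overestimate desorption, which is why Model noCD overshoots the gas-phase methanol abundance.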
In Model noPD, the first ice ML is formed at 1.19 Myr on the smallest grains and 1.23 Myr on the largest grains, compared to 1.23...1.24 Myr for Model full. Such moderately earlier ice layer formation is associated with the build-up of solid CO2 and H2O ices, which have translucent-stage abundances higher by factors of 3 and 2 relative to Model full, respectively, thanks to their lower Ypd on bare grains. Because CO2 formation occurs mostly on the smallest grains thanks to their higher Td, the first ice layer on the small grains in Model noPD forms 40 kyr earlier than in Model full. For other grain size bins this difference is 16 kyr. Thanks to this advantage, the smallest grains grow the thickest ice, 77 MLs, compared to 65...70 MLs for other grain size bins in Model noPD. Once the first layer has formed, the ice mass in Model full catches up with Model noPD within 0.2 Myr because accretion dominates over desorption in the dense core. Table 7 shows that the effect of our ED-dependent photodesorption approach is moderate, with the H2O:CO:CO2:CH3OH:NH3 final ice abundance ratio being 100:60:39:10:3 in Model noPD and 100:63:34:10:3 in Model full. Figure 11 shows that the changes introduced by chemical desorption are more pronounced than those of photodesorption. The first ice ML in Model noCD forms already at t = 1.03 Myr and on the largest rather than the smallest grains (the latter being the case in the other models). Such a reversed trend can be explained by more efficient hydrogenation of surface O on the lower-temperature large grains, along with the fact that the H2O synthesis rate is not diminished by efficient chemical desorption in Model noCD. The abundance of atomic H on grains is in the vicinity of 10^-12 for all grain size bins in most models. Consequently, the small 0.037 µm grains achieve their first ice ML only at 1.08 Myr and grow the thinnest ice layer of 66 MLs, while for other grain size bins the end thickness is similar at 72...75 MLs.
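A geometric aside helps with these bin-by-bin thickness differences: tens of monolayers change the collecting area of a small grain far more than that of a large one. A minimal sketch, assuming a monolayer thickness of ~0.3 nm (an assumed value; the model's own layer bookkeeping is more detailed):

```python
ML_NM = 0.3  # assumed thickness of one ice monolayer, nm

def relative_area(core_radius_nm, n_layers):
    """Surface area of an ice-covered grain relative to its bare core."""
    r = core_radius_nm + n_layers * ML_NM
    return (r / core_radius_nm) ** 2

for a in (37.0, 232.0):
    print(f"{a:.0f} nm core with 70 MLs: area grows {relative_area(a, 70):.2f}x")
```

For the smallest grains ~70 MLs more than doubles the geometric cross-section, while the largest grains gain only about 20 %; this feedback is behind the runaway freeze-out tendency of multigrain models discussed in Section 3.3.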
Fig. 11 (plot data omitted; panels for O2, NH3, H2O, CO, CH3OH, CO2 and ice thickness b in MLs). Ice growth and chemical desorption. Comparison of abundances between Model full gas (solid lines) and icy (dashed lines) species with those of Model noCD (dotted and dash-dotted lines for gas and ices, respectively). In the ice thickness plot, thicker lines are for larger grains, according to Table 3.
The rapid ice accumulation in Model noCD means that the first ice ML forms already at AV = 1.1 mag. In Model full this happens only at AV = 1.8 mag. Observable water ice first appears at 3.2 mag extinction along the line of sight (Whittet et al. 2001). Our model is too simple to discern which of the two mechanisms – the high-efficiency chemical desorption (Minissale et al. 2016a) or the low-efficiency reactive desorption (Garrod et al. 2006) – is more consistent with observations, because this depends on a number of factors, such as cloud history, density, irradiation intensity and geometry, and the lifetimes of the translucent and dark core stages (see Hocuk et al. 2016). The earlier ice accumulation in Model noCD starts at Td in the range of 12...16 K (a degree higher than in Model full) and at about 1.5 times higher interstellar irradiation.
Both of these aspects promote surface oxidation of CO, resulting in higher initial CO2 ice abundances. At later and colder stages, rapid surface synthesis of H2O wins the competition for surface O and OH because of the inefficiency of the chemical desorption of OH and H2O in hydrogenation reactions in Model noCD. Oxygen chemistry in Model noCD is notably changed by the appearance of surface O2. The early accumulation of O atoms on relatively warm grains, in combination with inefficient hydrogenation, allows them to combine into O2, with the newly formed O2 mostly remaining on the surface (see also Pauly & Garrod 2016). Oxygen ice takes up 3.4 % of the total oxygen budget, an order of magnitude more than in Model full. Abundant O2, together with a 60 % higher abundance of H2O2, reduces the abundance of water ice by one fifth, which consequently increases the ratios of carbon oxide ices relative to H2O. The absolute abundances of CO and CO2 change only within 7 % relative to Model full. The H2O:CO:CO2:CH3OH:NH3:O2 final ice abundance ratio in Model noCD is 100:73:44:10:7:5. This ratio and Figure 11 reveal another important result – the lack of effective chemical desorption allows the formation of ammonia ice with H2O:NH3 ratios closer to the ≈10 % value indicated by observations (Boogert et al. 2011). Thus, our results predict that the chemical desorption efficiency of the nitrogen hydrogenation products NH, NH2 and NH3 should be about an order of magnitude lower than indicated by the method of Minissale et al. (2016a). Chemical desorption also has a significant effect in the pseudo-time-dependent Model const, where, when the Minissale et al. (2016a) chemical desorption is replaced with fCD = 3 %, the first full ice ML forms already at t = 2 kyr, while 90 % freeze-out is reached about 0.1 Myr earlier.
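For orientation, the freeze-out timescales discussed here can be estimated with the classical accretion time t = 1/(n σ v_th). The cross-section per H nucleus below is a textbook order-of-magnitude value, not the one used in this multigrain model:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_AMU = 1.66053907e-27  # atomic mass unit, kg
YR_S = 3.156e7          # seconds per year

def accretion_timescale_yr(n_H_cm3, T_gas_K, mass_amu, sigma_per_H_cm2=1e-21):
    """Classical freeze-out timescale 1 / (n_H * sigma * v_th), where sigma
    is the total grain cross-section per H nucleus (assumed ~1e-21 cm^2)."""
    v_th_cms = math.sqrt(8.0 * K_B * T_gas_K / (math.pi * mass_amu * M_AMU)) * 100.0
    return 1.0 / (n_H_cm3 * sigma_per_H_cm2 * v_th_cms) / YR_S

print(f"CO at n_H = 1e4 cm^-3, 10 K: {accretion_timescale_yr(1e4, 10.0, 28.0):.1e} yr")
```

At the densities of a forming dense core this gives a few times 10^5 yr, consistent with freeze-out operating on the ~0.1 Myr scales by which the models above differ.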
As far as we know, this is the first published study of a multigrain astrochemical model considering efficient chemical desorption based on the work of Minissale et al. (2016a). While there are a few other similar “firsts”, combining multiple grain size bins with chemical desorption is important. In multigrain models, the grain surface area is higher by about a factor of two, allowing for an earlier accretion of ices. This means accretion at higher gas temperatures of around 20 K, with higher thermal velocities, which makes accretion even more rapid. Small grains increase their surface area with each adsorbed ice ML, leading to the possibility of runaway freeze-out in multigrain models. When chemical desorption is added, it delays the formation of the first ice ML on the bare grain, where it is most effective. Further ice growth continues to be hampered because tens of per cent of hydrogenation reaction products go to the gas phase. Therefore, for multigrain models considering bulk ice (which means that a major part of the ice is isolated from desorption) and efficient chemical desorption, a completely different gas-grain dynamics occurs, which produces results that can be superficially similar to those of much simpler models.
4. Conclusions
The inclusion of chemical desorption (Minissale et al. 2016a) in the multigrain-multilayer gas-surface chemical model turned out to be of major importance. In effect, chemical desorption decreases the rate of icy molecule synthesis, delaying the formation of the ice layer by almost 0.2 Myr, as demonstrated by the comparison of Models full and noCD in Section 3.3. This aspect is not readily apparent in pseudo-time-dependent models, such as Vasyunin et al. (2017) and Rawlings & Williams (2021), and has been largely missed so far.
Variable ED has its most visible effect on the abundances of major icy species by increasing the amount of CO2 at the expense of CO and H2O. The final calculated relative ice abundances H2O:CO:CO2 were 100:63:34, in agreement with heavily shielded dense cores (Whittet et al. 2011; Boogert et al. 2013). The removal of the ED-dependence for one or two molecular-level processes does not disrupt the calculated ice composition. The latter finding indicates that the “mysterious” mechanism that regulates the balance between water and carbon oxide ices, sought by authors such as Nejad & Williams (1992); Bergin et al. (1995); Roberts et al. (2007); Kalvāns (2015), is the combination of effective, ED-dependent chemical desorption, photodesorption and desorption by H+H surface reactions, all together. If one of these mechanisms is ineffective in a given cloud core, the others can partially offset it, retaining the characteristic H2O > [CO ≈ CO2] ice abundance sequence. The latter aspect explains the ubiquity of the observed H2O:CO:CO2 ice ratios (Gibb et al. 2004; Whittet et al. 2007). Our model is unable to explain a separate problem – the relatively low depletion of CO in interstellar clouds (Leger 1983; Leger et al. 1985; Whittet et al. 2010). Other mechanisms that can contribute to the balance between water and carbon oxides are photodesorption by infrared photons (Williams et al. 1992; Dzegilenko & Herbst 1995; Santos et al. 2023) and cosmic-ray induced desorption (Section 2.1), which was considered but does not have a great effect. An important role is played by the grain size distribution, with smaller grains being warmer and more efficient at producing CO2 ice. The abundance of solid CH3OH, in combination with desorption facilitated by the lower ED in our model, is sufficient to explain its observed gas-phase abundance in dark cores (Bacmann et al. 2012; Cernicharo et al. 2012).
This is not true for most other COMs, probably due to a limited reaction network. Regarding NH3, the underproduction of ammonia in Model full might be explained by an over-effective chemical desorption of hydrogen nitrides in our model. A lower fCD for these molecules is also supported by Sipilä et al. (2019). The chemistry of COMs and nitrogen both merit detailed investigation with this model. Summarizing, the model combines ten features, which are gradual steps in theoretical (surface) astrochemistry: several grain size bins (1), bulk ice (2) that is chemically active (3) and consists of several separate layers (4), experiment-based estimates for chemical (5) and photodesorption (6), updated estimates for cosmic-ray induced desorption (7) and desorption by surface H atom combination reaction heat (8), a general approach for adjusting ED on bare grains (9) and a method for estimating ED in a non-polar icy environment (10). While any one of these features may not contribute much and its method can be improved, all together they bring new understanding of how interstellar grain surface chemistry operates, as indicated by previous studies that have investigated many of these features separately (Section 1). Features (6), (9) and (10) are described in a novel way in this study. The above-mentioned delay in ice formation occurs because of the combination of features (5) and (9), assuming a carbonaceous grain surface. We plan to apply this model in further and more specific studies of problems in astrochemistry. Acknowledgements. This research is funded by the Latvian Science Council grant “Desorption of icy molecules in the interstellar medium (DIMD)”, project No. lzp-2021/1-0076. JK thanks Ventspils City Council for support. This research has made use of NASA's Astrophysics Data System." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.06139v1", |
| "title": "DiffHarmony: Latent Diffusion Model Meets Image Harmonization", |
| "abstract": "Image harmonization, which involves adjusting the foreground of a composite\nimage to attain a unified visual consistency with the background, can be\nconceptualized as an image-to-image translation task. Diffusion models have\nrecently promoted the rapid development of image-to-image translation tasks .\nHowever, training diffusion models from scratch is computationally intensive.\nFine-tuning pre-trained latent diffusion models entails dealing with the\nreconstruction error induced by the image compression autoencoder, making it\nunsuitable for image generation tasks that involve pixel-level evaluation\nmetrics. To deal with these issues, in this paper, we first adapt a pre-trained\nlatent diffusion model to the image harmonization task to generate the\nharmonious but potentially blurry initial images. Then we implement two\nstrategies: utilizing higher-resolution images during inference and\nincorporating an additional refinement stage, to further enhance the clarity of\nthe initially harmonized images. Extensive experiments on iHarmony4 datasets\ndemonstrate the superiority of our proposed method. The code and model will be\nmade publicly available at https://github.com/nicecv/DiffHarmony .", |
| "authors": "Pengfei Zhou, Fangxiang Feng, Xiaojie Wang", |
| "published": "2024-04-09", |
| "updated": "2024-04-09", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "DiffHarmony: Latent Diffusion Model Meets Image Harmonization", |
| "main_content": "INTRODUCTION Image composition faces a notable hurdle in achieving a realistic output, as the foreground and background elements may exhibit substantial differences in appearance due to various factors such as brightness and contrast. To address this challenge, image harmonization techniques can be employed to ensure visual consistency. In essence, image harmonization entails refining the appearance of the foreground region to align seamlessly with the background. The rapid advancements in deep learning approaches [1–12] have contributed significantly to the progress of the image harmonization task. The input for the image harmonization task consists of a composite image and a foreground mask used to distinguish between the foreground and background, with the output being a harmonized image. In other words, both the input and output of the image harmonization task are in image format. [*Corresponding Author.] Therefore, it can be viewed as an image-to-image translation task. Recently, diffusion models [13–15] have significantly advanced the progress of image-to-image translation tasks. For instance, Chitwan et al. [16] proposed Palette, which is a conditional diffusion model that establishes a new SoTA on four image-to-image translation tasks, namely colorization, inpainting, uncropping, and JPEG restoration. Hshmat et al. [17] proposed SR3+, which is a diffusion-based model that achieves SoTA results on the blind super-resolution task. Directly applying the above diffusion models to the image harmonization task faces the significant challenge of enormous computational resource consumption due to training from scratch. For instance, Palette is trained with a batch size of 1024 for 1M steps and SR3+ is trained with a batch size of either 256 or 512 for 1.5M steps. To address this issue, a straightforward approach is to construct an image harmonization model based on an off-the-shelf latent diffusion model [18].
Since the images generated by latent diffusion models trained on large-scale datasets are mostly harmonious, an image harmonization model built on top of one can converge quickly. However, applying a pre-trained latent diffusion model to the image harmonization task also faces a significant challenge, which is the reconstruction error caused by the image compression autoencoder. The latent diffusion model takes as its input a feature map of an image that has undergone the KL-reg VAE encoding (compression) process, resulting in a resolution reduced to 1/8 of the original image. In other words, if a 256px-resolution image and mask are input into the latent diffusion model, it will process a feature map and mask with a resolution of only 32px. This makes it difficult for the model to reconstruct the content of the image, especially in the case of faces, even if it can generate harmonious images. Jiajie et al. [19] tried to build an image harmonization model on the pre-trained Stable Diffusion model but did not consider this issue; they could only obtain results significantly worse than SoTA. To address this issue, in this paper, we construct an image harmonization model called DiffHarmony based on a pre-trained latent diffusion model. DiffHarmony tends to generate harmonious but potentially blurry initial images. Therefore, we propose two simple but effective strategies to enhance the clarity of the initially harmonized images. One is to resize the input image to a higher resolution during inference, generating images at a higher resolution. The second is to introduce an additional refinement stage that utilizes a simple UNet-structured model to further alleviate the image distortion. Overall, the main contribution of this work is twofold. First, a method is proposed to enable pre-trained latent diffusion models to achieve SoTA results on the image harmonization task.
Secondly, a wealth of experiments is designed to analyze the advantages and disadvantages of applying pre-trained latent diffusion models to the image harmonization task, providing a basis for future improvements. 2 METHOD In this section, we first present the process of modifying a pretrained latent diffusion model, i.e., Stable Diffusion, to do the image harmonization task. Then, we elucidate the techniques to mitigate the image distortion issue. The overall architecture of our method is displayed in Figure 1. Figure 1: Architecture of our method. In the harmonization stage involving DiffHarmony, the composite image I_c and foreground mask M are concatenated as the image condition after being encoded through the VAE and downsampled, respectively. The diffusion model performs inference, and the output is mapped back to image space through the VAE decoder, resulting in Ĩ_h. In the refinement stage, we scale down Ĩ_h, I_c, M and concatenate them together as input to the refinement model. After adding the refinement model output to the downscaled Ĩ_h, the final refined image I_h is obtained. 2.1 DiffHarmony: Adapting Stable Diffusion In the typical image harmonization task setup, one inputs a composite image I_c along with its corresponding foreground mask M. The model output is the harmonized image I_h. Due to this workflow, image harmonization can be categorized as a conditional image generation task, thus we can try to utilize a pretrained image generation model. Stable Diffusion is the most suitable choice as it is open source, pretrained on a large amount of diverse data, and already capable of generating images with reasonable content and lighting.
However, we need to make two adaptations: 1) add the additional inputs I_c and M to the Stable Diffusion model; 2) use null text input (because text information is not available in the traditional harmonization task). 2.1.1 Inpainting Variation. Following previous image-conditioned diffusion models [20–22], we can extend the input channel dimension by concatenating the image conditions and the noisy image input. In image harmonization, the conditions are I_c and M. Stable Diffusion inpainting suits our needs. It incorporates additional input channels for masks and masked images and is specifically fine-tuned to do the image inpainting task; like image harmonization, it generates new foreground content while keeping the background unchanged. 2.1.2 Null Text Input. In the actual generation process, Stable Diffusion typically employs the Classifier-Free Guidance (CFG) [23] technique. To perform CFG during inference, one needs to train both an unconditional denoising diffusion model p_θ(z) (parameterized as ε_θ(z)) and a conditional denoising diffusion model p_θ(z|c) (parameterized as ε_θ(z|c)). In practice, we use a single neural network to incorporate both. For the unconditional part, we can simply input an empty token ∅, i.e., ε_θ(z) = ε_θ(z, c = ∅). During inference, we use the formula ε̃_θ(z, c) = (1 + w) · ε_θ(z, c) − w · ε_θ(z) to obtain the noise estimate at each step. In the image harmonization task, we utilize the unconditional part of Stable Diffusion by inputting only the image conditions while leaving the text empty.
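The guidance formula above is a one-liner; a small sketch with toy arrays (the noise values and guidance weight here are arbitrary, not taken from the paper):

```python
import numpy as np

def cfg_noise(eps_cond, eps_uncond, w):
    """Classifier-free guidance combination from the text:
    eps_tilde = (1 + w) * eps(z, c) - w * eps(z)."""
    return (1.0 + w) * eps_cond - w * eps_uncond

eps_c = np.array([0.2, -0.1])   # toy conditional noise estimate eps(z, c)
eps_u = np.array([0.1,  0.1])   # toy unconditional estimate eps(z, c = empty)
print(cfg_noise(eps_c, eps_u, w=2.0))   # extrapolates away from the unconditional
```

With w = 0 the formula degenerates to the plain conditional estimate, and larger w pushes the prediction further in the direction the condition suggests; in DiffHarmony the "condition" is the concatenated composite image and mask while the text branch receives the empty token.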
2.2 Alleviate Image Distortion Stable Diffusion uses its VAE encoder to compress the image to a lower resolution, on which the diffusion part does training and inference. The denoised output is mapped back to image space through the VAE decoder. When the image resolution is too low, severe image distortion occurs. It can lead to visibly altered object shapes or fluctuations in surface textures. Since image harmonization tasks typically use pixel-level evaluation metrics (e.g., mean squared error), these artifacts can significantly impact the model's overall performance. 2.2.1 Harmonization At Higher Resolution. We propose using higher-resolution image inputs for DiffHarmony. In previous work, models are typically trained and evaluated at a resolution of 256px, but we notice that the image distortion problem then becomes excessively severe, which limits the upper bound of image generation quality. Besides, performing inference with Stable Diffusion at 256px does not yield reasonable outputs since it is trained exclusively on 512px images. So we perform inference at 512px or higher resolution. To be consistent with other models in evaluation, we subsequently scale the outputs down to 256px. 2.2.2 Add Refinement Stage. To further mitigate the image distortion issue, we introduce an additional refinement stage to enhance the output of DiffHarmony. After the harmonization stage, we obtain Ĩ_h. The refinement stage then makes Ĩ_h smoother and repairs its texture. We also input I_c and M together because they provide information about texture and shape in the uncorrupted image. All inputs are scaled down to 256px and concatenated along the channel dimension. We introduce a skip connection between the input Ĩ_h and the output I_h, allowing the model to learn the residual instead of outputting the refined image directly, which accelerates training convergence. 3 EXPERIMENT 3.1 Experiment Settings 3.1.1 Dataset.
We use iHarmony4[4] for training and evaluation. iHarmony4 consists of 73,146 image pairs and comprises four subsets: HAdobe5k, HFlickr, HCOCO, and Hday2night. Each sample is composed of a natural image, a foreground mask, and a composite \fDiffHarmony: Latent Diffusion Model Meets Image Harmonization , , Dataset Metric Composite DIH[3] S2AM[24] DoveNet[4] BargainNet[25] Intrinsic[26] RainNet[27] iS2AM[7] D-HT[6] SCS-Co[28] HDNet[10] Li[19] \ud835\udc52\ud835\udc61\ud835\udc4e\ud835\udc59. Ours HCOCO PSNR\u2191 33.94 34.69 35.47 35.83 37.03 37.16 37.08 39.16 38.76 39.88 41.04 34.33 41.25 MSE\u2193 69.37 51.85 41.07 36.72 24.84 24.92 29.52 16.48 16.89 13.58 11.60 59.55 9.22 fMSE\u2193 996.59 798.99 542.06 551.01 397.85 416.38 501.17 266.19 299.30 245.54 153.60 HAdobe5k PSNR\u2191 28.16 32.28 33.77 34.34 35.34 35.20 36.22 38.08 36.88 38.29 41.17 33.18 40.29 MSE\u2193 345.54 92.65 63.40 52.32 39.94 43.02 43.35 21.88 38.53 21.01 13.58 161.36 17.78 fMSE\u2193 2051.61 593.03 404.62 380.39 279.66 284.21 317.55 173.96 265.11 165.48 107.04 HFlickr PSNR\u2191 28.32 29.55 30.03 30.21 31.34 31.34 31.64 33.56 33.13 34.22 35.81 29.21 36.99 MSE\u2193 264.35 163.38 143.45 133.14 97.32 105.13 110.59 69.97 74.51 55.83 47.39 224.05 29.68 fMSE\u2193 1574.37 1099.13 785.65 827.03 698.40 716.60 688.40 443.65 515.45 393.72 199.59 Hday2night PSNR\u2191 34.01 34.62 34.50 35.27 35.67 35.69 34.83 37.72 37.10 37.83 38.85 34.08 38.35 MSE\u2193 109.65 82.34 76.61 51.95 50.98 55.53 57.40 40.59 53.01 41.75 31.97 122.41 24.94 fMSE\u2193 1409.98 1129.40 989.07 1075.71 835.63 797.04 916.48 590.97 704.42 606.80 502.40 Average PSNR\u2191 31.63 33.41 34.35 34.76 35.88 35.90 36.12 38.19 37.55 38.75 40.46 32.70 40.44 MSE\u2193 172.47 76.77 59.67 52.33 37.82 38.71 40.29 24.44 30.30 21.33 16.55 141.84 14.29 fMSE\u2193 1376.42 773.18 594.67 532.62 405.23 400.29 469.60 264.96 320.78 248.86 151.42 Table 1: Quantitative comparison across four sub-datasets of iHarmony4 and in general. 
The top two results are shown in red and blue; ↑ means higher is better and ↓ means lower is better. Following [4], we split the iHarmony4 dataset into training and test sets, containing 65,742 and 7,404 image pairs, respectively.
3.1.2 Implementation Detail. We train our DiffHarmony model from the publicly available Stable Diffusion inpainting checkpoint on HuggingFace.1 We use the Adam optimizer with β1 = 0.9 and β2 = 0.999, and keep an exponential moving average (EMA) of the model weights with a decay rate of 0.9999. The global batch size is 32. We initially train the model for 150,000 steps with a learning rate of 1e-5, then reduce the learning rate to 1e-6 and continue training for an additional 50,000 steps. Data augmentations including random resized crop and random horizontal flip are applied, and all images are resized to 512px. During training we use the same noise schedule as Stable Diffusion, but at inference we use the Euler ancestral discrete scheduler [29] to generate samples in only 5 steps. Our refinement model is based on the U-Net architecture. Ĩ_h is generated at 512px resolution and then downscaled to 256px. The harmonization stage can produce diverse results for the same input, which serves as a form of data augmentation when training the refinement model.
3.1.3 Evaluation. In accordance with [4, 25, 27], we use Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Foreground MSE (fMSE) on the RGB channels to evaluate the harmonization results. fMSE computes the MSE only within the foreground region, providing a measure of foreground harmonization quality.
3.2 Performance Comparison
3.2.1 Qualitative Results. We conduct a detailed analysis of model performance and compare qualitatively with previous competing methods. As shown in Figure 2, our method achieves better visual consistency than the other approaches.
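As a concrete reference, the three evaluation metrics above can be computed as follows. This is a minimal sketch with our own function and argument names; the official evaluation scripts may differ in resizing and averaging details.

```python
import numpy as np

def harmonization_metrics(pred, target, fg_mask):
    """PSNR, MSE, and foreground MSE (fMSE) on RGB images in [0, 255].

    fg_mask is a binary (H, W) foreground mask; fMSE restricts the
    squared error to the foreground region, as described in the text.
    """
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    se = (pred - target) ** 2          # per-pixel, per-channel squared error
    mse = se.mean()
    psnr = 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-10))
    fg = fg_mask.astype(bool)
    fmse = se[fg].mean() if fg.any() else 0.0  # MSE over foreground pixels only
    return psnr, mse, fmse
```

Note that PSNR here assumes an 8-bit dynamic range (peak value 255), matching evaluation on RGB images.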
3.2.2 Quantitative Results. Table 1 presents the quantitative results. It is evident that our method achieves superior results on most of the sub-datasets. While our method exhibits slightly lower PSNR than HDNet, this may be attributed to HDNet using the ground-truth background as input during both training and inference. Our method demonstrates significant performance improvements on the more challenging subsets HFlickr and Hday2night, indicating gains from pre-trained models when learning in domains with limited data. Li et al. [19] also use Stable Diffusion for the image harmonization task, but they employ a ControlNet-based [30] approach. As can be seen from Table 1, our method is far more advantageous.

1 https://huggingface.co/runwayml/stable-diffusion-inpainting

3.3 Ablation Study

| inf res | refine | PSNR↑ | MSE↓ | fMSE↓ |
| 512px | ✘ | 37.65 | 26.14 | 290.66 |
| 512px | ✔ | 39.47 | 19.59 | 205.07 |
| 1024px | ✘ | 40.12 | 15.56 | 166.19 |
| 1024px | ✔ | 40.44 | 14.29 | 151.42 |

Table 2: Ablation study on different input resolutions, with and without the refinement stage.

3.3.1 Higher Resolution at Inference. Table 2 shows how overall performance changes with the input resolution of the harmonization stage. Increasing the input resolution from 512px to 1024px yields a significant improvement on all metrics, which is reasonable: higher-resolution inputs suffer less information compression in the VAE.
3.3.2 Refinement Stage. We run inference with and without the refinement stage. As shown in Table 2, adding the refinement stage improves overall performance. The benefit is more prominent when the harmonization stage uses a lower input resolution, since the refinement stage and the higher-resolution input both address image distortion and thus complement each other.
3.3.3 Randomness.
DiffHarmony is essentially a generative model, but in the harmonization task we do not want pixel values to vary much across runs. We therefore analyze the randomness of our method: we obtain five groups of results using five different random seeds and compute their mean and standard deviation. As shown in Table 3, the variances are small, indicating that the harmonization results generated by our method are stable.

Pengfei Zhou, Fangxiang Feng, and Xiaojie Wang

Figure 2: Qualitative comparison on samples from the test set of iHarmony4.

| PSNR↑ | MSE↓ | fMSE↓ |
| 37.66 ± 0.02 | 25.44 ± 0.31 | 291.03 ± 2.08 |

Table 3: Randomness analysis. Although essentially a generative model, our method produces stable harmonized results.

3.4 Advanced Analysis
A noticeable fact is that DiffHarmony uses 512px images during training, while other harmonization models are trained at a resolution of 256px. To investigate the impact of this strategy on other models, we select the current state-of-the-art model, HDNet, and train it with 512px images, resulting in HDNet512. At test time, we use 1024px images as input, then downscale the harmonization results to 256px for metric calculation. Our preliminary results show that, compared to our method, HDNet512 achieves better PSNR and fMSE but slightly worse MSE. This is counterintuitive. We speculate that our method performs better on samples with larger foreground regions, leading to an overall improvement in MSE. To verify this hypothesis, following HDNet [10], we divide the data into three ranges based on the ratio of the foreground area to the entire image: 0%∼5%, 5%∼15%, and 15%∼100%, and calculate the metrics for each range separately. Our results, shown in Table 4, reveal that our method is worse than HDNet512 in the 0%∼5% range but outperforms it in the 15%∼100% range.
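The foreground-area stratification used in this analysis can be sketched as a simple helper. The function name is ours, and the assignment of samples lying exactly on the 5% and 15% boundaries is our assumption.

```python
import numpy as np

def foreground_ratio_bucket(mask):
    """Assign a sample to one of the foreground-area ranges used above:
    0%~5%, 5%~15%, or 15%~100% of the image area.

    mask: binary (H, W) foreground mask.
    """
    ratio = np.count_nonzero(mask) / mask.size
    if ratio < 0.05:
        return "0%~5%"
    if ratio < 0.15:
        return "5%~15%"
    return "15%~100%"
```

Metrics are then averaged within each bucket to produce the per-range comparison.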
| Model | Metric | 0%∼5% | 5%∼15% | 15%∼100% |
| HDNet512 | PSNR↑ | 45.64 | 39.97 | 34.59 |
| | MSE↓ | 3.16 | 11.33 | 47.19 |
| | fMSE↓ | 143.93 | 129.87 | 152.01 |
| Ours | PSNR↑ | 43.28 | 39.55 | 34.80 |
| | MSE↓ | 4.46 | 11.90 | 40.47 |
| | fMSE↓ | 173.10 | 126.69 | 128.45 |

Table 4: Comparison between HDNet trained with high-resolution images and our method. HDNet512 is trained with 512px images and takes 1024px inputs during inference, exactly matching the experimental setting of our method.

Once again, we emphasize that this gap arises from the higher information compression loss. However, it is possible that our method can achieve even better results with higher image resolutions or better pre-trained diffusion models.
4 CONCLUSION
In this paper, we propose a solution that achieves SOTA results on the image harmonization task based on the Stable Diffusion model. To address the compression loss caused by the VAE in latent diffusion models, we design two effective strategies: utilizing higher-resolution images during inference and incorporating an additional refinement stage. In addition, detailed experimental analysis shows that, compared with the previous SOTA method, our method shows clear advantages when the foreground area is large enough. This is strong evidence that our model's superior harmonization performance compensates for its reconstruction loss, laying a solid foundation for research on image harmonization using diffusion models." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.10573v2", |
| "title": "AAVDiff: Experimental Validation of Enhanced Viability and Diversity in Recombinant Adeno-Associated Virus (AAV) Capsids through Diffusion Generation", |
| "abstract": "Recombinant adeno-associated virus (rAAV) vectors have revolutionized gene\ntherapy, but their broad tropism and suboptimal transduction efficiency limit\ntheir clinical applications. To overcome these limitations, researchers have\nfocused on designing and screening capsid libraries to identify improved\nvectors. However, the large sequence space and limited resources present\nchallenges in identifying viable capsid variants. In this study, we propose an\nend-to-end diffusion model to generate capsid sequences with enhanced\nviability. Using publicly available AAV2 data, we generated 38,000 diverse AAV2\nviral protein (VP) sequences, and evaluated 8,000 for viral selection. The\nresults attested the superiority of our model compared to traditional methods.\nAdditionally, in the absence of AAV9 capsid data, apart from one wild-type\nsequence, we used the same model to directly generate a number of viable\nsequences with up to 9 mutations. we transferred the remaining 30,000 samples\nto the AAV9 domain. Furthermore, we conducted mutagenesis on AAV9 VP\nhypervariable regions VI and V, contributing to the continuous improvement of\nthe AAV9 VP sequence. This research represents a significant advancement in the\ndesign and functional validation of rAAV vectors, offering innovative solutions\nto enhance specificity and transduction efficiency in gene therapy\napplications.", |
| "authors": "Lijun Liu, Jiali Yang, Jianfei Song, Xinglin Yang, Lele Niu, Zeqi Cai, Hui Shi, Tingjun Hou, Chang-yu Hsieh, Weiran Shen, Yafeng Deng", |
| "published": "2024-04-16", |
| "updated": "2024-04-17", |
| "primary_cat": "cs.AI", |
| "cats": [ |
| "cs.AI", |
| "cs.CE", |
| "q-bio.BM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "AAVDiff: Experimental Validation of Enhanced Viability and Diversity in Recombinant Adeno-Associated Virus (AAV) Capsids through Diffusion Generation", |
| "main_content": "Introduction
Recombinant adeno-associated virus (rAAV) vectors have emerged as crucial components in the field of gene therapy. Since 2017, six new gene therapy products have been approved and over 2,000 pipelines are registered, underscoring the significance of rAAV in clinical applications [1]. However, all six approved products employ capsid sequences that originate from wild-type viruses found in nature. Although capsids derived from wild-type viruses exhibit broad tropism during treatment, rendering them non-specific in targeting pathogenic cells, their efficiency in transduction and gene expression within target cells is suboptimal. Consequently, they prove inadequate for treating diseases that primarily affect specific tissues such as the central nervous system, muscle, and heart. Thus, there is a consensus within the scientific community to develop enhanced vectors with improved specificity and transduction efficiency.
Several methods have been established to design and evaluate novel capsids; one promising approach is the design and screening of capsid libraries. This approach involves creating a pool of capsid-encoding DNA, designed either rationally or randomly [2][3][4][5][6]. These DNA sequences are integrated into VP expression cassettes to facilitate vector production. The plasmid-to-cell ratio is meticulously fine-tuned during vector manufacturing to promote the production of a specific capsid variant that encapsulates its own genome [7],[8]. Subsequently, this pool of vectors is produced and injected into a selection model as a mixture. The DNA signal delivered to the target cells is then retrieved and sequenced, representing the capsid variants that effectively transduce the target cells.
A common approach in library design involves either the insertion of a randomized peptide or the random mutation of specific amino acids within a tolerant domain [4]. Notably, a prominent variant that emerged through this library screening approach is PHP.B [9]. However, its ability to cross the blood-brain barrier (BBB) does not transfer from mice to humans, since the receptor for the PHP.B variant in brain microvascular endothelial cells is specific to particular mouse strains [10]. Several studies use a similar strategy of inserting 7 random amino acids into the capsid hypervariable region VIII; however, a clear frontrunner for clinical use has not yet emerged. Although the AAV VP coding region consists of approximately 720 amino acids, an insertion of 7 amino acids represents only a small fraction. Mutations over larger regions show promise in addressing diverse requirements. Nevertheless, even with 7 random amino acid mutations, the number of variants in the library can impose limitations on bacterial transformation, clone numbers, and vector manufacturing. Furthermore, the number of dosing iterations in a selection model becomes constrained. It is therefore crucial to explore broader mutational landscapes while keeping library sizes manageable. Not all sequences resulting from capsid mutation can effectively express protein, assemble into a particle, and efficiently encapsulate their genome like the wild-type sequence. As the number of mutations in a VP increases, the sequence search space expands exponentially, making exhaustive experimental filtering impossible and decreasing the likelihood of successful capsid packaging. The development of algorithms that establish a correlation between capsid DNA sequences and packaging efficiency is thus of utmost importance [11].
Furthermore, low yield resulting from unfavorable physical and chemical properties can impede clinical and commercial potential. Attaining efficient and targeted transduction of specific cells poses a significant challenge in capsid engineering. To overcome these challenges, researchers have utilized generative algorithms to design and predict the viability of viral vectors, specifically focusing on vector fitness. The most recent approach [12] entails training a binary classifier on a substantial amount of capsid data to ascertain the viability of a given sequence. Random sampling is then conducted within a randomly partitioned mutation subspace; samples classified as viable by the binary classifier are retained, while non-viable samples are discarded. This iterative filtering process is used to select a collection of capsid sequences with potentially viable properties. The capsid sequence collection constructed with this method has a higher proportion of viable sequences than a collection constructed by random mutation. Nevertheless, the ratio of viable sequences is heavily influenced by the performance of the trained binary classifier. Moreover, due to the vast number of possible combinations resulting from sequence mutations (excluding insertions), the combinatorial count reaches 2^seqlen, where seqlen denotes the sequence length. This renders it impractical to complete the filtering process within a reasonable timeframe given the extensive range of choices. Consequently, during implementation, it is imperative to randomly partition a subspace of this space and conduct the filtering there. Considering that the proportion of genuinely viable sequence samples in the overall sequence space is exceedingly low, there is a high probability of overlooking potential sequences when partitioning the subspace.
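For intuition, the classify-then-filter baseline described above can be sketched as a rejection loop. All names here are illustrative stand-ins, not the actual implementation of [12], and `is_viable` stands in for the trained binary classifier.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def filter_library(wild_type, is_viable, n_mutations=7, n_candidates=10000):
    """Sketch of the classify-then-filter baseline: randomly mutate a
    fixed number of positions, keep only candidates the binary
    classifier accepts. The diffusion model in this paper replaces
    this rejection loop with end-to-end generation."""
    kept = []
    for _ in range(n_candidates):
        seq = list(wild_type)
        # Mutate a random subset of positions (substitutions only).
        for pos in random.sample(range(len(seq)), n_mutations):
            seq[pos] = random.choice(AMINO_ACIDS)
        cand = "".join(seq)
        if is_viable(cand):  # rejection step: discard "non-viable" samples
            kept.append(cand)
    return kept
```

Because viable sequences are rare in the full space, the loop wastes most of its samples, which is the inefficiency the end-to-end generative approach avoids.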
Therefore, to address this issue, we merge the classification and filtering stages by introducing an end-to-end, diffusion-based generative model that can generate a substantially higher proportion of viable sequences. Because the model follows the gradient toward viable samples during the generation process, it can sample more potentially viable candidates within a given time budget. In this study, we employed the model trained on publicly available AAV2 data to generate a collection of 38,000 highly diverse AAV2 VP sequences. Of these, 8,000 sequences were randomly chosen and evaluated for their viral selection values through DNase-resistant capsid assembly testing, which revealed a significant improvement in performance compared to traditional methods [12]. Moreover, viable data generated from mutations on wild-type capsids of other serotypes is severely limited, and the synthesis process is both time-consuming and expensive. Directly generating sequences with multiple mutation sites that preserve capsid viability on new serotypes would therefore greatly expedite capsid research. Building on this, we transferred the remaining 30,000 of the 38,000 AAV2-generated sequences to the corresponding region of AAV9. These sequences will be synthesized into a vector library to assess their actual survival rate. Encouragingly, we observed positive results in terms of yield, and when the number of mutation sites reached 9, we achieved a relatively high proportion of viable samples. In conclusion, the advancement of rAAV vectors with improved specificity, transduction efficiency, and delivery mechanisms presents tremendous potential for gene therapy research.
The design and screening of capsid libraries, complemented by generative algorithms, offer a dynamic approach to overcoming the limitations of wild-type capsids, bringing us closer to the development of highly efficient and targeted gene therapy vectors. This study represents a significant progression in the field of viral vector design and functional validation, providing innovative solutions to the challenges encountered in gene therapy.
Experiments
Experiment 1
To verify the ability of the diffusion model in AAV capsid sequence design, we performed the following experiments:
• 1. Experiment on AAV2 HVR VIII. The diffusion model was trained using the data provided in references [12],[13]. After deduplicating the generated sequences and removing samples that overlapped with the training set, a collection of approximately 38,000 samples remained. Of these, 8,000 samples were randomly selected for biological activity testing targeting AAV2.
• 2. Experiment on region VIII of AAV9. Proceeding with the remaining 30,000 samples from the previously generated sequences, activity experiments were conducted targeting AAV9. Specifically, the sequence fragment corresponding to region VIII of AAV9 was directly replaced with the generated sequence, followed by biological activity experiments.
Experiment 2
To explore the mutation fitness of multiple hypervariable regions of the AAV9 serotype, saturated single mutants were constructed in regions IV, V, and VIII.
Results
Sequences generated by the diffusion model
To evaluate the reliability of the sequences generated by the diffusion model, we analyze them from two perspectives: the relationship between the generated sequences and the training set, and experimental validation of the viability of the generated sequences.
The relationship between the generated sequences and the training set: The overlap between the feature space of the generated sequences and that of the training set can be observed in Fig. 1a. Fig. 1b demonstrates the close match between the length distribution of the generated sequences and that of the training set. The model also generates sequence lengths absent from the training set, such as length 27. The distribution of the number of mutated positions in the generated sequences, shown in Fig. 1c, broadly covers all mutation counts present in the training set. Previous approaches to designing AAV capsid sequences limited the design space to the insertion or replacement of one amino acid between adjacent residues. The diffusion model imposes no such restriction and allows the insertion of one or more amino acids at specific positions. Fig. 1d indicates that, compared to the training set, the generated sequences exhibit a higher frequency of continuous insertions on top of the WT sequence. This can be attributed to our data augmentation approach, which incorporates continuous deletions and insertions between mutation positions. The method proposed by Dyno Therapeutics [12] for designing highly active sequences selects a seed sequence at distance k from the WT sequence and then applies a single mutation to produce sequences at distance k+1; this greedy design restricts the diversity of the final sequences. Fig. 1e therefore illustrates the difference in the number of clusters between the diffusion model and the CNN model [12] at varying clustering radii. Greater sequence diversity is indicated by a higher number of clusters.
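The cluster-count diversity measure can be sketched as follows. The greedy assignment and the use of Hamming distance are our assumptions; the paper only specifies that sequences within the radius fall into the same cluster.

```python
def count_clusters(sequences, radius):
    """Greedy sketch of the diversity proxy above: a sequence joins an
    existing cluster if its Hamming distance to that cluster's
    representative is within `radius`; otherwise it seeds a new
    cluster. More clusters at a given radius = more diversity."""
    def hamming(a, b):
        # Positional mismatches, plus the length difference for
        # unequal-length sequences (a simplifying choice).
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    reps = []  # one representative per cluster
    for seq in sequences:
        if not any(hamming(seq, r) <= radius for r in reps):
            reps.append(seq)
    return len(reps)
```

Sweeping `radius` over a range and plotting `count_clusters` per model reproduces the shape of a Fig. 1e-style comparison.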
The sequences designed by the diffusion model demonstrate slightly higher diversity than those designed by the CNN model.
Performance of the generated sequences in terms of viability: For the biological viability experiments on AAV2 region VIII, 8,000 samples were randomly selected from the generated sequences. Fig. 1f illustrates the proportion of viable samples among the sequences generated by the diffusion model at different numbers of mutations. The proportion of viable samples exceeds 90% when the number of mutations ranges from 7 to 20, and is approximately 80% for mutation counts of 4 to 6; further details can be found in Table S1. These results clearly demonstrate the robust capability of our model to generate viable sequences.
Figure 1: a: Distribution of features of sequences generated by the diffusion model compared to the training set. The left panel shows the feature distribution after dimensionality reduction with t-SNE, and the right panel after PCA. Class 1 represents the generated sequences, while classes 2 and 3 represent sequences from the training set. b: Distribution of sequence lengths for sequences generated by the diffusion model compared to the training set. The x-axis represents sequence length and the y-axis represents frequency. Green represents the generated sequences, while the other two colors represent sequences from the training set. c: Distribution of the number of mutation sites for sequences generated by the diffusion model compared to the training set. The x-axis represents the number of mutation sites and the y-axis represents frequency. Green represents the generated sequences, while the other two colors represent sequences from the training set.
d: Distribution of the lengths of consecutive insertions generated by the diffusion model. The x-axis represents the length of a consecutive insertion, and the y-axis represents the proportion of samples. e: Number of clusters for sequences generated by the diffusion model and the CNN model. The x-axis represents the clustering radius (sequences whose mutation counts differ by no more than this radius are considered to be in the same cluster), and the y-axis represents the number of clusters. f: Proportion of viable samples for sequences generated by the diffusion model at different numbers of mutation sites. The x-axis represents the number of mutation sites, and the y-axis represents the proportion of viable samples.
Performance of the diffusion model-generated sequences when transferred to the AAV9 serotype: Previously, the only available approach for sequence design on an AAV capsid serotype with no known viable mutant sequences was random mutation. However, previous findings on AAV2 [13] revealed that the proportion of viable samples drops to nearly zero once the number of mutations exceeds five. Due to the high similarity in capsid sequence between AAV9 and AAV2, we tested the effectiveness of transferring sequences generated by a model trained on AAV2 data to the corresponding positions of wild-type AAV9. The experimental results in Fig. 2a indicate a significant increase in the number of mutations when sequences generated for the corresponding region of AAV2 were transferred to the corresponding region of AAV9, relative to the wild-type AAV9 sequence. Fig. 2b demonstrates that approximately 50% of the transferred sequences remained viable in AAV9 at 9 to 10 mutations.
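Viability curves of this kind (viable fraction per mutation count, as in Fig. 1f and Fig. 2b) correspond to an analysis like the following sketch. The `viable` lookup is a hypothetical interface standing in for the wet-lab assay results.

```python
from collections import defaultdict

def hamming(a, b):
    """Number of differing positions (assumes equal-length sequences)."""
    return sum(x != y for x, y in zip(a, b))

def viability_by_mutation_count(wild_type, sequences, viable):
    """Group generated sequences by their number of mutations relative
    to the wild-type and report the viable fraction per group.

    viable: dict mapping sequence -> bool (assay outcome, hypothetical).
    """
    counts = defaultdict(lambda: [0, 0])  # n_mut -> [n_viable, n_total]
    for seq in sequences:
        n = hamming(wild_type, seq)
        counts[n][1] += 1
        counts[n][0] += int(viable[seq])
    return {n: v / t for n, (v, t) in counts.items()}
```

For sequences with insertions or deletions, an edit-distance variant would be needed in place of plain Hamming distance.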
Conversely, when employing random mutation methods (as referenced from the Dyno data) for sequence mutation in AAV9 without any viability labeling data, the proportion of viable samples was close to zero at 9 mutations. Additional details on the viability proportions can be found in Table S2. These findings indicate that existing data from other serotypes can be leveraged to build generative models for future serotype capsid design, rather than relying solely on random mutation approaches.
Figure 2: a: Proportion of samples generated by the diffusion model at different numbers of mutation sites, where blue represents AAV2 and red represents AAV9. b: Proportion of viable samples for sequences generated by the diffusion model at different numbers of mutation sites. The x-axis represents the number of mutation sites, and the y-axis represents the proportion of viable samples.
Analyzing Mutations in AAV9 Hypervariable Regions
Apart from investigating hypervariable region VIII (HVR VIII), our study focused on the extensively studied regions HVR IV and HVR V within the AAV9 capsid. These regions have been recognized for their ability to tolerate mutations, and our objective was to gain a comprehensive understanding of their mutational landscape. In particular, we focused on amino acid residues 448-476, 488-517, and 562-590, performing single amino acid mutations within these regions. Furthermore, we introduced random amino acid insertions between adjacent residues. To evaluate the viability of these mutations, we calculated activity values by comparing the frequency of the vector to the frequency of the plasmid, as depicted in Fig. 3a. The red line on the graph signifies the activity value of the wild-type sequence.
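The fitness score used in this analysis (log base 2 of the vector frequency divided by the plasmid frequency) can be written directly. The epsilon guard against zero-read frequencies is our addition, not from the paper.

```python
import math

def fitness_score(vector_freq, plasmid_freq, eps=1e-9):
    """Fitness score as described in the text: log2 of the vector
    frequency over the plasmid frequency. Scores above 0 indicate
    enrichment of the variant after selection; below 0, depletion.
    `eps` (our assumption) avoids division by zero for unseen variants."""
    return math.log2((vector_freq + eps) / (plasmid_freq + eps))
```

Applied per variant, these scores produce the heat-map values reported for the HVR regions.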
The analysis indicated that the majority of peak reads were concentrated between 0 and 0.5, implying that sequences within this range were either inviable or exhibited reduced viability. The enriched capsid sequences exhibited a distribution resembling a Gaussian curve. Comparing the activity levels of HVR IV and HVR V with those of HVR VIII, it became apparent that HVR VIII demonstrated both a higher activity peak value and a broader range (1-6 compared to 1-3). The frequency comparisons in Fig. 3b revealed that while HVR V variants exhibited reads in 80% of cases, HVR IV and HVR VIII variants had reads in only 60% of cases. However, HVR IV and HVR VIII included more variants with significantly higher reads than HVR V, with HVR VIII mutants demonstrating the highest read counts. The wild-type sequences are indicated as red dots. Consequently, HVR V encompassed a broader range of variability, while HVR IV and HVR VIII variants exhibited the highest read counts. In Fig. 3c, we present a comprehensive breakdown of fitness scores for the insertion, deletion, and mutation of each amino acid within the selected HVR regions. These scores were calculated as the logarithm base 2 of the vector frequency divided by the plasmid frequency. The score ranges for HVR VIII, HVR IV, and HVR V mutants were approximately [-5, 4], [-8, 2.5], and [-6.5, 2], respectively, substantiating that HVR VIII comprises a subset of variants with superior fitness scores. Importantly, we identified the regions of mutation and insertion tolerance, specifically spanning amino acid residues 588-591, 448-462, and 488-508. Intriguingly, HVR V demonstrated the most extensive tolerant region, indicating the need for further investigation into more substantial mutations within this region. Fig.
3d illustrates the vector and plasmid frequency at each variant level, revealing a distinct separation of the variant population into two clusters, with minimal neutral mutations.
Discussion
In this study, we employed the diffusion model to generate sequences within region VIII of AAV2 and conducted activity experiments on AAV2 capsids. The results revealed that the generated sequences displayed a viability proportion exceeding 90% in the range of 7 to 20 mutations, highlighting the robust capability of our model to generate viable sequences. Additionally, we used the diffusion model to generate sequences within region VIII of AAV2 and performed activity experiments on AAV9 capsids. The generated sequences displayed a viability proportion of approximately 50% at 9 to 10 mutations, notably higher than the viability obtained through random mutation-based sequence design in the absence of viable sequence data. Traditionally, the experimental process for designing capsids with many mutation sites involved initial single-site mutagenesis, followed by rational and random mutagenesis based on the obtained results. This iterative process aimed to generate additional experimental data, ultimately leading to the discovery of a broader range of capsid sequences. Based on the results presented in this paper, our model can be used to design capsids for different AAV serotypes, obviating the need for random mutation or exhaustive single-site mutagenesis and expediting the experimental process of AAV capsid design. However, our model has certain limitations. One notable limitation is that the range of mutation counts in the generated sequences is constrained by the range observed in the training set.
To overcome this limitation, future improvements could involve pre-training the model on an expanded dataset comprising not only AAV capsid sequences but also sequences from other viruses and even non-viral proteins. By enabling the model to generate high-quality protein sequences in general and then carefully fine-tuning on the existing viable AAV samples, we can free the model from the constraints of the AAV training set and exploit the capabilities acquired through pre-training. In Fig. 3c, the heat map suggests that the mutant-tolerant region may extend beyond the range of our tests. Expanding the scope of saturated mutagenesis could provide further valuable insights.
Figure 3: a: Distribution of activity values for single-mutant sequences. The x-axis represents the activity values of the sequences, and the y-axis represents the frequency of sequences within that range. b: Trend of activity values for single-mutant sequences. The x-axis represents each sequence, and the y-axis represents the normalized activity values. c: Enrichment levels of single-mutant sequences at different mutation positions. From top to bottom: enrichment levels of sequences with single mutations in regions VIII, IV, and V of AAV9. The x-axis represents the mutation position in the current region; a fractional position indicates an insertion between two integer positions. The y-axis represents the amino acid type after mutation, where "-" denotes deletion of the amino acid at that position. Black dots mark the positions of the wild-type sequence in that region. d: Enrichment levels of single-mutant sequences. From top to bottom: regions VIII, IV, and V of AAV9. The x-axis represents the frequency of plasmids after sequencing, and the y-axis represents the frequency of viruses after sequencing.
Notably, distinct patterns emerged within each HVR domain. In the aforementioned tolerance regions, amino acids K, R, and C were found to be unfavorable in HVR VIII, likely due to their large size and potential disruption of the capsid structure. In HVR IV (residues 457-475), direct substitutions were better tolerated than insertions, highlighting the importance of residue length and structural rigidity in this region. While we gathered data on multiple mutations for HVR VIII, obtaining similar data for the HVR IV and V regions would be beneficial. Additionally, collecting data on double or multiple mutation/insertion/deletion scenarios could unveil synergistic effects on fitness and introduce new factors that impact vector transduction.

This study marks a significant advancement in capsid engineering, highlighting the correlation between VP sequence mutants and capsid assembly features through an innovative algorithm. By enabling predictions of transduction efficacy and specificity, our research offers valuable insights into capsid design, where transduction function is intricately linked to the structure of the vector capsid, which is in turn determined by the VP sequence. Consequently, it is crucial to establish a robust selection model for data collection and algorithm development to further explore these areas.

Methods

The process of sequence generation using the diffusion model

The task entails generating sequences within the mutation region of the AAV2 capsid, specifically region VIII. We compiled mutation sequences in this region from dyno to form the training set for our model [12]. The dataset comprises 140,000 entries, spanning mutation counts from 1 to 28. A capsid sequence consists of multiple amino acids, each regarded as a token; we therefore use a discrete diffusion model for sequence generation. As depicted in Fig.
4, the implementation diagram of the generative diffusion model [14] shows two processes: diffusion and denoising. The denoising process can be viewed as prediction: starting from a maximum-length noise sequence, the denoising model trained during the diffusion process progressively removes noise from the input, and after T steps the noise sequence is restored to a valid sequence. In the forward (diffusion) direction, noise is incrementally added to a valid sequence step by step, producing sequences x1, x2, . . . and ultimately a fully noisy sequence xT. The purpose of the diffusion process is to teach the neural network the denoising process: because we possess both the actual sequences and their noised counterparts at every step, the network can learn a mapping that restores the original sequence from its noisy version. The complete implementation is divided into four stages: data augmentation, noise addition, model training, and denoising; for details, please refer to the supplementary materials. Once trained, when presented with a fixed-length noise sequence, the model progressively recovers it into a meaningful, valid capsid sequence.

Generation of AAV Capsid Libraries

In developing the AAV capsid libraries, wild-type cap genes were modified through the incorporation of DNA oligonucleotides, whose sequences are provided in Supplementary Table [Insert Table Number]. The 84-mer to 108-mer DNA oligonucleotides encoding the peptides of interest were synthesized by Twist Bioscience in a chip-based primer pool. These oligonucleotides were amplified by PCR using a high-fidelity DNA polymerase (NEB), and the resulting PCR fragments were ligated into the AAV backbone plasmid.
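The diffusion and denoising processes described in the Methods above can be sketched as a token-level corruption loop plus an iterative prediction loop. Everything here is an illustrative assumption rather than the paper's implementation: the uniform-transition corruption, the `forward_noise`/`denoise` helpers, the noise rate `beta`, and the toy predictor that stands in for the trained denoising network:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def forward_noise(seq, t_steps, beta=0.1, rng=random):
    """Forward diffusion: at each of t_steps, every token is independently
    replaced by a uniformly random amino acid with probability beta,
    producing the trajectory x1, ..., xT (a uniform-transition discrete
    diffusion; the paper's actual noise schedule is not specified)."""
    trajectory = []
    seq = list(seq)
    for _ in range(t_steps):
        seq = [rng.choice(AMINO_ACIDS) if rng.random() < beta else aa
               for aa in seq]
        trajectory.append("".join(seq))
    return trajectory

def denoise(noisy_seq, t_steps, predictor):
    """Reverse process: starting from a fully noisy sequence, repeatedly ask
    a trained predictor for a cleaner sequence. `predictor` stands in for
    the learned denoising network."""
    seq = noisy_seq
    for t in reversed(range(t_steps)):
        seq = predictor(seq, t)
    return seq

# Demo with a toy "predictor" that nudges tokens toward a hypothetical
# wild-type motif, standing in for a trained network (illustrative only).
WILD_TYPE = "QAATADVNTQ"
def toy_predictor(seq, t, rng=random.Random(0)):
    return "".join(wt if rng.random() < 0.5 else aa
                   for aa, wt in zip(seq, WILD_TYPE))

xs = forward_noise(WILD_TYPE, t_steps=5, rng=random.Random(0))
restored = denoise(xs[-1], t_steps=5, predictor=toy_predictor)
print(len(restored) == len(WILD_TYPE))  # True: sequence length is preserved
```

In a real training run, the pairs (x_t, x_{t-1}) produced by `forward_noise` would supervise the network that replaces `toy_predictor`.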
The ligation products were transformed into electrocompetent cells (Lucigen) to enhance transformation efficiency. The capsid library plasmids were ultimately prepared using a QIAGEN kit, and the diversity of the capsid library was characterized by next-generation sequencing analysis.

AAV Production Assay and Virus Titer Detection

To produce viral particles, the plasmid libraries were transfected into 293TN cells. The cells were maintained in a sterile environment in a 5% CO2 incubator at 37°C and were typically cultured in High-Glucose Dulbecco's Modified Eagle's Medium (DMEM; Gibco) supplemented with 10% fetal bovine serum (FBS; Gibco) and 1% penicillin/streptomycin (Thermo Fisher). AAV library vectors were produced by transfecting 293TN cells with the library, adenovirus helper, and AAV Rep-ΔCap plasmids using FectoVIR (Polyplus). For transfection, 293TN cells were seeded in 10 cm dishes at a density of 7.2×10^6 cells per dish. After 72 hours, the virus was harvested and purified by iodixanol density gradient ultracentrifugation. AAV titers were quantified using TaqMan-based qPCR.

Next-Generation Sequencing

The cap gene sequences remaining in the purified pool represent viable mutants capable of both capsid assembly and genome packaging. To assess this, the purified capsids were heat-denatured at 98 °C for 10 minutes. The mutant region of the cap gene was then amplified with High Fidelity 2x master mix (NEB) using PCR primers, and Illumina sequencing adapters and indices were integrated in a subsequent PCR step. The amplicons were sequenced with overlapping paired-end reads on an Illumina NextSeq.

Figure 4: The directed graphical model considered in this work" |
| } |
| ] |
| } |