diff --git "a/intro_28K/test_introduction_long_2405.05852v1.json" "b/intro_28K/test_introduction_long_2405.05852v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.05852v1.json" @@ -0,0 +1,110 @@ +{ + "url": "http://arxiv.org/abs/2405.05852v1", + "title": "Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control", + "abstract": "Embodied AI agents require a fine-grained understanding of the physical world\nmediated through visual and language inputs. Such capabilities are difficult to\nlearn solely from task-specific data. This has led to the emergence of\npre-trained vision-language models as a tool for transferring representations\nlearned from internet-scale data to downstream tasks and new domains. However,\ncommonly used contrastively trained representations such as in CLIP have been\nshown to fail at enabling embodied agents to gain a sufficiently fine-grained\nscene understanding -- a capability vital for control. To address this\nshortcoming, we consider representations from pre-trained text-to-image\ndiffusion models, which are explicitly optimized to generate images from text\nprompts and as such, contain text-conditioned representations that reflect\nhighly fine-grained visuo-spatial information. Using pre-trained text-to-image\ndiffusion models, we construct Stable Control Representations which allow\nlearning downstream control policies that generalize to complex, open-ended\nenvironments. We show that policies learned using Stable Control\nRepresentations are competitive with state-of-the-art representation learning\napproaches across a broad range of simulated control settings, encompassing\nchallenging manipulation and navigation tasks. Most notably, we show that\nStable Control Representations enable learning policies that exhibit\nstate-of-the-art performance on OVMM, a difficult open-vocabulary navigation\nbenchmark.", + "authors": "Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, Tim G. J. Rudner", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG", + "cs.RO", + "stat.ML" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "As general-purpose, pre-trained \u201cfoundation\u201d models [Rom+22; Tou+23; Bro+20; Ope23; Liu+23; Ala+22; Che+22] are becoming widely available, a central question in the field of embodied AI has emerged: How can foundation models be used to construct model representations that improve generalization in challenging robotic control tasks [Bro+22; Zit+23; Sha+23]? Robotic control tasks often employ pixel-based visual inputs paired with a language-based goal specification, making vision-language model representations particularly well-suited for this setting. However, while vision-language representations obtained via Contrastive Language-Image Pre-training [CLIP; Rad+21]\u2014a state-of-the-art method\u2014have been suc- cessfully applied to a broad range of computer vision tasks, the use of CLIP representations has been shown to lead to poor downstream performance for robotic control. This short- coming has prompted the development of alternative, control-specific representations for embodied AI [Nai+22; Ma+23] but has left other sources of general-purpose pre-trained vision-language representations\u2014such as text-to-image diffusion models\u2014largely unex- plored for control applications. *Equal Contribution. 
[Figure 1, right panel: bar chart \u201cOverall Representation Comparison\u201d reporting Average Norm. Success for VAE, CLIP, R3M, VC-1, SCR (Ours), and SCR-FT (Ours).] Figure 1: Left: Our paper proposes Stable Control Representations, which uses pre-trained text-to-image diffusion models as a source of language-guided visual representations for downstream policy learning. Right: Stable Control Representations enable learning control policies that achieve all-round competitive performance on a wide range of embodied control tasks, including in domains that require open-vocabulary generalization. Empirical results are provided in Section 5. In this paper, we propose Stable Control Representations (SCR): pre-trained vision-language representations from text-to-image diffusion models that can capture both high and low-level details of a scene [Rom+22; Ho+22]. While diffusion representations have seen success in downstream vision-language tasks, for example, in semantic segmentation [Bar+22; Tia+23; Wan+23], they have\u2014to date\u2014not been used for control. We perform a careful empirical analysis in which we deconstruct pre-trained text-to-image diffusion model representations to understand the impact of different design decisions. In our empirical investigation, we find that diffusion representations can outperform general-purpose models like CLIP [Rad+21] across a wide variety of embodied control tasks despite not being trained for representation learning. This is the case even for purely vision-based tasks and settings that require task understanding through text prompts. A highlight of our results is the finding that diffusion model representations enable better generalization to unseen object categories in a challenging open-vocabulary navigation benchmark [Yen+23] and provide improved interpretability through attention maps [Tan+23]. Our key contributions are as follows: 1. In Section 4, we introduce a multi-step approach for extracting vision-language representations for control from text-to-image diffusion models. We show that these representations are capable of capturing both the abstract high-level and fundamental low-level details of a scene, offering an alternative to models trained specifically for representation learning. 2. In Section 5, we evaluate the representation learning capabilities of diffusion models on a broad range of embodied control tasks, ranging from purely vision-based tasks to problems that require an understanding of tasks through text prompts, thereby showcasing the versatility of diffusion model representations. 3. 
In Section 6, we systematically deconstruct the key features of diffusion model repre- sentations for control, elucidating different aspects of the representation design space, such as the input selection, the aggregation of intermediate features, and the impact of fine-tuning on enhancing performance. We have demonstrated that diffusion models learn versatile representations for control and can help drive progress in embodied AI. The code for our experiments can be accessed at: https://github.com/ykarmesh/stable-control-representations. 2", + "main_content": "We first review prior work on representation learning and diffusion models for control. Representation Learning with Diffusion Models. Diffusion models have received a lot of recent attention as flexible representation learners for computer vision tasks of varying granularity\u2014ranging from key point detection and segmentation [Tia+23; Wan+23] to image classification [YW23; Tra22]. Wang et al. [Wan+23] has shown that intermediate layers of a text-to-image diffusion model encode semantics and depth maps that are recoverable by training probes. These approaches similarly extract representations by considering a moderately noised input, and find that the choice of timestep can vary based on the granularity of prediction required for the task. Yang and Wang [YW23] train a policy to select an optimal diffusion timestep, we simply used a fixed timestep per class of task. Several works [Tia+23; Wan+23; Tan+23] observe that the cross-attention layers that attend over the text and image embeddings encode a lot of the spatial layout associated with an image and therefore focus their method around tuning, post-processing, or extracting information embedded within these layers. Visual Representation Learning for Control. Over the past decade, pre-trained representation learning approaches have been scaled for visual discrimination tasks first, and control tasks more recently. Contrastively pre-trained CLIP [Rad+21] representations were employed for embodied navigation tasks by EmbCLIP [Kha+22]. MAE representations have been used in control tasks by prior works like VC-1 [Maj+23], MVP [Xia+22] and OVRLv2 [Yad+23]. R3M [Nai+22] and Voltron [Kar+23] leverage language supervision to learn visual representations. In contrast, we investigate if powerful text-to-image diffusion models trained for image generation can provide effective representations for control. Diffusion Models for Control. Diffusion models have seen a wide range of uses in control aside from learning representations. These can broadly be categorized into three areas. First, diffusion models have been used as a class of expressive models for learning action distribution for policies [Chi+23; Pea+23; HE+23]; this can help model multimodality and richer action distributions than Gaussians. Second, off-the-shelf diffusion models have been used to augment limited robot demonstration datasets by specifying randomizations for object categories seen in the data through inpainting [KVJ23; Yu+23; Man+22]. Diffusion models trained from scratch have also been shown to be an effective method for data augmentation [Lu+23; Jac+24]. Third, planning can be cast as sequence modeling through diffusion models [Jan+22; Aja+23; Du+23]. 3 Background We briefly review diffusion models and text-conditional image generation, and then describe the control setting we consider in this work. 
3.1 Diffusion Models Diffusion models [SD+15; HJA20] are a class of generative models that learn to iteratively reverse a forward noising process and generate samples from a target data distribution p(x0), starting from pure noise. Given p(x0) and a set of noise levels \u03c3t for t = 1,..., T, a denoising function \u03f5\u03b8(xt, t) is trained on the objective L_DM(\u03b8) = E_{x0, \u03f5, t}[ \u2225\u03f5 \u2212 \u03f5\u03b8(xt, t)\u2225_2^2 ] = E_{x0, \u03f5, t}[ \u2225\u03f5 \u2212 \u03f5\u03b8(x0 + \u03c3t \u00b7 \u03f5, t)\u2225_2^2 ], (3.1) where \u03f5 \u223cN(0, 1), t \u223cUnif(1, T), and x0 \u223cp(x0). To generate a sample x0 during inference, we first sample an initial noise vector xT \u223cN(0, \u03c3T) and then iteratively denoise this sample for t = T,...,1 by sampling from p(xt\u22121|xt), which is a function of \u03f5\u03b8(xt, t). In some settings, we may want to generate samples with a particular property. For example, we may wish to draw samples from a conditional distribution over data points, p(x0|c), where c captures some property of the sample, such as a classification label or a text description [Rom+22; Sah+22]. In these settings, we may additionally train with labels to obtain a conditioned denoiser \u03f5\u03b8(xt, t, c) and generate samples using classifier-free guidance [HS21]. 3.2 Latent Diffusion Models Latent diffusion models [Rom+22] reduce the computational cost of applying diffusion models to high-dimensional data by instead diffusing low-dimensional representations of high-dimensional data. Given an encoder E(\u00b7) and decoder D(\u00b7), (3.1) is modified to operate on latent representations, z0 := E(x0), yielding L_LDM(\u03b8) = E_{x0, c, \u03f5, t}[ \u2225\u03f5 \u2212 \u03f5\u03b8(E(x0) + \u03c3t \u00b7 \u03f5, t, c)\u2225_2^2 ], (3.2) where \u03f5 \u223cN(0, 1), t \u223cUnif(1, T), and x0, c \u223cp(x0, c). After generating a denoised latent representation z0, it can be decoded as x0 = D(z0). A popular instantiation of a conditioned latent diffusion model is the text-to-image Stable Diffusion model [SD; Rom+22]. The SD model is trained on the LAION-2B dataset [Sch+22] and operates in the latent space of a pre-trained VQ-VAE image encoder [ERO21]. The model architecture is shown at the top of Figure 1 and is based on a U-Net [RFB15], with the corresponding conditioning text prompts encoded using a CLIP language encoder [Rad+21]. 3.3 Policy Learning for Control We model our environments as Markov Decision Processes (MDP; Sutton and Barto [SB18]), defined as a tuple M = (S, A, P, R, \u03b3), where S and A denote the state and action spaces respectively, P(s\u2032|s, a) the transition dynamics, R(s, a) the reward function, and \u03b3 \u2208(0, 1) the discount factor. Our goal is to optimize a policy \u03c0(a|s) that maximizes the expected discounted return E_{\u03c0, P}[ \u2211_{t=0}^{\u221e} \u03b3^t R(st, at) ]. In this paper, we consider visual control tasks that may be language-conditioned, that is, states are given by s = [simage, stext], where stext specifies the task. We are interested in pre-trained vision-language representations capable of encoding the state s as f\u03c6(simage, stext). 
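To make the policy-learning interface above concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' released implementation) of a small task-specific policy head operating on a frozen pre-trained vision-language encoder f\u03c6; all module names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class PolicyHead(nn.Module):
    """Small task-specific policy network on top of a frozen encoder f_phi."""
    def __init__(self, rep_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, rep: torch.Tensor) -> torch.Tensor:
        return self.net(rep)

@torch.no_grad()
def encode_state(f_phi: nn.Module, s_image: torch.Tensor, s_text_emb: torch.Tensor) -> torch.Tensor:
    """f_phi is any frozen vision-language encoder returning a flat representation.

    s_image:    (B, 3, H, W) RGB observation.
    s_text_emb: (B, D_text) text embedding of the task specification.
    """
    return f_phi(s_image, s_text_emb)  # (B, D_rep), encoder-specific

# Usage sketch: only the policy head is trained (e.g., by behavior cloning or PPO);
# f_phi stays frozen.
# policy = PolicyHead(rep_dim=2048, action_dim=7)
# action = policy(encode_state(f_phi, s_image, s_text_emb))
```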
This encoded state is then supplied to a downstream, task-specific policy network, which is trained to predict the action at. Our evaluation encompasses both supervised learning and reinforcement learning regimes for training the downstream policies. We train agents through behavior cloning on a small set of demonstrations for the few-shot manipulation tasks we study in Section 5.2. For the indoor navigation tasks we study in Sections 5.3 and 5.4, we use a version of the Proximal Policy Optimization [PPO, Sch+17] algorithm for reinforcement learning. 4 Stable Control Representations In this paper, we consider extracting language-guided visual representations from the open-source Stable Diffusion model. We follow a similar protocol as Wang et al. [Wan+23], Traub [Tra22], and Yang and Wang [YW23]: Given an image-text prompt, s = {simage, stext}, associated with a particular task, we use the SD VQ-VAE model as the encoder E(\u00b7) and partially noise the latents z0 := E(simage) to some diffusion timestep t. We then extract representations from the intermediate outputs of the denoiser \u03f5\u03b8(zt, t, stext). This process is illustrated in Figure 2. We refer to the extracted representations as Stable Control Representations (SCR). We will describe the design space for extracting SCR in the remainder of this section. Figure 2: Extraction of Stable Control Representations from Stable Diffusion. Given an image-text prompt, s = {simage, stext}, we encode and noise the image and feed it into the U-Net together with the language prompt. We may then aggregate features from multiple levels of the downsampling process, as described in Section 4. 4.1 Layer Selection and Aggregation We are interested in evaluating the internal representations from the denoiser network, that is, the U-Net \u03f5\u03b8(\u00b7). The first design choice we consider is which layers of \u03f5\u03b8 to aggregate intermediate outputs from. The U-Net does not have a representational bottleneck, and different layers potentially encode different levels of detail. Trading off size with fidelity, we concatenate the feature maps output from the mid and down-sampling blocks to construct the representation. This results in a representation size comparable to that of the other pre-trained models we study in Section 5. This is shown at the bottom of Figure 2 and we ablate this choice in Section 6.1. Since outputs from different layers may have different spatial dimensions, we bilinearly interpolate them so that they are of a common spatial dimension and can be stacked together. We then pass them through a learnable convolutional layer to reduce the channel dimension before feeding them to downstream policies. 
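The extraction and aggregation pipeline described above can be sketched as follows. This is an assumption-laden illustration in PyTorch: the hook points `unet.down_blocks` and `unet.mid_block` and the VAE call follow the naming of a typical diffusers Stable Diffusion implementation and may differ from the released code, and the noise-schedule values are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_scr_features(vae, unet, text_emb, image, t, grid=32):
    """Illustrative extraction of Stable-Control-style features (sketch).

    vae, unet: frozen Stable Diffusion VAE encoder and U-Net (placeholders).
    text_emb:  (B, L, D) prompt embeddings from the CLIP text encoder.
    image:     (B, 3, H, W) observation scaled to [-1, 1].
    t:         integer diffusion timestep used for partial noising.
    """
    feats, hooks = [], []

    def save(_, __, out):  # forward hook storing block outputs
        feats.append(out[0] if isinstance(out, tuple) else out)

    # Assumed module names; adjust to the actual U-Net implementation.
    for block in list(unet.down_blocks) + [unet.mid_block]:
        hooks.append(block.register_forward_hook(save))

    with torch.no_grad():
        z0 = vae.encode(image).latent_dist.mean            # latents z0 = E(s_image)
        noise = torch.randn_like(z0)
        alpha_t, sigma_t = 0.9, 0.1                        # placeholder schedule values for t
        zt = alpha_t * z0 + sigma_t * noise                # partially noised latents
        unet(zt, t, encoder_hidden_states=text_emb)        # populates `feats` via hooks

    for h in hooks:
        h.remove()

    # Bilinearly resize every feature map to a common grid and stack channels.
    feats = [F.interpolate(f, size=(grid, grid), mode="bilinear", align_corners=False)
             for f in feats]
    return torch.cat(feats, dim=1)                         # (B, C_total, grid, grid)

# A learnable convolution then reduces C_total before the downstream policy, e.g.:
# reduce = nn.Conv2d(c_total, 256, kernel_size=1)
```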
The method used to spatially aggregate pre-trained representations can significantly affect their efficacy in downstream tasks, as we will discuss in Section 6.4. We use the best-performing spatial aggregation method for all the baselines that we re-train in Section 5. 4.2 Diffusion Timestep Selection Next, we consider the choice of extraction timestep t for the denoising network (shown on the left of Figure 2). Recall that the images we observe in control tasks are un-noised (i.e., corresponding to x0), whereas the SD U-Net expects noised latents, corresponding to zt for t \u2208[0, 1000]. The choice of timestep t influences the fidelity of the encoded latents since a higher value means more noising of the inputs. Yang and Wang [YW23] have observed that there are task-dependent optimal timesteps and proposed adaptive selection of t during training, while Xu et al. [Xu+23] have used t = 0 to extract representations from un-noised inputs to do open-vocabulary segmentation. We hypothesize that control tasks that require a detailed spatial scene understanding benefit from fewer diffusion timesteps, corresponding to a later stage in the denoising process. We provide evidence consistent with this hypothesis in Section 6.2. To illustrate the effect of the timestep, we display final denoised images for various t values in different domains in Figure 9. 4.3 Prompt Specification Since text-to-image diffusion models allow conditioning on text, we investigate if we can influence the representations to be more task-specific via this conditioning mechanism. For tasks that come with a text specifier, for example, the sentence \u201cgo to object X\u201d, we simply encode this string and pass it to the U-Net. However, some tasks are purely vision-based and in these settings, we explore whether constructing reasonable text prompts affects downstream policy learning when using the U-Net\u2019s language-guided visual representations. We present this analysis in Section 6.3. Figure 3: The Stable Diffusion model allows us to extract word-level cross-attention maps for any given text prompt. We visualize these maps in a robotic manipulation environment and observe that they are accurate at localizing objects in a scene. Since these maps are category agnostic, downstream policies should become robust to unseen objects at test time. 4.4 Intermediate Attention Map Selection Recent studies [Wan+23; Tan+23] demonstrate that the Stable Diffusion model generates localized attention maps aligned with text during the combined processing of vision and language modalities. Wang et al. [Wan+23] leveraged these word-level attention maps to perform open-domain semantic segmentation. We hypothesize that these maps can also help downstream control policies to generalize to an open vocabulary of object categories by providing helpful intermediate outputs that are category-agnostic. Following Tang et al. [Tan+23], we extract the cross-attention maps between the visual features and the CLIP text embeddings within the U-Net. An example of the word-level attention maps is visualized in Figure 3. We test our hypothesis on an open-domain navigation task in Section 5.4, where we fuse the cross-attention maps with the extracted feature maps from the U-Net. We refer to this variant as SCR-ATTN. 
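Below is a minimal sketch of the fusion step behind the attention-map variant, assuming the word-level cross-attention probabilities have already been captured from the U-Net (e.g., via forward hooks on its cross-attention layers); the tensor shapes and names are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def fuse_attention_maps(features, cross_attn, token_idx, grid=32):
    """Fuse a word-level cross-attention map with extracted U-Net features (sketch).

    features:   (B, C, grid, grid) aggregated visual features.
    cross_attn: (B, heads, hw, num_tokens) attention probabilities from a
                cross-attention layer (captured separately, e.g. with hooks).
    token_idx:  index of the target word (e.g., the goal object category).
    """
    b, heads, hw, _ = cross_attn.shape
    side = int(hw ** 0.5)                                   # assume a square latent grid
    attn = cross_attn[..., token_idx].mean(dim=1)           # (B, hw): average over heads
    attn = attn.reshape(b, 1, side, side)
    attn = F.interpolate(attn, size=(grid, grid), mode="bilinear", align_corners=False)
    attn = attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-8)  # normalize per image
    return torch.cat([features, attn], dim=1)               # extra channel for the policy
```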
4.5 Fine-Tuning on General Robotics Datasets Finally, we consider fine-tuning strategies to better align the base Stable Diffusion model towards generating representations for control. This serves to bridge the domain gap between the diffusion model\u2019s training data (e.g., LAION images) and robotics datasets\u2019 visual inputs (e.g., egocentric tabletop views in manipulation tasks or indoor settings for navigation). Crucially, we do not use any task-specific data for fine-tuning. Instead, we use a small subset of the collection of datasets used by prior works on representation learning for embodied AI [Maj+23; Xia+22]: we use subsets of the EpicKitchens [Dam+18], Something-Something-v2 [SS-v2; Goy+17], and Bridge-v2 [Wal+23] datasets. We adopt the same text-conditioned generation objective as that of the base model for the fine-tuning phase. As is standard, we fine-tune the denoiser U-Net \u03f5\u03b8 but not the VAE encoder or decoder. Image-text pairs are uniformly sampled from the video-text pairs present in these datasets. A possible limitation of this strategy is that text-video aligned pairs (a sequence of frames that correspond to a single language instruction) may define a many-to-one relation for image-text pairs. However, as we see in experiments in which we compare to the base Stable Diffusion model in Section 5, this simple approach to robotics alignment is useful in most cases. Further details related to fine-tuning are provided in Appendix B.1. We refer to the representations from this fine-tuned model as SCR-FT. 6 5 Empirical Evaluation In this work, we evaluate Stable Control Representations (SCR) on an extensive suite of tasks from 6 benchmarks covering few-shot imitation learning for manipulation in Section 5.2, reinforcement learning-based indoor navigation in Sections 5.3 and 5.4, and owing to space limitations, two tasks related to fine-grained visual prediction in Section 5.5. Together, these tasks allow us to comprehensively evaluate whether our extracted representations can encode both high and low-level semantic understanding of a scene to aid downstream policy learning. We begin this section by listing the common baselines used across tasks, followed by the description of individual task setups and results obtained. 5.1 Baselines We compare SCR and its variants (i.e., SCR-FT and SCR-FT-ATTN) to the following prior work in representation learning for control: 1. R3M [Nai+22] pre-trains a ResNet50 encoder on video-language pairs from the Ego4D dataset using time-contrastive video-language alignment learning. 2. MVP [Xia+22] and VC-1 [Maj+23] both pre-train ViT-B/L models with the masked auto-encoding (MAE) objective on egocentric data from Ego4D, Epic-Kitchens, SS-v2, and ImageNet, with VC-1 additionally pre-training on indoor navigation videos. 3. CLIP [Rad+21] trains text and ViT-based image encoders using contrastive learning on web-scale data. 4. Voltron [Kar+23] is a language-driven representation learning method that involves pre-training a ViT-B using MAE and video-captioning objectives on aligned text-video pairs from SS-v2. 5. SD-VAE [Rom+22] is the base VAE encoder used by SD to encode images into latents. To assess how well the vision-only methods would do on tasks with language specification, we concatenate their visual representations with the CLIP text embeddings of the language prompts. 
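For clarity, the sketch below illustrates (under assumed shapes, not the exact evaluation code) how language is supplied to the two families of models compared here: vision-only baselines receive the CLIP text embedding by concatenation, whereas SCR variants already consume the prompt inside the U-Net.

```python
import torch

def language_conditioned_rep(visual_rep, clip_text_emb, uses_text_internally=False):
    """Form the policy input for different representation families (sketch).

    visual_rep:    (B, D_vis) frozen visual representation (e.g., R3M, MVP, VC-1).
    clip_text_emb: (B, D_txt) CLIP text embedding of the language prompt.

    Vision-only baselines receive the prompt by concatenation; SCR variants consume
    the prompt inside the U-Net (some tasks, such as OVMM, additionally concatenate
    the text embedding for every model).
    """
    if uses_text_internally:
        return visual_rep
    return torch.cat([visual_rep, clip_text_emb], dim=-1)
```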
While we are limited by the architecture designs of the released models we are studying, to ensure a more fair comparison we try to match parameter counts as much as we can. We use the ViT-Large (307M parameters) versions of CLIP , MVP , and VC-1 since extracting SCR involves a forward pass through 400M parameters. 5.2 Few-shot Imitation Learning We start by evaluating SCR on commonly studied representation learning benchmarks in few-shot imitation learning. Specifically, our investigation incorporates five commonly studied tasks from Meta-World [Yu+19] (same as CORTEXBENCH [Maj+23]), which includes bin picking, assembly, pick-place, drawer opening, and hammer usage; as well as five tasks from the Franka-Kitchen environments included in the RoboHive suite [Kum+23], which entail tasks such as turning a knob or opening a door. We adhere to the training and evaluation protocols adopted in their respective prior works to ensure our results are directly comparable (detailed further in Appendix C.1). Results. We report the best results of SCR and baselines in Table 1a. On Meta-World, we see that SCR outperforms most prior works, achieving 94.9% success rate. In comparison, VC-1, the visual foundation model for embodied AI and CLIP achieved 92.3 and 90.1% respectively. On Franka-Kitchen, SCR obtains 49.9% success rate, which is much higher than CLIP (36.3%) and again outperforms all other baselines except for R3M. We note that R3M\u2019s sparse representations excel in few-shot manipulation with limited demos but struggle to transfer beyond this setting [Maj+23; Kar+23]. 7 Table 1: Average Success Rate and standard error evaluated across different representations. (a) Meta-World & Franka-Kitchen. Model Meta-World Franka-Kitchen R3M 96.0 \u00b1 1.1 57.6 \u00b1 3.3 CLIP 90.1 \u00b1 3.6 36.3 \u00b1 3.2 VC-1 92.3 \u00b1 2.5 47.5 \u00b1 3.4 Voltron 72.5 \u00b1 5.2 33.5 \u00b1 3.2 SD-VAE 75.5 \u00b1 5.2 43.7 \u00b1 3.1 SCR 94.4 \u00b1 1.9 45.0 \u00b1 3.3 SCR-FT 94.9 \u00b1 2.0 49.9 \u00b1 3.4 (b) ImageNav Model Success R3M 30.6 CLIP-B 52.2 VC-1 70.3 MVP 68.1 SD-VAE 46.6 SCR 73.9 SCR-FT 69.5 (c) OVMM Model Success Oracle 77.6 Detic 36.7 CLIP 38.7 \u00b1 1.7 VC-1 40.6 \u00b1 2.2 SCR 38.7 \u00b1 1.2 SCR-FT 41.9 \u00b1 1.0 SCR-FT-ATTN 43.6 \u00b1 2.1 We see that while the SD-VAE encoder performs competitively on Franka-Kitchen, it achieves a low success rate on Meta-World. This observation allows us to gauge the improved performance of SCR from the base performance gain we may get just from operating in the latent space of this VAE. Additionally, we see that the task-agnostic fine-tuning gives SCR-FT an advantage (4%) over SCR on Franka-Kitchen while making no difference on Meta-World. Note that the other high-performing baselines (R3M and Voltron) have been developed for downstream control usage with training objectives that take temporal information into account, while VC-1 has been trained on a diverse curation of robotics-relevant data. In this context, SCR\u2019s comparable performance shows that generative foundation models hold promise for providing useful representations for control, even with relatively minimal fine-tuning on non-task-specific data. 5.3 Image-Goal Navigation We now assess SCR in more realistic visual environments, surpassing the simple tabletop scenes in manipulation benchmarks. In these complex settings, the representations derived from pre-trained foundational models are particularly effective, benefiting from their large-scale training. 
We study Image-Goal Navigation (ImageNav), an indoor visual navigation task that evaluates an agent\u2019s ability to navigate to the viewpoint of a provided goal image [Zhu+17]. The position reached by the agent must be within a 1-meter distance from the goal image\u2019s camera position. This requires the ability to differentiate between nearby or similar-looking views within a home environment. This task, along with the semantic object navigation task that we study in Section 5.4, allows for a comprehensive evaluation of a representation\u2019s ability to code both semantic and visual appearance-related features in completely novel evaluation environments. We follow the protocol for the ImageNav task used by Majumdar et al. [Maj+23] and input the pre-trained representations to an LSTM-based policy trained with DD-PPO [Wij+19] for 500 million steps on 16 A40 GPUs (further details in Appendix C.3). Given the large training requirements, we only run SCR-FT and directly compare to the results provided in Majumdar et al. [Maj+23]. Results. We evaluate our agent on 4200 episodes in 14 held-out scenes from the Gibson dataset and report the success rate in Table 1b. We find that SCR outperforms MVP and CLIP (ViT-B), and is almost on par with VC-1 (69.5% vs 70.3%), the SOTA visual representation from prior work. We also see that R3M, the best model for few-shot manipulation from Table 1a performs very poorly (30.6%) in this domain, showing its limited transferability to navigation tasks. 5.4 Open Vocabulary Mobile Manipulation We now shift our focus to evaluating how Stable Diffusion\u2019s web-scale training can enhance policy learning in open-ended domains. We consider the Open Vocabulary Mobile 8 Train Val Figure 4: Sample scenes from the Habitat environments for the ImageNav (left) and OVMM (center) tasks. Instances from training and validation datasets of the OVMM object set are shown on the right. Manipulation (OVMM) benchmark [Yen+23] that requires an agent to find, pick up, and place objects in unfamiliar environments. One of the primary challenges here is locating previously unseen object categories in novel scenes (illustrated in Figure 4 (left)). To manage this complex sparse-reward task, existing solutions [Yen+23] divide the problem into sub-tasks and design modular pipelines that use open-vocabulary object detectors such as Detic [Zho+22]. We study a modified version of the Gaze sub-task (detailed in Appendix C.2), which focuses on locating a specified object category for an abstracted grasping action. The task\u2019s success is measured by the agent\u2019s ability to precisely focus on the target object category. This category is provided as an input to the policy through its CLIP text encoder embedding. The evaluation environments cover both novel instances of object categories seen during policy learning, as well as entirely unseen categories. We compare to VC-1, the best model from Section 5.3 and CLIP , since prior work has studied it for openvocab navigation [Kha+22; Maj+22]. We also incorporate a baseline that trains a policy with ground truth object masks, evaluated using either the ground truth or Detic-generated masks (labeled as Oracle/Detic). Results. Table 1c shows SCR matches the performance of CLIP and SCR-FT surpasses VC-1 by 1.3%, beating CLIP and SCR by 3.2%. 
Surprisingly, VC-1\u2019s visual representation does better than CLIP\u2019s image encoder representation, given that the downstream policy has to fuse these with the CLIP text embedding of the target object category. Compared to these baselines, we can see the benefit of providing intermediate outputs in the form of text-aligned attention maps to the downstream policy (+1.7%). These word-level cross-attention maps simultaneously improve policy performance and also aid explainability, allowing us to diagnose successes and failures. Samples of attention maps overlaid on evaluation episode images can be found in Appendix C. Interestingly, the foundation model representations (CLIP, VC-1, SCR) perform better than Detic. While object detections serve as a category-agnostic output that downstream pick-and-place policies can work with, noisy detections can often lead to degraded downstream performance, as we see in this case. Nonetheless, there is still a sizeable gap to \u2018Oracle\u2019 which benefits from ground truth object masks. 5.5 Fine-Grained Visual Prediction In Sections 5.2 to 5.4, our analysis focused on the performance of various representations across an array of control tasks. We now turn our attention to two downstream tasks involving fine-grained visual prediction. The first task, Referring Expressions Grounding, is detailed within this section, while the second task, Grasp Affordance Prediction, is discussed in Appendix A.1. These tasks have been previously examined by Karamcheti et al. [Kar+23] as proxy measures to evaluate the efficacy of representations for control applications. The Referring Expressions Grounding task requires the identification and bounding box prediction of an object in an image based on its textual description. Similar to Karamcheti et al. [Kar+23], we use the OCID-Ref Dataset [Wan+21] for our experiments. We show a sample image-text pair from the dataset to showcase the complexity of the task in Figure 5. The frozen visual representation is concatenated with a text embedding and passed to a 4-layer MLP, which predicts the bounding box coordinates. We report the bounding box accuracy at a 25% Intersection-over-Union (IoU) threshold across different scene clutter levels for SCR variants and baselines in Table 2.
Figure 5: Sample from the OCID-Ref dataset used for the Referring Expressions task (referring expression: \u201cThe lemon on the rear left of the instant_noodles.\u201d).
Table 2: Referring Expression Grounding (accuracy at an IoU threshold of 0.25 with the label).
Model | Average | Maximum clutter | Medium clutter | Minimum clutter
CLIP | 68.1 | 60.3 | 76.6 | 67.0
R3M | 63.3 | 55.3 | 68.3 | 63.3
Voltron | 92.5 | 96.9 | 91.8 | 90.2
VC-1 | 94.6 | 93.7 | 96.5 | 93.7
SD-VAE | 94.3 | 93.2 | 96.3 | 93.4
SCR | 92.9 | 91.1 | 95.9 | 91.8
SCR-FT | 91.8 | 90.1 | 94.8 | 90.8
Results. We see that SCR is tied with Voltron and that VC-1 and SD-VAE perform the best with a 1.5% lead. The better performance of these vision-encoder-only methods highlights that on this task, it is not a challenge for the downstream decoder to learn to associate the visual embeddings with the (CLIP) text encoding of the language specification. Since the training budget is fixed, we observed that some of the runs could potentially improve over extended training. 
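Concretely, the grounding head and metric described above can be sketched as follows; this is an illustrative stand-in for the protocol of Karamcheti et al. [Kar+23] with hypothetical dimensions, not their released code.

```python
import torch
import torch.nn as nn

class BBoxHead(nn.Module):
    """4-layer MLP predicting (x1, y1, x2, y2) from frozen visual + text features."""
    def __init__(self, vis_dim, txt_dim, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, vis_feat, txt_emb):
        return self.mlp(torch.cat([vis_feat, txt_emb], dim=-1))

def iou(box_a, box_b):
    """Intersection-over-union of two (4,) boxes in (x1, y1, x2, y2) format."""
    x1, y1 = torch.maximum(box_a[0], box_b[0]), torch.maximum(box_a[1], box_b[1])
    x2, y2 = torch.minimum(box_a[2], box_b[2]), torch.minimum(box_a[3], box_b[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

# Accuracy at the 25% threshold: mean over the dataset of (iou(pred, label) >= 0.25).
```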
However, we were primarily interested in this task not just to compare the downstream visual prediction performance, but to use it as a testbed for exploring the following two questions: (1) Do the performance differences between the representations we evaluated in Sections 5.2 to 5.4, stem from the absence of fine-grained spatial information encoded within the representations? We refute this claim in Section 6.4, where we present the impact of the representations\u2019 spatial aggregation method on prediction performance. (2) Additionally, we explore to what extent language prompting influences the representations from SCR on language-conditioned tasks in Section 6.3. 6 Deconstructing Stable Control Representations In this section, we deconstruct Stable Control Representations to explain which design choices are most determinative of model robustness and downstream performance. 6.1 Layer Selection We begin our investigation by examining how the performance of SCR is influenced by the selection of layers from which we extract feature maps. We previously chose outputs from the mid and downsampling layers of the U-Net (Figure 2), because their aggregate size closely matches the representation sizes from the ViT-based models (VC-1, MVP , and CLIP). Appendix B.2 details the feature map sizes obtained for all the models we study. Table 3a lists the success rates achieved on the Franka-Kitchen domain when we use different sets of block outputs in SCR. We see that utilizing outputs from multiple layers is instrumental to SCR\u2019s high performance. This finding underscores a broader principle applicable to the design of representations across different models: Leveraging a richer set of features from multi-layer outputs should enhance performance on downstream tasks. However, it is important to acknowledge the practical challenges in applying this strategy to ViT-based models. The high dimensionality of each layer\u2019s patch-wise embeddings (16\u00d716\u00d71024 for ViT-L for images of size 224\u00d7224), may complicate the integration of multi-layer outputs. 10 Table 3: We analyze the impact of varying the denoising timestep, layers selection, and input text prompt for the performance of SCR on the Franka-Kitchen benchmark. We report the mean and standard error over 3 random seeds. (a) Denoising timestep. Timestep Success Rate 0 49.9 \u00b1 3.4 10 48.2 \u00b1 3.1 100 42.0 \u00b1 3.7 110 42.0 \u00b1 3.4 200 35.1 \u00b1 3.2 (b) Layers selection. Layers Success Rate Down[1-3] + Mid 49.9 \u00b1 3.4 Down[1-3] 43.0 \u00b1 3.4 Mid 41.6 \u00b1 3.3 Mid + Up[0] 42.1 \u00b1 3.6 Mid + Up[0-1] 48.1 \u00b1 3.6 (c) Input text prompt. Prompt Type Success Rate None 49.9 \u00b1 3.4 Relevant 49.2 \u00b1 3.5 Irrelevant 48.7 \u00b1 3.3 6.2 Sensitivity to the Noising Timestep Next, we characterize the sensitivity of task performance to the denoising step values chosen during representation extraction on the Franka-Kitchen tasks in Table 3b. We see that the performance across nearby timesteps (0 and 10 or 100 and 110) is similar, and that there is a benefit to doing a coarse grid search up to a reasonable noising level (0 vs 100 vs 200) to get the best value for a given task. 6.3 How is Language Guiding the Representations? Recall that in the OVMM experiments (Section 5.4), we concatenated the target object\u2019s CLIP text embedding to the visual representations before feeding it to the policy. 
For SCR and SCR-FT, we also provided the category as the text prompt to the U-Net, and additionally extracted the generated cross-attention maps for SCR-FT-ATTN. In this subsection, we seek to more closely understand how the text prompts impact the representations in SCR. We first consider the Franka-Kitchen setup from Section 5.2, which includes manipulation tasks that do not originally come with a language specification. We experiment with providing variations of task-relevant and irrelevant prompts during the representation extraction in SCR. Table 3c shows the downstream policy success rates for irrelevant (\u201can elephant in the jungle\u201d) and relevant (\u201ca Franka robot arm opening a microwave door\u201d) prompts, compared to the default setting of not providing a text prompt. We see that providing a prompt does not help with downstream policy performance and may even degrade performance as the prompt gets more irrelevant to the visual context of the input. We now move to the Referring Expressions Grounding task from Section 5.5, which requires grounding language in vision to do bounding box prediction. To study the role of the U-Net in shaping the visual representations guided by the text, we examine different text integration methods to generate SCR representations and compare them to the Voltron baseline in Table 4. We compared the following approaches for providing the task\u2019s text specification to the task decoder (also depicted in Figure 6):
(a) No text input: Exclude the text prompt from both SCR and the task decoder by passing an empty prompt to the U-Net and using only the resulting SCR output for the decoder.
(b) Prompt only: Pass the text prompt only to the U-Net.
(c) Concat only: Concatenate the CLIP embedding of the text prompt with the visual representation, feeding an empty prompt to the U-Net.
(d) Prompt + Concat: Combine \u201cPrompt Only\u201d and \u201cConcat Only\u201d.
(e) Only text encoding: Remove visual representations completely and rely only on CLIP text embeddings.
Figure 6: Illustration of different approaches to providing relevant vision-language inputs to a downstream task-decoder.
Table 4: Ablating text input to SCR on the referring expressions task.
Configuration | Score
(a) No text input | 14.8
(b) Prompt only | 82.7
(c) Concat only | 92.2
(d) Prompt + Concat | 92.9
(e) Only text encoding | 37.5
Investigating the results of (a) and (b) in Table 4, it is evident that incorporating the text prompt into the U-Net significantly enhances accuracy compared to ignoring the text altogether. 
The difference in scores between (b) and (c) indicates that directly providing text embeddings to the decoder improves performance, suggesting that certain crucial aspects of object localization are not fully captured by the representation alone. Comparing (c) to (d), we see that with concatenated text embeddings, further modulation of the visual representations does not provide significant benefits. Finally, the significant decrease in the score for (e) reveals the extent to which the task relies on text-based guesswork. These findings align with both intuition and recent research on controllable generation with diffusion models [Zha+23] that highlights the challenges associated with using long-form text guidance. There are ongoing research efforts, focused on training models with more detailed image descriptions or leveraging approaches to encode and integrate sub-phrases of long texts, that seek to address these challenges. 6.4 The Effect of Spatial Aggregation In this study, we refine the approach for extracting representations by integrating a convolutional layer that downsamples the spatial grid of pre-trained representations. This adjustment, referred to as a \u201ccompression layer\u201d by Yadav et al. [Yad+23], aims to reduce the high channel dimension of pre-trained model outputs without losing spatial details, facilitating more effective input processing by downstream task-specific decoders. We explore the effect of spatial aggregation methods by comparing the convolutional downsampling layer method to multi-headed attention pooling (MAP) used for CLIP embeddings in Karamcheti et al. [Kar+23]. We find that using a compression layer significantly improves performance on the fine-grained visual prediction tasks described in Section 5.5 as reported in Table 5 (columns 3-4). This result challenges the conjecture made in prior work that CLIP representations are limited in their ability to provide accurate low-level spatial information [Kar+23] and emphasizes the critical role of appropriate representation aggregation. Table 5: We ablate the spatial aggregation method for VC-1 and CLIP . On the fine-grained visual prediction tasks, we compare the average precision between using multi-head attention pooling (MAP) and the compression layer. On the Meta-World & Franka-Kitchen tasks, we compare the average success rates (\u00b1 one standard error) between the CLS token and compression layer embeddings. Model Aggregation Refer Exp. Grasp Affordance Meta-World Franka-Kitchen Method Grounding Prediction VC-1 MAP/CLS 93.2 24.7 88.8 \u00b1 2.2 52.0 \u00b1 3.4 VC-1 Compression 94.6 83.9 92.3 \u00b1 2.5 47.5 \u00b1 3.4 CLIP MAP/CLS 68.1 60.3 88.8 \u00b1 3.9 35.3 \u00b1 3.4 CLIP Compression 94.3 72.9 90.1 \u00b1 3.6 36.3 \u00b1 3.2 12 Building on this result, we assess whether better spatial aggregation can improve the performance of CLIP representations on downstream control tasks. We present these results in Table 5 (columns 5-6) for VC-1 and CLIP on the MuJoCo tasks. We see that the compression layer often outperforms the use of CLS token embeddings (by 1-2%), but CLIP representations still fail to match the best-performing models. This result provides evidence that the underperformance of CLIP representations on control tasks is unlikely due to a lack of sufficiently fine-grained visual information. Finally, we note that the compression layer aggregation technique was used for all baselines in Tables 1b and 1c to ensure a strong baseline comparison. 
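To make the comparison in Table 5 concrete, the sketch below contrasts CLS-token pooling with a convolutional compression layer applied to a ViT patch grid; the module is a simplified, hypothetical stand-in for the compression layer of Yadav et al. [Yad+23], with illustrative dimensions.

```python
import torch
import torch.nn as nn

class CompressionLayer(nn.Module):
    """Reduce the channel dimension of a patch grid while keeping spatial structure."""
    def __init__(self, in_ch=1024, out_ch=128, grid=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.grid = grid

    def forward(self, patch_tokens):                        # (B, grid*grid, in_ch)
        b, n, c = patch_tokens.shape
        x = patch_tokens.transpose(1, 2).reshape(b, c, self.grid, self.grid)
        x = self.conv(x)                                    # (B, out_ch, grid/2, grid/2)
        return x.flatten(1)                                 # flat vector for the policy/decoder

# CLS-token pooling, by contrast, discards the patch grid entirely:
# rep_cls = vit_tokens[:, 0]                        # (B, in_ch), only the [CLS] embedding
# rep_cmp = CompressionLayer()(vit_tokens[:, 1:])   # keeps spatial detail
```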
We recommend that future studies adopt this methodology to enable a fairer comparison of representations. 7 Discussion In Section 6, we deconstructed Stable Control Representations and highlighted techniques used in our approach can be applied to other foundational control models. Our analysis in Sections 6.1 and 6.4 revealed that using multi-layer features and appropriate spatial aggregation significantly affects performance, and overlooking these factors can lead to misleading conclusions about the capabilities of previously used representations. Next, our investigation into how language shapes diffusion model representations uncovered nuanced results and showed that text influence on representations does not consistently increase downstream utility. This is particularly evident in tasks where text specification is not required and where training and test environments are congruent, minimizing the need for semantic generalization. In contrast, tasks like referring expressions grounding demonstrate the necessity of direct access to text embeddings for accurate object localization, even when representations are modulated to considerable success. For the OVMM task, we identified a scenario where multimodal alignment is essential and proposed a method to explicitly utilize the latent knowledge of the Stable Diffusion model through text-aligned attention maps, which is not straightforward to do for other multimodal models. Future research could design methods to derive precise text-associated attribution maps for other models. Finally, we contrasted the simplicity of fine-tuning diffusion models with that of the contrastive learning objective required to fine-tune CLIP representations. The former only requires image-text or image-only samples for conditional and unconditional generation objectives, respectively, whereas the latter requires a sophisticated negative label sampling pipeline along with large batch sizes to prevent the model from collapsing to a degenerate solution [Rad+21]. We demonstrated this phenomenon empirically on the Franka-Kitchen environment by fine-tuning CLIP similarly to SCR-FT in Appendix A.2. 8 Conclusion In this paper, we proposed Stable Control Representations, a method for leveraging representations of general-purpose, pre-trained diffusion models for control. We showed that using representations extracted from text-to-image diffusion models for policy learning can improve generalization across a wide range of tasks including manipulation, image-goal and object-goal based navigation, grasp point prediction, and referring expressions grounding. We also demonstrated the interpretability benefits of incorporating attention maps extracted from pre-trained text-to-image diffusion models, which we showed can improve performance and help identify downstream failures of the policy during development. Finally, we discussed ways in which the insights presented in this paper, for example, regarding feature aggregation and fine-tuning, may be applicable to other foundation models used for control. We hope that Stable Control Representations will help advance data-efficient control and enable open-vocabulary generalization in challenging control domains as the capabilities of diffusion models continue to improve. 13 Acknowledgments GG is funded by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1) and Toyota Europe. We gratefully acknowledge donations of computing resources by the Alan Turing Institute. 
The Georgia Tech effort was supported in part by ONR YIP and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.15766v1", + "title": "Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations", + "abstract": "Bayesian flow networks (BFNs) iteratively refine the parameters, instead of\nthe samples in diffusion models (DMs), of distributions at various noise levels\nthrough Bayesian inference. Owing to its differentiable nature, BFNs are\npromising in modeling both continuous and discrete data, while simultaneously\nmaintaining fast sampling capabilities. This paper aims to understand and\nenhance BFNs by connecting them with DMs through stochastic differential\nequations (SDEs). We identify the linear SDEs corresponding to the\nnoise-addition processes in BFNs, demonstrate that BFN's regression losses are\naligned with denoise score matching, and validate the sampler in BFN as a\nfirst-order solver for the respective reverse-time SDE. Based on these findings\nand existing recipes of fast sampling in DMs, we propose specialized solvers\nfor BFNs that markedly surpass the original BFN sampler in terms of sample\nquality with a limited number of function evaluations (e.g., 10) on both image\nand text datasets. Notably, our best sampler achieves an increase in speed of\n5~20 times for free. Our code is available at\nhttps://github.com/ML-GSAI/BFN-Solver.", + "authors": "Kaiwen Xue, Yuhao Zhou, Shen Nie, Xu Min, Xiaolu Zhang, Jun Zhou, Chongxuan Li", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Deep generative models (DGMs) are effective in captur- ing complex data distributions and producing realistic sam- ples, substantially influencing fields such as computer vi- sion (Rombach et al., 2022; Ramesh et al., 2022; Podell et al., 2023) and natural language processing (Brown et al., 2020; OpenAI, 2023). The fundamental challenge in DGMs *Equal contribution 1Gaoling School of AI, Renmin Univer- sity of China, Beijing, China 2Department of Computer Science and Technology, Tsinghua University, Beijing, China 3Ant Group, Hangzhou, China. Correspondence to: Chongxuan Li . Preprint. is to represent a flexible probability distribution that facil- itates effective parameter learning and efficient inference simultaneously, greatly depending on the data (or modality). Autoregressive models (ARMs) (OpenAI, 2023), for exam- ple, excel in modeling sequential and discrete data (e.g., text) but face limitations in the inference speed, which is proportional to the number of variables. Diffusion mod- els (DMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021), on the other hand, better balance genera- tion quality and efficiency with a coarse-to-fine approach. Although considered state-of-the-art in image generation, DMs encounter challenges in handling discrete variables, where score matching algorithms (Hyv\u00a8 arinen, 2005; Vin- cent, 2011) do not directly apply. A new class of generative models, Bayesian Flow Networks (BFNs) (Graves et al., 2023), has been developed recently to overcome these challenges. 
While inspired by DMs, BFNs distinguish themselves by focusing on iteratively refining the parameters (instead of the samples) of a distribution set at different noise levels through Bayesian inference (see Sec. 3 for more details). This strategy enables BFNs to facilitate fast sampling and maintain a continuous nature, even when processing discrete data. With carefully designed regression losses, BFNs have shown considerable promise in both image and language modeling. Notably, BFN is primarily developed based on a message-sending process with minimum communication length, and the exact relation between BFNs and DMs remains unclear. As summarized in Table 1, this paper primarily contributes by unifying BFNs and DMs through stochastic differen- tial equations (SDEs), a pivotal step in understanding their relationship and enhancing BFNs. Initially, by slightly trun- cating the time, we identify linear SDEs corresponding to the noise-adding processes in BFN on both continuous (see Sec. 4) and discrete data (see Sec. 5) and derive the reverse- time SDEs for sampling. Note that the SDEs for discrete data operate on a set of latent variables, which the original BFN formulation marginalizes out, rather than distribution parameters. Furthermore, we demonstrate that, especially on discrete data, BFN\u2019s regression losses align with denois- ing score matching (DSM) (Vincent, 2011) w.r.t. variables 1 arXiv:2404.15766v1 [cs.LG] 24 Apr 2024 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations Table 1. Technical contributions of the paper include the theory on unifying BFN and DM (in red) and new samplers for BFN inspired by the theory (in blue). \u201cSDE-solver1\u201d means a first-order solver for the corresponding SDE and \u201cApprox.\u201d is a shorthand for \u201cApproximate\u201d. NOISE-ADDING PROCESS LOSS FUNCTION ORIGINAL SAMPLER NEW SAMPLERS BFN ON CON- CORRESPONDING SDE EQUIVALENT TO DSM SDE-SOLVER1 BFN-SOLVERS TINUOUS DATA THEOREM 4.1 TRIVIAL PROPOSITION 4.2 ALGOS. 1-3 IN APPENDIX BFN ON DIS- CORRESPONDING SDE EQUIVALENT TO DSM APPROX. SDE-SOLVER1 BFN-SOLVERS CREATE DATA THEOREM 5.1 THEOREM 5.2 PROPOSITION 5.3 ALGOS. 4-7 IN APPENDIX in the corresponding SDE, positioning the trained networks to naturally parameterize the reverse-time SDEs. Finally, the original BFN sampler is proven as an (approximate) first-order solver for the corresponding reverse-time SDE. The explicit connection between BFNs and DMs brings immediate benefits, particularly in applying fast sampling methods (Lu et al., 2022b;c) from DMs to BFNs. We de- rive the corresponding probability flow ordinary differential equations (ODEs) (Song et al., 2021) for BFNs on both con- tinuous and discrete data. We propose high-order solvers (named BFN-Solvers) tailored to BFNs\u2019 special (e.g., semi- linear) structure, for both SDEs and ODEs. Empirically, us- ing the same pre-trained model, our best solver significantly outperforms the original BFN sampler with a few (e.g., 10) number of function evaluations (NFE) under sample quality on both the CIFAR10 and text8 datasets, achieving a 5 \u223c20 times increase in speed for free (see Sec. 6 for details). We believe our discovery offers a rigorous and systematic perspective for analyzing and improving the training and in- ference processes of BFNs, grounded in the existing results of DMs, and may inspire future work as detailed in Sec. 7.", + "main_content": "Score-baesd DMs. 
Built upon the score matching algorithms (Hyv\u00a8 arinen, 2005; Vincent, 2011; Song et al., 2019; Pang et al., 2020), DMs (Sohl-Dickstein et al., 2015; Ho rithms (Hyv arinen, 2005; Vincent, 2011; Song et al., 2019; Pang et al., 2020), DMs (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021) are currently SOTA to model continuous variables (Dhariwal & Nichol, 2021; Chen et al., 2020; Kong et al., 2020; Ho et al., 2022; Singer et al., 2022; Poole et al., 2022; Wang et al., 2023). In particular, largescale text-to-image models (Rombach et al., 2022; Ramesh et al., 2022; Saharia et al., 2022; Bao et al., 2023; Balaji et al., 2023; Xue et al., 2023b; Podell et al., 2023) have made remarkable progress and attracted significant attention. Solvers for DMs. Since Song et al. (2021) introduced the SDE and probability flow ODE formulation of DMs, there have been extensive solvers for both SDE (Ho et al., 2020; Song et al., 2021; Karras et al., 2022; Lu et al., 2022c; Bao et al., 2022b;a; Jolicoeur-Martineau et al., 2021; Xue et al., 2023a; Guo et al., 2023) and ODE (Song et al., 2020; Liu et al., 2022; Lu et al., 2022b;c; Zhang et al., 2022; Karras et al., 2022; Zhao et al., 2023) to improve the sampling process. In particular, ODE samplers are proven effective with limited NFEs while SDE samplers are robust to prior mismatch (Lu et al., 2022a; Nie et al., 2023) and perform better in a sufficient number of NFEs (Lu et al., 2022c). Discrete DMs. Several DMs have been proposed to model discrete data with discrete states (Sohl-Dickstein et al., 2015; Hoogeboom et al., 2021; Austin et al., 2023), depending on a probability transition matrix. It is nontrivial to leverage the features associated with continuous-state DMs, such as guidance and ODE fast sampling. Efforts have been made to define the score in the discrete state (Lou et al., 2023; Meng et al., 2023; Campbell et al., 2022; Sun et al., 2023); however, this remains a challenging endeavor. Other works (Chen et al., 2022; Dieleman et al., 2022; Li et al., 2022) have attempted to identify a continuous equivalent for discrete data and apply continuous DMs, but this may result in information loss during the transformation and greatly rely on the noise schedule (Ye et al., 2023). Mahabadi et al. (2023) defines a continuous-time diffusion process on continuous latent variables but is trained with cross-entropy loss rather than regression loss. Several studies (Richemond et al., 2022; Lou & Ermon, 2023) have attempted to establish the diffusion process using SDEs on discrete data. Specifically, Richemond et al. (2022) introduced an SDE defined on the probability simplex, but it suffers from intractability in high-dimensional space. Lou & Ermon (2023) proposed a diffusion SDE with an additional boundary constraint, which also increases the complexity of discretization (e.g., requiring thresholding in SDE). In comparison, this paper reveals that BFNs applied to discrete data solve a linear SDE and are trained using DSM, which aligns seamlessly with continuous DMs. Consequently, without changing the discrete data, BFNs are significantly simpler and more scalable and efficient than the related work, leveraging advancements in continuous DMs. 3. Background In this section, we present the elementary notations and background of DMs and BFNs. 2 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations 3.1. 
Elementary Notations We use lowercase letters (e.g., t) and boldface lowercase letters (e.g., x) to denote scalars and vectors respectively. Variables indexed by uncountable indices are denoted in the form of functions, (e.g., \u03b2(t) and \u00b5(t)). Given finite indices (e.g. {ti}M i=1), the corresponding variables are denoted with subscripts (e.g., \u00b5i). 3.2. Score-based DMs Score-based DMs (Kingma et al., 2021) characterize the data distribution through a diffusion process {x(t) \u223c N(\u03b1(t)x, \u03c32(t)I)} indexed by a continuous-time variable t \u2208[0, T] according to an It\u02c6 o SDE as follows dx = f(t)x dt + g(t) dw, (1) where w is the standard Wiener process, and f(t) = d log \u03b1(t) dt and g(t) = d\u03c32(t) dt \u22121 2 d log \u03b1(t) dt \u03c32(t) are the drift and diffusion coefficients respectively. For instance, denoising diffusion probabilistic models (Ho et al., 2020) consider a process given by the following SDE: dx = \u22121 2\u03b2(t)x dt + p \u03b2(t) dw, (2) where 0 < \u03b2(t) < 1. Let pt(x) denote the marginal density of x(t). The generative process of score-based DMs is given by a reverse-time SDE (Song et al., 2021; Anderson, 1982) dx = [f(t)x \u2212g(t)2\u2207x log pt(x)] dt + g(t) d \u00af w, (3) where \u00af w is the time-reversed Wiener process. Then the score is parameterized with a time-dependent score-based model \u02c6 s(x, t) and trained with the following denoising score matching loss (Vincent, 2011) LDSM = E x(t),x(0)[\u2225\u02c6 s(x(t), t) \u2212\u2207x log p0t(x(t)|x(0))\u22252 2], (4) where the conditional distribution p0t(x|x(0)) is designed as a Gaussian kernel with a closed form score function \u2207x log p0t(x|x(0)). For fast sampling, Song et al. (2021) introduce the corresponding probability flow ODE of the reverse SDE in Eq. (3) as follows dx = \u0014 f(t)x \u22121 2g(t)2\u2207x log pt(x) \u0015 dt, (5) which produces the same data distribution as the corresponding SDE with infinitesimally small stepsize and enjoys a smaller discretization error with a large stepsize due to its deterministic nature (Kloeden et al., 1992). To solve the ODE in Eq. (5) efficiently, DPM-Solvers (Lu et al., 2022b;c) explicitly leverage the semi-linear property of Eq. (5) and further simplify it to an exponentially weighted integral of the neural network by applying change-of-variable. Consequently, the exact solution of ODE is given by x(t) = \u03b1(t) \u03b1(s)x(s) \u2212\u03b1(t) Z \u03bb(t) \u03bb(s) e\u2212\u03bb\u02c6 e\u03b8(\u02c6 x(\u03bb), \u03bb) d\u03bb, (6) where \u03bb(t) = log(\u03b1(t)/\u03c3(t)) is the half of the log signalnoise ratio. DPM-Solver solves Eq. (6) numerically leading to a small discretization error. Taking DPM-Solver1 as an example, given time steps {ti}n i=1 and initial value x0, a sequence {xi}n i=1 can be solved iteratively as follows: xi = \u03b1(ti) \u03b1(ti\u22121)xi\u22121\u2212\u03c3(ti)(ehi \u22121)\u02c6 \u03f5\u03b8(xi\u22121, ti\u22121)+O(h2 i ), (7) where hi = \u03bb(ti) \u2212\u03bb(ti\u22121). Empirically, DPM-Solver achieves excellent results with a limited number of NFEs and is widely adopted. 3.3. Bayesian Flow Networks Due to the space limit, we briefly present the motivation and formulation of BFNs (Graves et al., 2023) here and please refer to the original paper for more details. Inspired by DMs, BFNs iteratively refine the parameters of a distribution set at different noise levels through Bayesian inference. 
This strategy enables BFNs to facilitate fast sampling and be differentiable on both continuous and discrete data. For D-dimensional continuous data1 x \u2208RD, a continuoustime BFN operates on parameters of a set of Gaussian distributions (of noisy data with different noise levels) with means {\u00b5(t)}1 t=0 and covariance matrices {\u03c1(t)I}1 t=0. Equivalently, \u00b5(t) can also be regarded as a noisy version of x by injecting a Gaussian noise and follows the distribution qF (\u00b5(t)|x, \u03b3(t)) = N(\u03b3(t)x, \u03b3(t)(1 \u2212\u03b3(t))I), (8) where \u03b3(t) = 1 \u2212\u03c32(1\u2212t) 1 is a schedule function2 and \u03c31 \u2208 (0, 1) is a hyperparameter. \u03c1(t) is has a closed form as \u03c1(t) = 1 1\u2212\u03b3(t). Similar to DMs, a BFN on continuous data trains a neural network \u02c6 \u03f5(\u00b5(t), t) to predict the injected Gaussian noise \u03f5 by minimizing the following loss: E qF (\u00b5(t)|x,\u03b3(t)),t\u223cU(0,1) \u2212ln \u03c31 \u03c32t 1 \u2225\u03f5 \u2212\u02c6 \u03f5(\u00b5(t), t)\u22252. (9) Given time steps {ti}n i=0 and i.i.d. noises {ui}n i=0 \u223c N(0, I), the BFN sampler (Graves et al., 2023) iterates 1We say x is a continuous data if its distribution has density w.r.t. the Lebesgue measure. 2For a clear alignment with DMs, we adopt a reverse time notation in this paper as originally used by Graves et al. (2023). Specifically, the schedule \u03b3(t) in our paper is equivalent to \u03b3(1\u2212t) in Graves et al. (2023). We retain the other notational conventions for ease of reading, which do not affect our derivations. 3 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations as follows. \u00b5i = \u2212 \u03b3(ti)\u2212\u03b3(ti\u22121) p \u03b3(ti\u22121)(1\u2212\u03b3(ti\u22121)) \u02c6 \u03f5(\u00b5i\u22121, ti\u22121)+ \u03b3(ti) \u03b3(ti\u22121)\u00b5i\u22121 + s 1\u2212\u03b3(ti) 1\u2212\u03b3(ti\u22121)(\u03b3(ti)\u2212\u03b3(ti\u22121))ui. (10) On D-dimensional discrete data x \u2208{1, \u00b7 \u00b7 \u00b7 , K}D, where K is the number of classes, the BFN operates on parameters \u03b8(t) of the multivariate categorical distributions of noisy data. The distribution of \u03b8 is qF (\u03b8(t)|x, \u03b2(t)) = E q(z(t)|x,\u03b2(t)) \u03b4(\u03b8(t) \u2212softmax(z(t))), where \u03b4(\u00b7) is the Dirac distribution, z(t) is a set of latent variables with Gaussian marginal distributions as q(z(t)|x, \u03b2(t)) = N(\u03b2(t)wx, K\u03b2(t)I), (11) and wx := Kex \u22121, ex := {ex(1), \u00b7 \u00b7 \u00b7 , ex(D)} \u2208RKD where ej is the one-hot vector defined by (ej)k = \u03b4xjk and 1 is a vector of length KD filled with ones. \u03b2(t) = (1 \u2212t)2\u03b21 is a schedule function with a hyperparameter \u03b21 > 0. A BFN on discrete data trains a neural network \u02c6 e(\u03b8(t), t) that predicts the data in a one-hot form given noisy inputs using the following regression loss L\u221e(x)= E qF (\u03b8|x,t),t\u223cU(0,1) K\u03b21t\u2225ex \u2212\u02c6 e(\u03b8(t), t)\u22252. (12) Let {ui}n i=0 \u223cN(0, I) be independent and use \u02c6 es(z(t), t) as a shorthand for \u02c6 e(softmax(z(t)), t). The sampling rule of BFN (Graves et al., 2023) can be written as follows ek \u223cCat(\u02c6 es(zi\u22121, ti\u22121)), (13) zi = zi\u22121 + \u03b1i(Kek \u22121) + p K\u03b1iui, (14) where \u03b1i = \u03b2(ti) \u2212\u03b2(ti\u22121) and Cat represents the one-hot categorical distribution.3 Based on the formulation, BFNs have shown considerable promise in both image and language modeling. 
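For reference, a minimal NumPy sketch of the schedule in Eq. (8) and the original BFN sampler on continuous data in Eq. (10) is given below. This is only a sketch under stated assumptions: eps_hat stands in for the pretrained noise-prediction network, and the step count, the truncation eta, and the initialization scale are illustrative choices rather than values prescribed in the text (the paper only states that the initial distribution is approximated by a zero-mean isotropic Gaussian with small variance).

```python
import numpy as np

def gamma(t, sigma_1=0.001):
    # Schedule from Eq. (8) in the paper's reverse-time notation:
    # gamma(t) = 1 - sigma_1^(2 * (1 - t)), with sigma_1 in (0, 1).
    return 1.0 - sigma_1 ** (2.0 * (1.0 - t))

def bfn_sample_continuous(eps_hat, shape, n_steps=100, sigma_1=0.001,
                          eta=1e-4, init_std=1e-3, seed=0):
    """One run of the original BFN sampler on continuous data (Eq. 10).

    eps_hat(mu, t) is a placeholder for the pretrained noise-prediction
    network; n_steps, eta, and init_std are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    ts = np.linspace(1.0 - eta, 0.0, n_steps + 1)   # t_0 = 1 - eta, ..., t_n = 0
    mu = init_std * rng.standard_normal(shape)       # approximate p_{1 - eta}(mu)
    for i in range(1, n_steps + 1):
        t_prev, t_cur = ts[i - 1], ts[i]
        g_prev, g_cur = gamma(t_prev, sigma_1), gamma(t_cur, sigma_1)
        u = rng.standard_normal(shape)
        # Transcription of the update rule in Eq. (10).
        mu = (-(g_cur - g_prev) / np.sqrt(g_prev * (1.0 - g_prev)) * eps_hat(mu, t_prev)
              + (g_cur / g_prev) * mu
              + np.sqrt((1.0 - g_cur) / (1.0 - g_prev) * (g_cur - g_prev)) * u)
    # At t = 0, mu is distributed around gamma(0) * x with gamma(0) close to 1,
    # so mu itself serves as the sample estimate here.
    return mu
```

The iteration has the same semi-linear structure that the BFN-Solvers derived in Sec. 4 exploit for faster sampling.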
Although inspired by DMs, and the exact relation between BFNs and DMs remains unclear. To this end, this paper unifies them through stochastic differential equations (SDEs) for understanding and accelerating BFNs on both continuous data (see Sec. 4) and discrete data (see Sec. 5). 4. Continuous-time BFN on Continuous Data This section bridges BFNs on continuous data with DMs by establishing a linear SDE for noise modeling in BFN 3Originally, Graves et al. (2023) obtain samples through \u03b8(t), while we present the equivalent form in terms of z(t) for convenience. x ... \u2192\uf0a5 n x Bayesian Flow Network on Continuous Data SDE on parameters of distribution \u03bc(t) \u03bc \u03bc \u03bc \uf0eb \uf0fb \uf0ea \uf0fa \u2212 \uf0ea \uf0fa = + \uf0e9 \uf0f9 \uf067 \uf067 \uf065 t t F t t t t t G t 2 ( )(1 ( )) d ( ) ( ) ( ( ), ) d \u02c6 ( )2 w \u03bc \u03bc = + F t t G t d ( ) d ( )d \u03bc \u2212\uf068 (1 ) \u03bc0 \u03bc(0) \u03bc(0) \u03bc1 \u03bc2 \u03bcn \u03bc \u2212\uf068 (1 ) = \u2212\uf068 t 1 = t 0 Figure 1. Illustration of BFN on continuous data and the corresponding SDEs. The SDEs are defined w.r.t. \u00b5 on time [0, 1 \u2212\u03b7]. (Sec. 4.1), aligning training objectives with DSM (Sec. 4.2), and validating the sampler as discretization of the reversetime SDE (Sec. 4.3). Further, fast samplers are developed based on the recipe in DMs in Sec. 4.4. 4.1. Formulating BFN on Continuous Data as SDEs As illustrated in Fig. 1, we establish that the (truncated) noise-adding process of the continuous-time BFN on continuous data in Eq. (8) uniquely solves a linear SDE, summarized as follows. Theorem 4.1 (Proof in Appendix A.1). Let \u03b7 > 0 be an arbitrarily small constant. The BFN in Eq. (8) at time [0, 1 \u2212\u03b7] is the unique solution of the following linear SDE: d\u00b5 = F(t)\u00b5 dt + G(t) dw. (15) Here w is a standard Wiener process and F(t) = \u03b3\u2032(t) \u03b3(t) = 2 \u03c32(1\u2212t) 1 1 \u2212\u03c32(1\u2212t) 1 ln \u03c31, (16) G(t)2 = \u2212\u03b3\u2032(t) = \u22122\u03c32(1\u2212t) 1 ln \u03c31, (17) where \u03c31 \u2208(0, 1) is the hyperparameter defined in Eq. (8). The time t is truncated by 1 \u2212\u03b7 in Theorem 4.1 for two reasons. On one hand, the reverse-time SDE derived later (see Eq. (20)) is ill-defined at t = 1 since the distribution of \u00b5 collapses to a Dirac distribution whose score tends to infinity. On the other hand, it is convenient to satisfy certain regularity conditions for the uniqueness of the solution in Theorem 4.1, as detailed in the proof. As \u03b7 is small (e.g., 10\u22123 \u223c10\u22125) in our implementation, the effect of truncation is negligible. The exact distribution of \u00b5(1 \u2212\u03b7) is 4 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations unknown, we approximate it by an isotropic Gaussian with a zero mean and small variance (see details in Sec. 6). Here a linear SDE also applies to the latent variable z, a linear transformation of \u00b5 in Eq. (8). The choice of \u00b5 in Theorem 4.1 aligns with the sampling process in BFN (Graves et al., 2023), facilitating a later analysis in Sec. 4.3. The finding in Theorem 4.1 directly connects to DMs (Song et al., 2021; Kingma et al., 2021), which are formulated as an SDE in Eq. (1) with a different noise schedule. We believe this may inspire new classes of BFNs and leave a systematic comparison of the schedules for future work. Similar to Eq. (3), the linear SDE in Eq. 
(15) has an associated reverse-time SDE (Anderson, 1982; Song et al., 2021) in [0, 1 \u2212\u03b7] for generative modeling: d\u00b5 = [F(t)\u00b5 \u2212G(t)2\u2207\u00b5 log pt(\u00b5)] dt + G(t) d \u00af w, (18) where \u2207\u00b5 log pt(\u00b5) is the (time-conditional) score function to be estimated and \u00af w is the time-reversed Wiener process. 4.2. Training as Parameterizing the Reverse-time SDE The continuous-time BFN on continuous data trains a neural network to optimize the mean square error in Eq. (9), which directly aligns with the widely employed DSM loss in Eq. (4). In other words, BFN equivalently parameterizes the reverse-time SDE in Eq. (18) by estimating the time-conditional score function as \u02c6 s(\u00b5(t), t) = \u2212 1 p \u03b3(t)(1 \u2212\u03b3(t)) \u02c6 \u03f5(\u00b5(t), t), (19) where \u02c6 s(\u00b5(t), t) and \u02c6 \u03f5(\u00b5(t), t) denote the estimate of the score function and the network trained by BFN, respectively, and \u03b3(t) follows Eq. (8). 4.3. Sampling as Discretizing the Reverse-time SDE Plugging Eq. (19) into Eq. (18), we get a parameterized reverse-time SDE in [0, 1 \u2212\u03b7] for sampling as follows d\u00b5= \" F(t)\u00b5(t)+ G(t)2\u02c6 \u03f5(\u00b5(t), t) p \u03b3(t)(1 \u2212\u03b3(t)) # dt+G(t) d \u00af w, (20) which is ill-defined at t = 1 because limt\u21921 \u03b3(t) = 1. Interestingly, even without an explicit SDE formulation, the sampler proposed in the original BFN paper discretizes the reverse-time SDE, as characterized in the following Proposition 4.2. Proposition 4.2 (Proof in Appendix A.2). The BFN sampler in Eq. (10) is a first-order discretization of an equivalent form of the parameterized reverse-time SDE in Eq. (20). 4.4. Probability Flow ODE and Faster Sampling Establishing an explicit connection between BFNs and DMs through SDEs yields an immediate and significant benefit: the fast sampling recipe from DMs directly applies to BFN. Formally, according to Eq. (5), we obtain the following equivalent probability flow ODE of the parameterized reverse-time SDE of Eq. (20): d\u00b5= \" F(t)\u00b5(t)+ G(t)2 2 p \u03b3(t)(1 \u2212\u03b3(t)) \u02c6 \u03f5(\u00b5(t), t) # dt. (21) Further, we propose BFN-Solver, a customized ODE solver for BFN in analogy to DPM-Solver in Eq. (7). As detailed in Appendix A.3, we integrate all linear terms and apply a change of variable from t to \u03bb(t) = 1 2 log \u03b3(t) 1\u2212\u03b3(t) to obtain a simplified exact solution of Eq. (21) \u00b5(t)= \u03b3(t) \u03b3(s)\u00b5(s)\u2212\u03b3(t) Z \u03bb(t) \u03bb(s) e\u2212\u03bb\u02c6 \u03f5(\u00b5(t\u03bb(\u03bb)), t\u03bb(\u03bb)) d\u03bb, (22) where t\u03bb(\u00b7) is the inverse function of \u03bb(t) for 0 \u2264t < s < 1 \u2212\u03b7. Eq. (22) differs from Eq. (6) only in certain coefficients. Given an initial value \u00b50 and time steps {ti}n i=0 from t0 = 1\u2212\u03b7 to tn = 0, BFN-Solver1 is derived similarly to Eq. (7) and given by \u00b5i = \u2212 p \u03b3(ti)(1 \u2212\u03b3(ti))(ehi \u22121)\u02c6 \u03f5(\u00b5i\u22121, ti\u22121) + \u03b3(ti) \u03b3(ti\u22121)\u00b5i\u22121, (23) where hi = \u03bb(ti) \u2212\u03bb(ti\u22121). We refer the readers to Appendix A.3 for higher-order solvers of both ODE and SDE.4 Empirically, as presented in Sec. 6.2, BFN-Solvers of different orders significantly outperform the original BFN sampler with a limited number of NFEs based on the same model. 5. Continuous-time BFN on Discrete Data In a manner akin to Sec. 
4, this section unifies BFNs on discrete data and (continuous) DMs through SDEs and develops fast samplers for BFNs. However, this adaptation to discrete data is far from straightforward, as it involves SDEs operating on latent variables z \u2014 a significant departure from the original BFN formulation that marginalizes out these variables, rather than updating the distribution parameters \u03b8. Consequently, it is surprising that the training and 4A more straightforward way to get BFN-Solver on continuous data is to treat BFN as a DM with a special noise schedule \u03b1t = \u03b3t and \u03c32 t = \u03b3t(1 \u2212\u03b3t). However, it is infeasible on discrete data. Therefore, we use a slightly complex yet coherent way to derive BFN-Solver throughout the paper. 5 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations \u03b80 z0 \u03b8n x \u03b81 z1 \u03b82 ... zn \u2192\uf0a5 n z z w = + H t t t L t d ( ) ( )d ( )d z z e w \uf0eb \uf0fb \uf0ea \uf0fa = \u2212 \u2212 + \uf0e9 \uf0f9 K L t t t t L t s d ( ) ( ( ), ) d ( )d \u02c6 1 2 z(0) z(0) Bayesian Flow Network on Discrete Data SDE on latent variable z(t) = \u2212\uf068 t 1 = t 0 z(1\u2212\uf068) x \u03b8(0) z(1\u2212\uf068) z2 Figure 2. Illustration of BFN on discrete data and the corresponding SDEs. The SDEs are defined w.r.t. the latent variables z, which are marginalized in BFN, on time [0, 1 \u2212\u03b7]. sampling of BFN on discrete data still connect to the SDE formulation on z. 5.1. Formulating BFN on Discrete Data as SDEs Similar to Theorem 4.1, the truncated noise-adding process of the continuous-time BFN on discrete data in Eq. (11) uniquely solves a linear SDE, summarized as follows. Theorem 5.1 (Proof in Appendix B.1). Let \u03b7 > 0 be an arbitrarily small constant. The BFN in Eq. (11) with t \u2208 [0, 1 \u2212\u03b7] is the unique solution of the following linear SDE: dz = H(t)z dt + L(t) dw. (24) Here w is a standard Wiener process and H(t) = \u03b2\u2032(t) \u03b2(t) = \u2212 2 1 \u2212t, (25) L(t)2 = \u2212K\u03b2\u2032(t) = 2K\u03b21(1 \u2212t), (26) where K and \u03b21 are hyperparameters defined in Eq. (11). The rationale for truncation of t and the way to deal with \u03b7 and z(1 \u2212\u03b7) is similar to the continuous data case, detailed in the proof and Sec. 6.1, respectively. Notably, Theorem 5.1 characterizes the dynamics of z instead of \u03b8, as illustrated in Fig. 2. Indeed, the dynamics of \u03b8 do not correspond to a linear SDE as \u03b8 is a nonlinear transformation of z as shown in Eq. (11). It is implied that the original sampling process in Eq. (14) does not directly discretize the linear SDE, as detailed in Sec. 5.3. The associated reverse-time SDE (Song et al., 2021) for the linear SDE in Eq. (24) in [0, 1 \u2212\u03b7] is given by dz = [H(t)z \u2212L(t)2\u2207z log pt(z)] dt + L(t) d \u00af w, (27) where \u2207z log pt(z) is the unknown score function, defined on z instead of \u03b8. 5.2. Training as Parameterizing the Reverse-time SDE It is nontrivial to see yet can be proven that the training objective of the continuous-time BFN on discrete data in Eq. (12) is a reparameterized form of DSM (Vincent, 2011) w.r.t. z, as summarized in the following Theorem 5.2. Theorem 5.2 (Proof in Appendix B.2). Minimizing the continuous-time loss of BFN on discrete data in Eq. (12) is equivalent to minimizing the DSM loss in Eq. (4). 
Besides, the corresponding estimate of the score function is given by \u02c6 s(z(t), t) = \u2212z(t) K\u03b2(t) + \u02c6 es(z(t), t) \u22121 K , (28) where \u02c6 es(z(t), t) is the network trained by BFN. Theorem 5.1 and Theorem 5.2 distinct BFNs from existing discrete DMs. Specifically, BFNs applied to discrete data solve a linear SDE and are trained using DSM, which aligns seamlessly with continuous DMs. Consequently, without changing the discrete data, BFNs are significantly simpler and more scalable and efficient than the related work, leveraging advancements in continuous DMs. We provide a comprehensive review and discussion in Sec. 2. 5.3. Sampling as Discretizing the Reverse-time SDE Plugging Eq. (28) into Eq. (27), we get a parameterized reverse-time SDE in [0, 1 \u2212\u03b7] for sampling as follows (29) dz = \u2212L(t)2 \u0014 \u02c6 es(z(t), t) \u22121 K \u0015 dt + L(t) d \u00af w. The following Proposition 4.2 suggests that the sampler proposed in the original BFN paper approximately discretizes the parameterized reverse-time SDE. Proposition 5.3 (Proof in Appendix B.3). If the categorical sampling step in the BFN sampler on discrete data (i.e., Eq. (13)) is omitted, then it is a first-order discretization of the parameterized reverse-time SDE in Eq. (29). The role of the categorical sampling step is still unclear in theory. However, experiments in Fig. 6 (Sec. 6.3) reveal that removing the categorical sampling step leads to consistently better performance in fewer than 50 NFEs, and almost the same results otherwise. 6 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations 5.4. Probability Flow ODE and Faster Sampling Similar to the continuous case, the equivalent probability flow ODE of the parameterized reverse-time SDE on discrete data in Eq. (29) is dz = \u001a \u2212 1 1 \u2212tz(t) \u2212\u03b21(1 \u2212t)[K \u02c6 es(z(t), t) \u22121] \u001b dt. (30) For 0 \u2264t < s < 1 \u2212\u03b7, its solution can be written as (31) z(t) = 1 \u2212t 1 \u2212sz(s) + \u03b21(1 \u2212t)(t \u2212s) \u2212K\u03b21(1 \u2212t) Z t s \u02c6 es(z(\u03c4), \u03c4) d\u03c4. Again, we propose BFN-Solver on discrete data, and the first-order version is given by zi =\u03b21(1 \u2212ti)(ti \u2212ti\u22121)(1 \u2212K \u02c6 es(z(ti\u22121), ti\u22121)) + 1 \u2212ti 1 \u2212ti\u22121 zi\u22121. (32) Notably, we map the latent zM to the distribution parameter \u03b8M = softmax(zM) at the last step to obtain the final samples. We refer the readers to Appendix B.5 for higher-order solvers of both ODE and SDE. As presented in Sec. 6.3, the conclusion on the improvement of BFN-Solvers over the original BFN sampler remains the same on discrete data. 6. Experiments We present the experimental setups in Sec. 6.1. We validate the proposed BFN-Solvers on continuous and discrete data, in Sec. 6.2 and Sec. 6.3 respectively. 6.1. Experimental Settings Model. We employed the pre-trained models provided by the BFN (Graves et al., 2023) in all experiments for fairness. Datasets. For continuous data, the model is trained on the CIFAR-10 (Krizhevsky et al., 2009) dataset which contain 50K training images. For discrete data, the model is trained on the text8 (Mahoney, 2011) dataset which contains 90M consecutive characters, each character is a lower Latin letter \u2018a\u2019-\u2018z\u2019 or the \u2018 \u2019 whitespace token, giving a class number of 27. Each sample is a sequence of 256 characters. Metrics. 
For continuous data, we adopt the widely used FID (Heusel et al., 2017) as the sample quality metric. We compute the FID metric on 10K generated samples for efficiency. For discrete data, there is no widely adopted samplebased metric comparable to FID in image modeling. Given our reliance on a simple character-level text dataset, we found that spelling accuracy (SA) is a straightforward yet effective metric for measuring the quality of text generation. Specifically, SA is defined as the ratio of correctly spelled words to the total words in the entire generated sequence, which is segmented by spaces. In each experiment, we collect 1,000 generated samples to calculate the metric. Additionally. we conducted a user study for text generation quality evaluation. For the user study, there are 100 questions for each one vs. one comparison (e.g., BFN vs. BFN-Solver1). In each question, participants were presented with two sentences randomly generated from two methods. Participants were instructed to choose a sentence of higher quality, which is known as the two-alternative forced choice methodology (Kawar et al., 2023; Bar-Tal et al., 2022; Park et al., 2020). Please see Appendix C for more experimental details. Truncation. \u03b7 is a manually tuned hyperparameter specified in each experiment. For both p1\u2212\u03b7(\u00b5) and p1\u2212\u03b7(z), we found an isotropic Gaussian with zero mean and a calculated variance works well. We provide preliminary analyses of the variance in Appendix C.1. 6.2. Fast Sampling on Continuous Data We compare our proposed fast sampling methods with the original BFN continuous sampler in this section. As illustrated in Fig. 3, with the NFE less than 100, BFNSolver++1, BFN-Solver++2, and SDE-BFN-Solver++2 significantly outperform the BFN baseline. Moreover, BFN-Solver++2 achieves better results compared to BFNSolver++1. When the NFE is higher (e.g., more than 500), our observations reveal that SDE-based samplers exhibit slightly better performance over ODE-based samplers, which aligns with the diffusion model (Song et al., 2020; Karras et al., 2022; Nie et al., 2023). Please see Appendix D.1 and Appendix D.5 for more quantitative results and randomly generated images, respectively. We slightly tune the hyperparameter \u03b7 for our methods on different NFEs to get the best results, as detailed in Appendix D.3. 6.3. Fast Sampling on Discrete Data We compare our proposed fast sampling methods with the origin BFN discrete sampler in this section. As illustrated in Fig. 4, with the NFE less than 30, BFNSolver1, BFN-Solver2, and SDE-BFN-Solver2 significantly outperform the BFN baseline. Moreover, BFN-Solver2 and SDE-BFN-Solver2 achieve better results compared to BFNSolver1, agreeing with the continuous case. We provide a preliminary user study in Fig. 5 with 10 NFEs and the results align with Fig. 4. When the NFE is higher (e.g., more than 7 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations 10 20 50 100 200 500 1000 NFE 0 50 100 150 200 250 FID BFN sampler SDE-BFN-Solver++2 (ours) BFN-Solver++1 (ours) BFN-Solver++2 (ours) Figure 3. Fast sampling results on the continuous CIFAR-10 dataset. Sampling quality is measured by FID \u2193, varying the NFE. 10 20 30 50 100 200 1000 NFE 70 75 80 85 Spelling Accuracy BFN sampler SDE-BFN-Solver2 (ours) BFN-Solver1 (ours) BFN-Solver2 (ours) Figure 4. Fast sampling results on the discrete text8 dataset. Sampling quality is measured by SA \u2191, varying the NFE. 
500), we observe that SDE-based samplers exhibit slightly better performance than ODE-based samplers, which aligns with existing results in DMs. Please see Appendix D.2 and Appendix D.5 for more quantitative results and randomly generated texts. We find that the hyperparameter \u03b7 = 0.001 is sufficient for all NFEs for BFN-Solvers to get excellent results. We refer the readers to Appendix D.4 for more details. Finally, we perform an ablation of the original BFN solver in Fig. 6 and find that an exact solver that just removes the categorical sampling step from the BFN sampler works better, conforming to our theory. Figure 5. User study results on the discrete text8 dataset with 10 NFE. We present the preference rates (with 95% confidence intervals) of BFN-Solver1 and BFN-Solver2 over BFN baseline. 10 20 30 50 100 200 1000 NFE 70 75 80 85 Spelling Accuracy BFN sampler BFN Sampler without CS Figure 6. Ablation of the categorical sampling (CS) step in the BFN sampler on the text8 dataset. Sampling quality is measured by SA \u2191, varying the NFE. 7. Conclusion We unify BFNs and DMs by identifying the linear SDEs pertinent to the noise-addition processes in BFNs, illustrating that BFN\u2019s regression losses correspond with denoise score matching, and validating the sampler in BFN as an effective first-order solver for the related reverse-time SDE. Motivated by these insights, we implement fast sampling techniques from DMs in BFNs, yielding promising results. Building upon the established results of DMs, this paper establishes a principled and systematic approach to the analysis and enhancement of BFNs and future work includes the development of predictor-corrector samplers (Song et al., 2020; Zhao et al., 2023), improved methods for likelihood evaluation (Bao et al., 2022b;a), and novel training strategies to refine (Karras et al., 2022) and scale BFNs (Rombach et al., 2022). 8 Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations Limitations of the paper include the scale of the datasets and evaluation metrics. In our experiment, for a fair comparison, we leverage the pre-trained models of BFNs, which are all trained on small datasets. Further, the samplers cannot be directly used in likelihood evaluation and we mainly employ the FID and spelling accuracy as surrogates for the sample quality, potentially introducing bias. Hopefully, these limitations can be solved by scaling up BFNs to common benchmarks, as mentioned in future work. Impact Statements This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here." + }, + { + "url": "http://arxiv.org/abs/2404.08949v1", + "title": "Multimodal Cross-Document Event Coreference Resolution Using Linear Semantic Transfer and Mixed-Modality Ensembles", + "abstract": "Event coreference resolution (ECR) is the task of determining whether\ndistinct mentions of events within a multi-document corpus are actually linked\nto the same underlying occurrence. Images of the events can help facilitate\nresolution when language is ambiguous. Here, we propose a multimodal\ncross-document event coreference resolution method that integrates visual and\ntextual cues with a simple linear map between vision and language models. 
As\nexisting ECR benchmark datasets rarely provide images for all event mentions,\nwe augment the popular ECB+ dataset with event-centric images scraped from the\ninternet and generated using image diffusion models. We establish three methods\nthat incorporate images and text for coreference: 1) a standard fused model\nwith finetuning, 2) a novel linear mapping method without finetuning and 3) an\nensembling approach based on splitting mention pairs by semantic and\ndiscourse-level difficulty. We evaluate on 2 datasets: the augmented ECB+, and\nAIDA Phase 1. Our ensemble systems using cross-modal linear mapping establish\nan upper limit (91.9 CoNLL F1) on ECB+ ECR performance given the preprocessing\nassumptions used, and establish a novel baseline on AIDA Phase 1. Our results\ndemonstrate the utility of multimodal information in ECR for certain\nchallenging coreference problems, and highlight a need for more multimodal\nresources in the coreference resolution space.", + "authors": "Abhijnan Nath, Huma Jamil, Shafiuddin Rehan Ahmed, George Baker, Rahul Ghosh, James H. Martin, Nathaniel Blanchard, Nikhil Krishnaswamy", + "published": "2024-04-13", + "updated": "2024-04-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Imagine two newspaper articles about the same event. The articles come from different sources with radically different perspectives and report the event with very different language. They use dif- ferent action verbs, include ambiguous pronominal references, describe causes differently, and even attribute different intentionality to the event\u2014for ex- ample, \u201cBuzina, 45, was shot dead\u201d vs. \u201cHe was murdered\u201d. An automated system may be unable to identify from the text alone that the two events de- scribed are actually the same. This is the problem of cross-document coreference resolution (CDCR) of events: inferring that two event mentions in dif- ferent documents actually refer to the same thing. Now imagine that each of the articles is accompa- nied by an image. While not identical, they clearly contain the same people, entities, and actions. This would be strong evidence to a reader that the two events described in the different articles are in fact the same. Purely text-based approaches to CDCR, while built on sophisticated Transformer-based language models (LMs) (Vaswani et al., 2017; Beltagy et al., 2020), are blind to such potentially useful multi- modal information. This problem is exacerbated by the relative dearth of multimodal information in- cluded in event CDCR corpora. *This work conducted at Colorado State University. In this work, we propose a novel multimodal event CDCR method. Where current state-of-the- art coreference approaches that consider visual information demonstrate the utility of a multimodal approach, they do so at a high computational cost (Guo et al., 2022). Furthermore, they typically focus on linking objects rather than events. We ad- dress the sparsity of multimodal data in benchmark datasets by retrieving images associated with the metadata of event mentions, and generating event- centric images with state-of-the-art image diffusion models. 
We perform coreference experiments in a fully multimodal setting and rigorously test the contribution of multimodal information to CDCR.1 In total, our novel contributions include: \u2022 A novel approach to multimodal cross docu- ment event coreference (MM-CDCR) including a low-compute, bidirectional linear semantic transfer technique (Lin-Sem) based on se- mantic equivalence across modalities; \u2022 A model ensemble hybrid approach that ap- plies text-only or multimodal methods to differ- ent categories of mention pairs based on their semantic and discourse-level difficulty; \u2022 A novel method for enriching text-only coref- erence datasets (e.g., ECB+ (Cybulska and Vossen, 2014)) with event-centric images us- ing generative image diffusion; 1Our code can be accessed at https://github. com/csu-signal/multimodal-coreference. arXiv:2404.08949v1 [cs.CL] 13 Apr 2024 \u2022 A new benchmark result on the AIDA Phase 1 dataset (Tracey et al., 2022), an explicitly multi- modal event CDCR dataset. To our knowledge, this is the first evaluation performed over this dataset.", + "main_content": "Cross-Document Event Coreference Resolution Most previous works on CDCR have been limited to text-only (Eisenstein and Davis, 2006; Chen et al., 2011). Early works (e.g., Humphreys et al. (1997); Bagga and Baldwin (1999); Chen and Ji (2009)) used supervised training over features like part-ofspeech tags, phrasal-matching, or aligned arguments. While Kenyon-Dean et al. (2018) enhanced lexical features with \u201cstatic\u201d embeddings like contextual word2vec (Mikolov et al., 2013), most recent works (Yu et al., 2022; Caciularu et al., 2021; Yadav et al., 2021; Nath et al., 2023) uses latent representations from Transformer-based encoders to compute pairwise mention scores of possible antecedents. Works such as Held et al. (2021) and Ahmed et al. (2023) overcome the quadratic complexity of the mention pair architecture by pruning negative pairs using discourse-coherence and lexical similarity (synonymous lemma pairs) respectively. We use Ahmed et al. (2023)\u2019s \u201coracle\u201d assumption for our pruning procedure. Multimodal Frameworks Most previous works in multimodal vision-language processing (e.g., (Le et al., 2019; Tan and Bansal, 2019)) have been compute-intensive, using separate encoders for visual and linguistic inputs, and auxiliary encoders for cross-modal or query-related modeling. Highperforming but high-compute models like ViLBERT (Lu et al., 2019) concatenate embeddings from different modalities before fine-tuning. Works such as Li et al. (2020), Tong et al. (2020), and Chen et al. (2021) leverage a common representation space for coreference-adjacent tasks like event extraction and detection in images and videos, but emphasize finding relations within a document or a topic. Works specific to multi-modal entity coreference resolution such as Guo et al. (2022) treat it largely as a grounding problem, using graph networks to link references in dialogue to items in a scene before feeding representations into BERT-style encoders to resolve scene-based visual-linguistic coreference chains. Our work is multimodal, cross-document, and event focused, and performs faster with the aid of linear mappings. Linear Projection Across Neural Networks Previous research within computer vision has explored using affine (McNeely-White et al., 2020, 2022; Jamil et al., 2023) as well as non-linear (Lenc and Vedaldi, 2015) transformations to explore equivalence of unimodal function approximators like CNNs. 
They show that two distinct, highly non-linear neural networks can learn similar properties, transferable up to a linear projection, while retaining near-equivalent performance on tasks like image classification or facial recognition. Similar techniques using affine mappings were reported by Merullo et al. (2023), who explore the equivalence of such approximators across modalities while also casting new light on high-fidelity transfer of non-linguistic features into a generative LLM via unidirectional linear projections from image spaces. Nath et al. (2022) demonstrated that linear mappings also preserve information across language models. Ghaffari and Krishnaswamy (2023) showed the same between language models and neural networks trained over tabular data. We use a low-compute, cross-modal, bidirectional linear-mapping technique (Lin-Sem: Linear Semantic Transfer) between language and vision Transformers on the challenging event coreference task. We demonstrate where this linear transfer provides useful information for coreference resolution compared to a text-only discriminative LLM or fused-modality models following standard fine-tuning. 3. Methodology Fig. 1 illustrates the pipeline for our methodology, the components of which are detailed as follows. Semantic Equivalence

$V\big(x, y, \phi(x, y)\big): \mathbb{R}^{n \times w \times h \times 3} \to \mathbb{R}^{n \times H}$ (1)

$\mathrm{LLM}\big(x, y, \phi(x, y)\big): \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times H}$ (2)

Let (1) and (2) represent the heterogeneous image and text representations for vision and text Transformer models, respectively. $(x, y) \in \chi$ represents all the pairs of samples in sample space $\chi$, $\phi(x, y)$ represents the concatenation of the image or text pair in their respective modalities, $n$ and $H$ represent the total sample pairs and hidden dimensions respectively, and $m$ is the LLM's max token length. We define cross-modal semantic equivalence as follows: two representations $V$ and $\mathrm{LLM}$ in distinct modalities are semantically equivalent if there exists a bidirectional map $M_{V \leftrightarrow \mathrm{LLM}}$ s.t.:

$\forall x, y \in \chi: V\big(x, y, \phi(x, y)\big) \approx M_{\mathrm{LLM} \to V}\, \mathrm{LLM}\big(x, y, \phi(x, y)\big)$ (3)

$\forall x, y \in \chi: \mathrm{LLM}\big(x, y, \phi(x, y)\big) \approx M_{V \to \mathrm{LLM}}\, V\big(x, y, \phi(x, y)\big)$ (4)

while assuming both $V$ and $\mathrm{LLM}$ to be bijective or invertible, so,

$M_{\mathrm{LLM} \to V} = V\big(x, y, \phi(x, y)\big) \circ \mathrm{LLM}\big(x, y, \phi(x, y)\big)^{-1}$ (5)

$M_{V \to \mathrm{LLM}} = \mathrm{LLM}\big(x, y, \phi(x, y)\big) \circ V\big(x, y, \phi(x, y)\big)^{-1}$ (6)

Figure 1: Our approach for Multimodal CDCR using Lin-Sem: the Linear Mapping (Lin-Sem) procedure between the distinct text and image embedding spaces for an event pair in the ECB+ corpus. Arg1 and Arg2 refer to the individual images in the pair and the trigger events (in yellow) surrounded by the special tokens embedded in the text-encoder (LLM).

Since a closed-form solution to analytically derive the mapping function $M_{V \leftrightarrow \mathrm{LLM}}$ is not always feasible, and since many task-based fine-tuning heads over a Transformer-based LLM involve fitting a linear classification layer, we propose a parameter-efficient linear-mapping technique, Lin-Sem. We estimate the mapping function within an empirical risk minimization framework by using a ridge regression between the two cross-modal representations. Mathematically,

$M_{\mathrm{LLM} \to V} \leftarrow \operatorname{minimize}\big((V - \beta\,\mathrm{LLM})^{T}(V - \beta\,\mathrm{LLM}) + \lambda \beta^{T} \beta\big)$ (7)

$M_{V \to \mathrm{LLM}} \leftarrow \operatorname{minimize}\big((\mathrm{LLM} - \beta V)^{T}(\mathrm{LLM} - \beta V) + \lambda \beta^{T} \beta\big)$ (8)

We assume $\lambda = 1$, while $\beta$ represents the L2-norm regularization parameter. Datasets We evaluated our methods on the ECB+ (Cybulska and Vossen, 2014) and the AIDA Phase 1 (Tracey et al., 2022) datasets. While the former is a popular, English-only CDCR benchmark containing a diverse range of news articles, the latter contains multimodal resources specific to Russia-Ukraine relations, in English, Russian, and Ukrainian. We focus only on the English documents.2 For our experiments, we used training and evaluation splits following Cybulska and Vossen (2015) for ECB+ and Tracey et al. (2022) for AIDA Phase 1. Table 1 shows corpus-level statistics for these two datasets.

                  ECB+                      AIDA Phase 1
Split             Train    Dev     Test     Practice   Eval
Docs              594      196     206      63         69
Event Mentions    3808     1245    1780     603        846
Clusters          1464     409     805      186        270
Singletons        1053     280     623      132        197
Images            3808*    1245*   1780*    417        662
Table 1: ECB+ and AIDA corpus-level statistics. Tracey et al. (2022) refers to the provided train and test sets as "practice" and "eval", respectively. *Including images generated using Stable Diffusion.

Augmenting ECB+ with Images Since ECB+ does not provide images in its metadata, we scraped the links provided in the documents and searched the Internet Archive for archived versions of articles with dead links. For original ECB documents without links, we manually searched for keywords to retrieve articles. Out of 502 ECB+ document links, 43% were broken, but 50% could be recovered using web.archive.org. Of 2The AIDA Phase 1 dataset was created for the DARPA Active Interpretation of Disparate Alternatives (AIDA) program and is available from the Linguistic Data Consortium (catalog number LDC2019E77). It is the only published ECR benchmark that contains multimodal resources specific to cross-document coreference.
Events here are specifically in the domain of Russia-Ukraine relations and annotated based on both saliency and the potential for conflicting perspectives. 480 ECB documents, 51% were located via Google search. We retrieved a total of 543 images; 235 of 982 documents had at least one associated image. In addition to the overall lack of images, the retrieved document-level images may be poor representatives of individual event mentions, leading to the sparsity problem mentioned in Sec. 1. Therefore, we used Stable Diffusion (Rombach et al., 2022) to generate more relevant images and provide enough data to explore the contribution of multimodal information to ECR. Photo-realistic images were generated using sentences from ECB+ as prompts. Since a sentence can refer to multiple events, we provided an additional signal in the prompt by marking the event trigger with special tokens ( and ). Image Encoding To encode all images as vector representations, we used three variations of Vision Transformers (ViT; Dosovitskiy et al. (2021), BEiT; Bao et al. (2021), and SWIN; Liu et al. (2021)), as well as CLIP (Radford et al., 2021). Resulting representations were the pooled output of the firsttoken representations from the last encoder layer for the image sequence, akin to the [CLS] token in BERT variants. Encoding the images through distinct embedding spaces decoupled them from the original language inputs. Linear Projection Technique To project image and text representations across modalities, we first created a concatenated 3,072D (768\u00d74) representation for an image/text pair. These concatenated representations contained the paired representation, the individual mention representations (Arg1 and Arg2), and their element-wise product (in that order). Separate concatenated representations were constructed for each modality (see Fig. 1).3 We then used a ridge regressor to calculate the linear coefficients by minimizing the squared distances between concatenated representations from each modality for the training set. This gave us two square (3,072\u00d73,072) \u201cbridge\u201d matrices: MLLM\u2192V and MV\u2192LLM. We hypothesized that this bidirectional map retains crucial semantic information that a structure-preserving linear map would transfer between the two modalities. At evaluation, we matrix-multiplied the test concatenated representations with these matrices while maintaining the directionality of the linear map. These mapped representations were fed into a pairwise-scorer to get coreference clusters (see Fig. 1). 3All language representations came from the pretrained Longformer model (Beltagy et al., 2020). Model Training and Fine-Tuning Following Humeau et al. (2020); Cattan et al. (2021), i.a., we trained separate pairwise scorers P\u03b8,\u03b8\u2032: (AB, BA)\u2192S1, S2 on ECB+ and AIDA Phase 1. Here AB and BA are the 3,072D combined representations in A\u2192B and B\u2192A directions respectively, and \u03b8 and \u03b8\u2032 are the parameters of the pairwise scorer and the LLM, respectively. This output two scores for each directional encoding, each representing the probability that the event mention pair was coreferent.4 Thereafter, we used the CoVal Scorer (Moosavi et al., 2019) to form the final coreference clusters after applying transitive closure to identify the connected components with a threshold of 0.5 for all models. We used the same pairwise-scorer for all linear maps. For a direct multimodal comparison, we finetuned fused-modality models. 
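Before turning to the fused-modality comparison in detail, the Lin-Sem training step described above can be summarized in a short sketch. This is a minimal illustration, not the released implementation: the helper names, the use of scikit-learn's Ridge, and the choice to omit an intercept are assumptions; the 3,072-D concatenation order and the regularization weight of 1 follow the description above.

```python
import numpy as np
from sklearn.linear_model import Ridge

def concat_pair(joint_vec, arg1_vec, arg2_vec):
    # 3,072-D pair representation: [paired encoding, Arg1, Arg2, Arg1 * Arg2],
    # each component a 768-D vector from the corresponding encoder.
    return np.concatenate([joint_vec, arg1_vec, arg2_vec, arg1_vec * arg2_vec])

def fit_bridge(source_reps, target_reps, alpha=1.0):
    """Fit one direction of the Lin-Sem map (e.g., LLM -> V) by ridge regression.

    source_reps, target_reps: (n_train_pairs, 3072) arrays holding the
    concatenated representations of the same mention pairs in the two
    modalities. Returns a 3,072 x 3,072 "bridge" matrix B such that
    source_reps @ B approximates target_reps.
    """
    reg = Ridge(alpha=alpha, fit_intercept=False).fit(source_reps, target_reps)
    return reg.coef_.T

# At evaluation, test-set representations are pushed through the bridge and
# then scored by the frozen pairwise scorer:
#   mapped_test = test_source_reps @ fit_bridge(train_source, train_target)
```

Fitting the map in the opposite direction simply swaps the roles of the source and target arrays, giving the second bridge matrix.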
We concatenated the image representations with the text representations and trained four separate pairwise scorers for each combination. Due to data sparsity of real images, we only trained fused models using generated event-centric images. Training took roughly 1.0 and 1.5 hours per epoch for the LLM and the fused models, respectively. For comparison, linear mapping took \u223c3s to learn a mapping between modalities. Fig. 2 shows log GPU seconds required for pairwise encoding for text, image, and fused modalities vs. bidirectional linear projection. Figure 2: Pairwise encoding time in GPU seconds (log-scale on y-axis) for text (Longformer), vision (ViT), and fused models vs. Bidirectional Linear Mapping (Lin-Sem) as a function of the number of train pairs in ECB+. 3.1. Categorizing Mention Pair Difficulty To empirically evaluate the contribution of crossmodal information toward resolving challenging event mention pairs, we used the gold-standard coreference labels to categorize unseen pairs at inference as easy or hard based on semantic and 4As a human reader would likely make a consistent coreference decision regardless of which event description she read first, we used the mean of the two scores as the final probability score for training and inference. Figure 3: Kernel Density Estimation plots of semantic-discourse similarity scores (including WuPalmer similarity) for mention pair difficulty categories in ECB+ (L) and AIDA Phase 1 (R), showing a clear demarcation of easy and hard pairs in positive and negative labels. easy_pos and hard_neg pairs have a high semantic similarity distribution while easy_neg and hard_pos pairs have lower semantic similarity distribution. discourse-level similarities. For semantic similarities, we use Wu-Palmer Similarity (Wu and Palmer, 1994), and cosine similarity metrics. For discourselevel similarities, metadata in both datasets provides information about within-topic and withindocument events which we used to score event similarities. For instance, an event pair within the same document and topic would get the highest discourse-level similarity score. These combined semantic and discourse similarity scores were then bucketed into easy and hard semantic transfer categories based on the means of coreferring and non-coreferring samples (see Fig. 3). An example \u201chard\u201d mention pair from ECB+, involving pronominal coreference, is (1) \u201cIn a move that will expand its services division, Hewlett-Packard will acquire EYP Mission Critical Facilities\u201d and (2) \u201cHP to Acquire Data Center Consultants.\u201d This categorization allowed us to identify cases where multimodal features are distinctly useful based on proportion of correctly resolved hard pairs (see Sec. 4). Table 7 in Appendix A shows examples of easy and hard pairs for coreferring and non-coreferring samples and their respective counts. Computation of Semantic Difficulty Categories It is important to note that the \u201chard\u201d and \u201ceasy\u201d categories include both positive (coreferent) and negative (non-coreferent) samples. These categories are computed based on the assumption that easier coreferent (easy positive) samples should ideally have a higher overall similarity than harder ones, both in terms of semantics and at the topic and discourse level. Similarly, easier non-coreferent samples (easy negative) should ideally have a lower overall similarity. 
Hard coreferent (hard positive) pairs have lower overall similarity and hard noncoreferent (hard negative) pairs have higher overall similarity when compared to easy pairs of the same label. Overall similarity for a given pair is computed as the sum of four individual scores: 1. whether a pair comes from the same topic (1 for within-topic, 0 for not), 2. whether a pair comes from the same document (1 for within-doc, 0 for not), 3. the Wu-Palmer similarity of the trigger tokens in a pair, and 4. the average cosine similarity of the vectors for the two sentences when encoded in both directions using the text-only, finetuned LLM (Longformer), inspired by (Ahmed et al., 2023). For computing the cosine similarity scores, we take two mention-containing sentences A and B and cross-encode sentence A in context before sentence B and sentence B in context after sentence A. We then take the cosine similarity between these two encoded vectors. The positions of A and B are then reversed and they are again encoded with cross-attention in the same way. Because crossattention is used, this results in different positional encodings for the two sentences and therefore a different cosine similarity value than the first calculation, so these values are then averaged for the final score. Adding the aforementioned four scores gives us the final similarity scores for each pair in each label category (positive and negative). If the final similarity score for an individual positive pair is more than the mean final similarity score for all positive pairs, such a pair is categorized as easy positive. If it is less than this value, it is categorized as hard positive. On the other hand, if the final similarity score for an individual negative pair is more than the mean final similarity score for all negative pairs, the pair is categorized as hard negative, and if it is less than this value, it is categorized as easy negative.5 The plots in Fig. 2 show the differences in the distributions of different sample categories vs. the calculated similarity scores for both the corpora. See Appendix A for more details with computed examples. We use the gold coreference labels to obtain the label categories. However, since this categorization is only used as an evaluation tool for the initial round of experiments and then frozen for the ensembling experiments, the difficulty category-related information is never used during model training. 5The average final similarity for all positive samples over the ECB+ corpus is 2.25, and the average final similarity for all negative samples is 2.14. We assume AIDA Phase 1 comes from a disparate distribution, and so we categorize the difficulty of pairs in it independently using the same procedure. 4. Results and Analysis We evaluate using established coreference metrics (Moosavi et al., 2019), e.g., MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAFe, and CoNLL F1 (the average of MUC, B3 and CEAFe F1) scores. 4.1. ECB+ We present results from Held et al. (2021) as a current, commonly accepted SOTA on ECB+, and from Ahmed et al. (2023), whose computationallyefficient pruning heuristic based on surface lemma similarity we follow to allow us to perform multiple experiments on a smaller compute budget. Direct comparison to text-only model (LLM) performance should be taken as a comparison to Ahmed et al. (2023) due to the preprocessing. Table 2 shows detailed results. Models MUC B3 CEAFe CoNLL Held et al. (2021) 87.5 86.6 82.9 85.7 Ahmed et al. 
(2023) 90.8 86.7 84.7 87.4 ViT-real\u2192LLM 6.9 63.1 55.1 41.7 BEiT-real\u2192LLM 87.3 80.3 76.7 81.4 SWIN-real\u2192LLM 87.6 79.7 76.5 81.3 CLIP-real\u2192LLM 24.7 66.3 57.5 49.5 LLM\u2192ViT-real 88.2 80.1 77.5 81.9 LLM\u2192BEiT-real 88.3 80.0 77.4 81.9 LLM\u2192SWIN-real 87.9 80.3 77.8 82.0 LLM\u2192CLIP-real 88.3 80.0 77.4 81.9 ViT-gen \u2295LLM 85.1 86.1 80.7 84.0 BEiT-gen \u2295LLM 82.2 84.9 78.1 81.7 SWIN-gen \u2295LLM 82.5 85.1 78.7 82.1 CLIP-gen \u2295LLM 89.3 84.2 82.6 85.4 ViT-gen\u2192LLM 77.4 78.8 71.5 75.9 BEiT-gen\u2192LLM 77.8 79.8 73.7 77.1 SWIN-gen\u2192LLM 79.5 79.6 73.4 77.5 CLIP-gen\u2192LLM 83.0 82.1 76.3 80.5 LLM\u2192ViT-gen 88.1 80.0 77.2 81.8 LLM\u2192BEiT-gen 88.3 80.0 77.4 81.9 LLM\u2192SWIN-gen 88.2 80.1 77.4 81.9 LLM\u2192CLIP-gen 88.3 80.0 77.4 81.9 Table 2: MM-CDCR F1 scores for MUC, B3, CEAFe and CoNLL on ECB+ test set, using LLM only, Lin-Sem (\u201c\u2192\u201d), and domain-fused finetuned versions (\u201c\u2295\u201d). Cited works are previous benchmarks on text-only CDCR. Bold indicates the best performer on each metric. \u201c-real\u201d indicates that the vision space was encoded with real images, while \u201c-gen\u201d indicates generated images. Text-only vs. Multimodal Models Despite the extra training time incurred in training a fusedmodality model with concatenated features (see Fig. 2), we see that the performance of the fused multimodal models does not exceed that of the text-only model (Longformer using Ahmed et al. (2023)\u2019s preprocessing heuristic). Interestingly, the performance gap between linearly-mapped systems and fused modality models is often quite small, despite the higher compute cost of training the fused model. For instance, LLM\u2192BEiT-gen and LLM\u2192BEiT-real (Longformer embeddings mapped into BEiT space) slightly best the CoNLL F1 score of BEiT-gen \u2295LLM, and BEiT-real\u2192LLM is only 0.5 F1 points lower. Similar trends hold when comparing other fused modality models and their linearly-mapped counterparts, such as LLM\u2192SWINgen, LLM\u2192SWIN-real, and SWIN-real\u2192LLM vs. SWIN-gen \u2295LLM. Semantic Transfer Categories In the coreferrence domain, one weakness of the CoNLL F1 metric is that specific evaluation metric-level details are obfuscated\u2014this can be seen in Table 3: although the aforementioned examples achieve comparable CoNLL F1 scores, the linear mappings achieve a much higher MUC and B3 recall, but lower precision, than the comparable fused models. Therefore, we do a proportional analysis of the correctly inferred (true positive) and misclassified (false positive and false negative) samples within the semantic transfer categories (see Table 4). These categorization labels were not used as supervision at any stage of training, fine-tuning, or mapping, and so an analysis of which models do better at which categories can illuminate different properties of the models, despite similar numerical performance. Table 4 shows the proportion of each result category per model, of samples that would be considered \u201chard\u201d according to the mention pair difficulty categorization described in Sec. 3. Models MUC B3 R P R P LLM\u2192ViT-gen 98.7 79.6 97.6 67.7 LLM\u2192BEiT-gen 99.1 79.6 97.9 67.7 ViT-gen \u2295LLM 80.9 89.7 85.4 86.9 BEiT-gen \u2295LLM 75.9 89.7 82.5 87.5 Table 3: MUC and B3 precision and recall comparison between linear mappings and comparable fused models. 
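For reference, the difficulty bucketing from Sec. 3.1 that underlies this analysis can be sketched as follows. This is a minimal sketch assuming the four component scores are precomputed as arrays over mention pairs; tie-breaking exactly at the mean is an assumption not specified in the text.

```python
import numpy as np

def difficulty_buckets(same_topic, same_doc, wu_palmer, avg_cosine, is_coref):
    """Bucket mention pairs into easy/hard categories as described in Sec. 3.1.

    is_coref holds the gold labels, used only for this post-hoc analysis and
    never as a training signal.
    """
    score = (same_topic.astype(float) + same_doc.astype(float)
             + wu_palmer + avg_cosine)
    pos_mean = score[is_coref].mean()
    neg_mean = score[~is_coref].mean()
    buckets = np.empty(len(score), dtype=object)
    buckets[is_coref & (score > pos_mean)] = "easy_pos"    # coreferent, similar
    buckets[is_coref & (score <= pos_mean)] = "hard_pos"   # coreferent, dissimilar
    buckets[~is_coref & (score > neg_mean)] = "hard_neg"   # non-coreferent, similar
    buckets[~is_coref & (score <= neg_mean)] = "easy_neg"  # non-coreferent, dissimilar
    return buckets
```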
Within true positives (TP), linearly-mapped models, using both real and generated images, tended to correctly retrieve a higher proportion of hard pairs compared to the text-only and fused models. For instance, for generated images, the hard sample proportion retrieved by text-to-image models is almost 4 percentage points higher than that of text-only or fused models, while image-to-text models, though lower on average, still also correctly retrieve a higher proportion of hard pairs. This effect appears slightly more pronounced on average in the case of real images (avg. 51.8% hard pairs in TPs, compared to 50.1% for generated images, and 46.6% for text-only). Semantic Transfer Categories Models TP-Hard FP-Hard FN-Hard ECB+ Ahmed et al. (2023) 0.466 0.521 0.607 ViT-real\u2192LLM 0.625 0.250 0.506 BEiT-real\u2192LLM 0.521 0.436 0.434 SWIN-real\u2192LLM 0.510 0.451 0.407 CLIP-real\u2192LLM 0.476 0.536 0.508 LLM\u2192ViT-real 0.507 0.456 0.441 LLM\u2192BEiT-real 0.506 0.000 0.000 LLM\u2192SWIN-real 0.496 0.438 0.700 LLM\u2192CLIP-real 0.505 0.452 0.708 ViT-gen \u2295LLM 0.432 0.591 0.635 BEiT-gen \u2295LLM 0.437 0.606 0.584 SWIN-gen \u2295LLM 0.404 0.620 0.642 CLIP-gen \u2295LLM 0.477 0.506 0.729 ViT-gen\u2192LLM 0.487 0.472 0.521 BEiT-gen\u2192LLM 0.471 0.445 0.525 SWIN-gen\u2192LLM 0.548 0.433 0.478 CLIP-gen\u2192LLM 0.483 0.490 0.534 LLM\u2192ViT-gen 0.505 0.449 0.541 LLM\u2192BEiT-gen 0.506 0.451 0.000 LLM\u2192SWIN-gen 0.505 0.452 0.531 LLM\u2192CLIP-gen 0.506 0.451 0.632 AIDA Phase 1 LLM 0.561 0.385 0.695 ViT-real\u2192LLM 0.609 0.368 0.734 BEiT-real\u2192LLM 0.661 0.328 0.629 SWIN-real\u2192LLM 0.660 0.327 0.636 CLIP-real\u2192LLM 0.627 0.332 0.657 LLM\u2192ViT-real 0.643 0.346 0.929 LLM\u2192BEiT-real 0.638 0.352 0.749 LLM\u2192SWIN-real 0.667 0.333 0.562 LLM\u2192CLIP-real 0.648 0.341 0.000 Table 4: Table showing the proportion of hard event pairs within the true positive (TP), false positive (FP) and false negative (FN) samples based on semantic transfer category (Sec. 3) for ECB+. Values of 0 indicate that no cases fit this category, resulting in zero numerator. Ensembling Models The apparent facility of different models at correctly retrieving mention pairs of different semantic difficulties led to a question: since the mention pair difficulty was never used during training, fine-tuning, or mapping, and only as an analytic tool, could we split the mention pairs according to their difficulty, and use the different model types to handle mention pairs they on average appear to be better at? We therefore built an ensembling approach using the text-only model to handle easier pairs, and performed a grid-search through different combinations of the previously-trained multimodal models to handle harder pairs. We allowed for different multimodal models to potentially handle hard-positive pairs and hard-negative pairs and used the combined results from all models to compute the coreference metrics. Table 5 shows the best performing ensembles. Our best performing ensemble model used ViT-real\u2192LLM to handle hard negative pairs, LLM\u2192BEiT-real, to handle hard positive pairs, and the text-only language model to handle easy pairs. Models MUC B3 CEAFe CoNLL Held et al. (2021) 87.5 86.6 82.9 85.7 Ahmed et al. 
(2023) 90.8 86.7 84.7 87.4 ViT-gen \u2295LLM + LLM 89.1 86.5 84.8 86.8 BEiT-gen \u2295LLM + LLM 87.5 85.7 83.9 85.7 SWIN-gen \u2295LLM + LLM 87.5 85.9 83.8 85.7 CLIP-gen \u2295LLM + LLM 90.1 85.3 83.8 86.4 ViT-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 90.8 85.2 84.8 86.9 BEiT-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 91.3 85.5 86.5 87.8 SWIN-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 90.4 84.4 83.8 86.2 CLIP-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 91.2 85.3 85.7 87.4 LLM\u2192ViT-gen + LLM\u2192BEiT-gen + LLM 88.7 82.3 79.4 83.5 LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 LLM\u2192SWIN-gen + LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 LLM\u2192CLIP-gen + LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 ViT-real\u2192LLM + LLM\u2192BEiT-real + LLM 94.5 89.5 91.8 91.9 BEiT-real\u2192LLM + LLM\u2192BEiT-real + LLM 88.9 82.4 79.7 83.7 SWIN-real\u2192LLM + LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 CLIP-real\u2192LLM + LLM\u2192BEiT-real + LLM 94.3 89.3 91.6 91.7 LLM\u2192ViT-real + LLM\u2192BEiT-real + LLM 88.7 82.3 79.3 83.4 LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 LLM\u2192SWIN-real + LLM\u2192BEiT-real + LLM 89.0 82.7 80.1 83.9 LLM\u2192CLIP-real + LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 Table 5: MM-CDCR MUC, B3, CEAFe and CoNLL F1 results on ECB+ test set, using ensemble models. Format follows Table 2. Ensemble model names follow the format Hard-N model + Hard-P model + Easy pairs model. LLM was always used to handle Easy pairs. The best performing models for hard negative and hard positives were found using a grid search through different combinations of multimodal models. If only one model besides LLM is listed, that model was used to handle all Hard pairs. This resulted in a CoNLL F1 score of 91.9, with scores of 89.5 or higher across all components of MUC, B3, or CEAFe metrics, showing the ability of this ensemble to score highly on, and balance, multiple measurements. Other ensembles, such as a variant that used CLIP-real\u2192LLM to handle hard negatives, performed at a similar level. Two particularly interesting points emerge: 1) Using both real and generated images, LLM\u2192BEiT routinely performed best at handling hard positive pairs; 2) Many ensemble models using Lin-Sem, especially those using a V \u2192LLM mapping for hard negatives and an LLM \u2192V mapping for hard positives, outperform the fused model/text-only model ensembles, despite the simplicity of the linear transformation. This suggests that not only can visual information be leveraged for correct coreference of semantically more difficult mention pairs, but also that visual information may contain fine-grained cues useful for splitting mention pairs while linguistic information is more useful to cluster them. 4.2. AIDA Phase 1 Table 6 presents a novel baseline on the multimodal AIDA Phase 1 data. This data contains unique challenges, such as a train set that is smaller than the test data, and event descriptions from sources with conflicting perspectives, explicitly addressing the ambiguity and perspective conflict challenges from Sec. 1. Since this data comes with images mappable to individual event mentions, we evaluate using only the provided images. As with ECB+, we find that models using linear mappings compete with or slightly outperform the text only model. Using the same proportional analysis of correct and misclassified samples by difficulty category, we find that linearly-mapped models are also more likely than the text-only to resolve hard pairs correctly on this dataset (avg. 
hard pairs in TPs: 63.9% for V \u2192LLM, 64.9% for LLM \u2192V, and 56.1% for text-only). We then applied the same ensembling approach to the AIDA data, using the same combination of linear mappings and the LLM according to the difficulty of the mention pair. Again we find that an ensemble model using a V \u2192LLM mapping for hard negatives and an LLM \u2192V mapping for hard positives performs best, although this time the model using CLIP-real\u2192LLM as the hard negative handler comes out on top. Models MUC B3 CEAFe CoNLL LLM 80.7 49.5 54.1 61.4 ViT-real\u2192LLM 85.9 38.4 52.7 59.0 BEiT-real\u2192LLM 85.7 42.6 57.9 62.1 SWIN-real\u2192LLM 82.9 46.4 55.8 61.7 CLIP-real\u2192LLM 78.5 52.4 53.5 61.5 LLM\u2192ViT-real 86.3 37.3 52.7 58.8 LLM\u2192BEiT-real 85.7 40.2 53.1 59.7 LLM\u2192SWIN-real 86.2 39.1 54.4 59.9 LLM\u2192CLIP-real 86.2 37.1 52.3 58.5 ViT-real\u2192LLM + LLM\u2192BEiT-real + LLM 86.2 39.6 54.4 60.1 BEiT-real\u2192LLM + LLM\u2192BEiT-real + LLM 87.1 42.1 60.4 63.2 SWIN-real\u2192LLM + LLM\u2192BEiT-real + LLM 87.1 42.5 60.5 63.4 CLIP-real\u2192LLM + LLM\u2192BEiT-real + LLM 87.1 43.8 62.8 64.6 LLM\u2192ViT-real + LLM\u2192BEiT-real + LLM 86.2 39.0 53.5 59.6 LLM\u2192BEiT-real + LLM 85.8 40.8 54.1 60.2 LLM\u2192SWIN-real + LLM\u2192BEiT-real + LLM 86.6 40.7 56.6 61.3 LLM\u2192CLIP-real + LLM\u2192BEiT-real + LLM 86.2 39.0 53.5 59.6 Table 6: MM-CDCR MUC, B3, CEAFe and CoNLL F1 results on AIDA Phase 1 Eval set. Format follows Tables 2 & 5. LLM denotes Longformer evaluated with Ahmed et al. (2023)\u2019s methodology. 5. Discussion Some specific example pairs where the text-only and fused models fail to link the pair, but ensembles correctly do so, expose certain features crucial for event coreference that are present in visual information and linearly transferable, but missing in text alone or scrambled during model fusion. ECB+ ECB+ examples of this kind include event pairs that require some sense of visual grounding, temporal logic (Schank and Abelson, 1975; Ravi et al., 2023) or pronominal context to resolve. For instance, pairs with pronominal antecedents and misleading lexical overlap like \u201c...dozens of others were seriously injured in the quakes, which also sent small tsunamis...\u201d and \u201c...injured in the earthquakes which rekindled bitter memories of similar deadly quakes...\u201d6 were missed by the LLM 6\u201c[E]arthquakes\u201d vs. \u201cquakes\u201d is misleading lexical overlap as they refer to different earthquakes. The actual A young girl was killed and dozens of others were seriously injured in the quakes , which also sent small tsunamis into Japan 's southeastern coast. Atururi said a 10-year-old girl was killed and at least 40 people were injured in the earthquakes , which rekindled bitter memories of similar deadly quakes that hit the town in 2002. Doctor Who has finally selected its 12th doctor : Peter Capaldi is officially set to replace exiting star Matt Smith as the TARDIS leader , producer Steven Moffat announced on the live BBC special Doctor Who Live : The Next Doctor Sunday.\u00a0 \u00a0Scottish actor best known for his role as Malcolm Tucker in The Thick of It revealed as 12th actor to play the Doctor. Figure 4: Sample coreferent event pairs from ECB+ that were correctly linked by our best multimodal ensemble (ViT-real\u2192LLM + LLM\u2192BEiT-real + LLM), but not by the text-only model. Event-triggers are highlighted in yellow and text in italics illustrates lexical ambiguity or misleading lexical overlap. and fused models. 
Visual cues, such as damaged buildings or injured people (either in images generated using mentions as prompts, or already present in images in news articles) can help make the link. The aforementioned example is shown in Fig. 4, and the images are generated according to the ECB+ augmentation methodology (Sec. 3). Also in Fig. 4, the mentions Steven Moffat and his appear to be ambiguously overlapping to the text-only model, which missed the event mentions that are actually about Peter Capaldi. The two facial images, which are real images associated with the event mentions, help make the link. AIDA Phase 1 Coreferent event mentions in the AIDA dataset are notable for conflicting information, and we find cases such as “Calling people tell about people that are jumping out of the burning building.” vs. “Forty-two people trapped by a fire on the third floor of the stately, Soviet-era Trades Unions building burned, suffocated or jumped to their deaths.” (text-only event triggers are underlined). The text-only model fails to link the ambiguous event triggers, but the images associated with each show the Trades Unions building in Odesa. In such context-sensitive pairs, the paired visual representations (image-domain Arg1 and Arg2 in Fig. 1) in Lin-Sem help resolve the coreference by capturing less ambiguous information from the images, while the text-only pairwise scorer found low contextual similarity between the event triggers. Similarly, we see that pairs with ambiguous context or pronominal anaphora, e.g., “Buzina, 45, was shot dead” vs. “He was murdered”, are frequently missed by the LLM, but not by the ensemble models. In the case of this mention pair, both associated articles contain (different) pictures of the same individual, Oles Buzina, which, as with the ECB+ Peter Capaldi example, aids in the coreference7. Generally, for challenging corpora like AIDA Phase 1, we find that visual features like faces, or background cues like angry protesters, press conferences, etc., act as cues for correctly resolving such pairs. 6. Conclusion In this paper, we have demonstrated the utility of multimodal information in cross-document event coreference. In particular, our results demonstrate that multimodal information is useful for resolving mention pairs whose triggers have low semantic and discourse-level similarity, rendering them difficult for text-only models. We developed a method (Lin-Sem) for using linear transformations between embedding spaces to transfer semantic information between vision and language representation spaces, and used this technique in a model ensembling approach that used Lin-Sem models to handle harder mention pairs and a text-only model for easier pairs. We applied this approach to the popular ECB+ benchmark and established a novel baseline on the challenging, and explicitly multimodal, AIDA Phase 1 dataset (Tracey et al., 2022). Our best-performing models beat text-only performance on these datasets by ∼3 F1 points and establish an upper bound on CDCR performance given the preprocessing used. Our ablation studies show that ensemble systems built upon our mention pair difficulty categories and using structure-preserving linear maps can leverage event-specific visual cues to make correct coreference decisions about difficult mention pairs. These visual cues are of course absent in text-only models, and are likely scrambled during standard multimodal fusion approaches.
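To make the ensembling strategy just summarized concrete, the following is a minimal, illustrative sketch of the routing logic: easy pairs go to the text-only scorer, while hard-negative and hard-positive pairs go to whichever mapped models a grid search selects. The class interface (a .score method), the function names, and the shape of the metric callback are hypothetical assumptions made for illustration, not the released pipeline.

def ensemble_coreference_score(pair, difficulty, text_only_model, hard_neg_model, hard_pos_model):
    """Route one mention pair to the model assigned to its difficulty category.
    difficulty is 'easy', 'hard_negative', or 'hard_positive'; the categories are
    analytic labels and are never used as training supervision."""
    if difficulty == "easy":
        return text_only_model.score(pair)      # e.g., the Longformer pairwise scorer
    if difficulty == "hard_negative":
        return hard_neg_model.score(pair)       # e.g., a ViT-real -> LLM mapped scorer
    return hard_pos_model.score(pair)           # e.g., an LLM -> BEiT-real mapped scorer

def grid_search_ensembles(pairs, difficulties, gold, candidate_models, text_only_model, metric):
    """Try every (hard-negative handler, hard-positive handler) combination of
    candidate models and keep the one that maximizes a coreference metric
    (for example CoNLL F1) computed by the supplied metric callback."""
    best_combo, best_value = (None, None), float("-inf")
    for neg_model in candidate_models:
        for pos_model in candidate_models:
            scores = [ensemble_coreference_score(p, d, text_only_model, neg_model, pos_model)
                      for p, d in zip(pairs, difficulties)]
            value = metric(scores, gold)
            if value > best_value:
                best_combo, best_value = (neg_model, pos_model), value
    return best_combo, best_value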
As such, our results present a strong case for the utility of multimodal information in NLU tasks like event coreference and argue for future increased development of such resources. Upon 7McNeely-White et al. (2022) present strong evidence for the particular effectiveness of linear transformation in face recognition. publication, we will release our processing pipeline and the generated/scraped images associated with ECB+.8 Our results should be considered in the context of our preprocessing assumptions. We use a computationally-efficient pruning heuristic that allowed us to run the high volume of experiments we showcased on a lower compute budget, while demonstrating the utility of multimodal features for coreference. Our binary semantic transfer categories (easy/hard) do not currently account for semantic similarity between pairs that cross subtopics since corpora like the ECB+ corpus do not contain coreference annotations across sub-topics (Bugert et al., 2021). However, our framework can be easily expanded to corpora like FCC (Bugert et al., 2020), with cross-subtopic events. 7. Future Work Future directions in this line of research include exploring the feasibility of using multimodal cues to align/enhance representation spaces of monolingual LLMs, like the English-only Longformer, for Russian and Ukrainian mention pairs in the AIDA Phase 1 corpus. Given the efficiency of linear transformations and the rarity of coreference-specific parallel corpora, this may help alleviate the compute budgets needed for multilingual LLM pretraining for CDCR. Another interesting direction is evaluating our method for other challenging CDCR datasets like FCC (Bugert et al., 2020) which contains cross-subtopic events or the GVC (Vossen et al., 2018) where the SOTA is lower compared to benchmarks like ECB+. Lastly, this work represents a novel cross-modal case where affine transformations between embedding spaces has been shown to be useful (cf. McNeely-White et al. (2022); Nath et al. (2022); Merullo et al. (2023); Ghaffari and Krishnaswamy (2023)). Future work in this area entails a theoretical exploration of the properties of embedding spaces with a goal of finding performance guarantees where affine transformations successfully preserve information for different AI tasks. Ethics Statement Our ablation studies required a non-trivial computation budget and concomitant resource usage, especially for the fused models with larger scoring heads on top of the LLM. Moreover, even though our LinSem framework is substantially compute-efficient, it still required cross-modal model encoding in generating representations for deploying our linear maps between them. The images generated for this task 8The AIDA Phase 1 data must be properly obtained from the Linguistic Data Consortium. with diffusion models might reflect social, racial, or gender-based stereotypes as are commonly seen in large generative models. Due to the nature of the AIDA Phase 1 data\u2019s focus on Ukrainian-Russian conflict, the events described therein are likely to be distressing to some. Acknowledgements This research was supported in part by grant award FA8750-18-2-0016 from the U.S. Defense Advanced Research Projects Agency (DARPA) to Colorado State University and the University of Colorado, and by a subcontract to the University of Colorado on grant award FA8750-19-2-1004 from DARPA. Views expressed herein do not reflect the policy or position of the Department of Defense or the U.S. Government. 
All errors are the responsibility of the authors. Bibliographical" + }, + { + "url": "http://arxiv.org/abs/2404.15780v1", + "title": "A self-consistent model for dust settling and the vertical shear instability in protoplanetary disks", + "abstract": "The spatial distribution of dust particles in protoplanetary disks affects\ndust evolution and planetesimal formation processes. The vertical shear\ninstability (VSI) is one of the candidate hydrodynamic mechanisms that can\ngenerate turbulence in the outer disk region and affect dust diffusion.\nTurbulence driven by the VSI has a predominant vertical motion that can prevent\ndust settling. On the other hand, the dust distribution controls the spatial\ndistribution of the gas cooling rate, thereby affecting the strength of\nVSI-driven turbulence. Here, we present a semi-analytic model that determines\nthe vertical dust distribution and the strength of VSI-driven turbulence in a\nself-consistent manner. The model uses an empirical formula for the vertical\ndiffusion coefficient in VSI-driven turbulence obtained from our recent\nhydrodynamical simulations. The formula returns the vertical diffusion\ncoefficient as a function of the vertical profile of the cooling rate, which is\ndetermined by the vertical dust distribution. We use this model to search for\nan equilibrium vertical dust profile where settling balances with turbulent\ndiffusion for a given maximum grain size. We find that if the grains are\nsufficiently small, there exists a stable equilibrium dust distribution where\nVSI-driven turbulence is sustained at a level of alpha_z ~ 10^{-3}, where\nalpha_z is the dimensionless vertical diffusion coefficient. However, as the\nmaximum grain size increases, the equilibrium solution vanishes because the VSI\ncan no longer stop the settling of the grains. This runaway settling may\nexplain highly settled dust rings found in the outer part of some\nprotoplanetary disks.", + "authors": "Yuya Fukuhara, Satoshi Okuzumi", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "astro-ph.EP", + "cats": [ + "astro-ph.EP", + "astro-ph.SR" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "The initial stage of planet formation is the formation of kilometer-sized planetesimals from micron-sized dust grains (for reviews, Johansen et al. 2014; Drazkowska et al. 2023). These dust growth and planetesimal formation depend on the vertical distribution of dust particles. If the dust density at the midplane well exceeds the gas density, the streaming and gravi- tational instabilities set in (e.g., Goldreich & Ward 1973; Sekiya 1998; Youdin & Shu 2002; Youdin & Goodman 2005), lead- ing to planetesimal formation. Understanding what processes determine the dust vertical profile in disks is also essential for interpreting recent high-resolution radio observations showing that the dust at different radial locations or in different disks is settled to different extents (Pinte et al. 2016; Doi & Kataoka 2021; Villenave et al. 2022, 2023; Pizzati et al. 2023, and for a review, Miotello et al. 2023). In protoplanetary disks, the vertical profile of dust particles critically depends on the intensity of gas disk turbulence caus- ing dust diffusion. The question then is what mechanisms gen- erate disk turbulence. 
Recent theoretical studies have shown that not only the magnetorotational instability (MRI; Balbus & Hawley 1991) but also some thermo-hydrodynamical insta- bilities are important turbulence-driving mechanisms (for re- views, Lyra & Umurhan 2019; Lesur et al. 2023). Among them, the vertical shear instability (VSI; Urpin & Brandenburg 1998; Arlt & Urpin 2004; Nelson et al. 2013; Lin & Youdin 2015) \u00a9 2014. Astronomical Society of Japan. arXiv:2404.15780v1 [astro-ph.EP] 24 Apr 2024 2 Publications of the Astronomical Society of Japan, (2014), Vol. 00, No. 0 is thought to be the leading mechanism for driving turbulence in outer disk regions where the current radio observations best constrain the degree of dust settling. The VSI requires rapid cooling of disk gas in addition to a vertically varying gas orbital velocity (Urpin 2003; Nelson et al. 2013; Lin & Youdin 2015; Manger et al. 2021). Therefore, the VSI tends to operate in the outer disk region with low optical depths (Malygin et al. 2017; Pfeil & Klahr 2019; Fukuhara et al. 2021; Melon Fuksman et al. 2024a), where it may dominate over the MRI (e.g., Cui & Bai 2022). The VSI generates turbulence with a predominant verti- cal gas motion (e.g., Nelson et al. 2013; Stoll & Kley 2014) that can prevent dust settling (Stoll & Kley 2016; Flock et al. 2017b, 2020; Dullemond et al. 2022). This turbulence can also stir dust particles significantly (Stoll & Kley 2016; Flock et al. 2017b) and thus induce their collisional velocities, leading to suppres- sion of planetesimal formation through coagulation (e.g, Ormel & Cuzzi 2007; Brauer et al. 2008; Okuzumi & Hirose 2012). On the other hand, VSI-driven turbulence can produce both small short-lived and azimuthally large long-lived vortices (Richard et al. 2016; Manger & Klahr 2018; Flock et al. 2020; Pfeil & Klahr 2021; Melon Fuksman et al. 2024b). These vortices may lead to dust concentration and subsequent planetesimal for- mation through gravitational collapse (e.g., Barge & Sommeria 1995; Raettig et al. 2021; Lehmann & Lin 2022). Importantly, dust particles are the dominant opacity source and determine the local cooling rates of protoplanetary disks (Malygin et al. 2017; Barranco et al. 2018). Therefore, their spatial distribution controls where in the disks the VSI operates (Pfeil & Klahr 2019; Fukuhara et al. 2021) and can even affect the dust vertical diffusivity at the midplane (Pfeil & Klahr 2021; Fukuhara et al. 2023; Pfeil et al. 2023). The consequence of this dust\u2013VSI thermal interaction for dust settling remains unclear. In this paper, we model the above-mentioned interaction be- tween dust and the VSI to study how dust setting and diffu- sion balance in VSI-driven turbulence. To this end, we present a semi-analytic model that determines the vertical dust dis- tribution and the strength of VSI-driven turbulence in a self- consistent manner. The model uses an empirical formula for the vertical diffusion coefficient in VSI-driven turbulence obtained from our recent hydrodynamical simulations (Fukuhara et al. 2023). The formula returns the vertical diffusion coefficient as a function of the vertical profile of the cooling rate, which is determined by the vertical dust distribution. We use this model to search for an equilibrium vertical dust profile where settling balances with turbulent diffusion for a given grain size. This paper is organized as follows. In section 2, we de- scribe our self-consistent model. 
We present the main results in section 3 and discuss the implication of our study in section 4. Section 5 presents a summary.", "main_content": "In this section, we describe our self-consistent model that determines the dust vertical distribution and VSI-driven turbulence intensity simultaneously (see figure 1 for an overview). We assume a protoplanetary disk consisting of gas (section 2.1) and dust (section 2.2). The dust grains dominate the disk's opacity (section 2.2) and their distribution determines the disk's local cooling rate distribution (section 2.3). Because the VSI requires rapid gas cooling, the cooling rate distribution in turn determines the location where the linear VSI operates (section 2.4), which we call the VSI-unstable layer (Fukuhara et al. 2021 called these the VSI zones). The VSI produces turbulent gas motion propagating across the boundary between the VSI-unstable and stable regions. We calculate the VSI-driven vertical diffusion coefficient from the thicknesses of the VSI-unstable and -stable layers using an empirical formula based on our previous hydrodynamical simulations (section 2.5). The vertical diffusion coefficient can then be used to update the vertical dust distribution. Iterating these calculation steps, we determine the vertical dust distribution and the strength of VSI-driven turbulence in a self-consistent manner (section 2.6). In the following subsections, we describe the model in more detail.
2.1 Disk model
We consider an axisymmetric disk around a solar-mass star. We adopt the cylindrical coordinate system (R, z), where R and z are the distance from the central star and the height from the midplane, respectively. The gas surface density is given by Σg(R) = (2 − βΣ) [Mdisk/(2πRc²)] (R/Rc)^(−βΣ) exp[−(R/Rc)^(2−βΣ)], (1) where Mdisk is the total mass of the gas disk, Rc is the characteristic radius, and βΣ is a dimensionless number characterizing the radial slope of the gas surface density. We fix the gas disk parameters to Mdisk = 0.01 M⊙, Rc = 100 au, and βΣ = 1. Assuming that the disk is optically thick to stellar radiation and that the stellar luminosity is equal to the solar luminosity, the temperature of the disk is given by T(R) = T0 (R/1 au)^q, (2) with T0 = 130 K and q = −3/7 (Chiang & Goldreich 1997). We assume that the disk is vertically isothermal, ignoring warmer surface layers that are optically thin to the starlight (Chiang & Goldreich 1997). From vertical hydrostatic equilibrium, the gas density is given by ρg(R, z) = [Σg/(√(2π) Hg)] exp[−z²/(2Hg²)], (3) where Hg = cs/ΩK is the gas scale height, with cs and ΩK being the isothermal sound speed and Keplerian frequency, respectively.
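For concreteness, equations (1)-(3) can be evaluated numerically as in the short sketch below. The cgs constants and the helper names are illustrative choices of this sketch, not part of the original model description.

import numpy as np

# Illustrative cgs constants (rounded values assumed for this sketch).
AU = 1.496e13          # cm
M_SUN = 1.989e33       # g
G = 6.674e-8           # cm^3 g^-1 s^-2
K_B = 1.381e-16        # erg K^-1
M_P = 1.673e-24        # g

M_DISK = 0.01 * M_SUN  # total gas disk mass
R_C = 100 * AU         # characteristic radius
BETA_SIGMA = 1.0       # surface-density slope
T0, Q = 130.0, -3.0 / 7.0
MU_GAS = 2.3 * M_P     # mean molecular mass
M_STAR = 1.0 * M_SUN

def sigma_gas(R):                      # equation (1)
    x = R / R_C
    return (2 - BETA_SIGMA) * M_DISK / (2 * np.pi * R_C**2) \
        * x**(-BETA_SIGMA) * np.exp(-x**(2 - BETA_SIGMA))

def temperature(R):                    # equation (2)
    return T0 * (R / AU)**Q

def gas_scale_height(R):               # Hg = cs / Omega_K
    cs = np.sqrt(K_B * temperature(R) / MU_GAS)
    omega_k = np.sqrt(G * M_STAR / R**3)
    return cs / omega_k

def rho_gas(R, z):                     # equation (3)
    Hg = gas_scale_height(R)
    return sigma_gas(R) / (np.sqrt(2 * np.pi) * Hg) * np.exp(-z**2 / (2 * Hg**2))

# Example: midplane gas density at 10 au.
print(rho_gas(10 * AU, 0.0))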
Fig. 1. Overview showing the self-consistent model determining the vertical dust profile and VSI-driven turbulence intensity in this study. The model considers a protoplanetary disk consisting of gas and dust (section 2.1). Assuming a dust particle size and a vertical diffusion coefficient αz,assume, we compute the vertical profile of dust particles for the single-sized model and the power-law size distribution model (section 2.2). The dust vertical profile determines the vertical profile of the cooling timescale (section 2.3) and thereby localizes the region where the VSI operates (the VSI-unstable layer), using the linear VSI criterion (section 2.4). Based on the empirical formula derived from the hydrodynamical simulations, we estimate the vertical diffusion coefficient of VSI-driven turbulence αz,VSI (section 2.5). Comparing the estimated coefficient αz,VSI with the assumed coefficient αz,assume, we search for an equilibrium vertical dust profile where settling balances with turbulent diffusion for a given grain size (section 2.6). Our parameters are the grain size for the single-sized model or the maximum grain size for the power-law size distribution model, and the distance from the central star (section 2.7).
The isothermal sound speed is given by cs = √(kBT/mg), where kB is the Boltzmann constant and mg is the mean molecular mass of the gas. The Keplerian frequency is given by ΩK = √(GM∗/R³), where G is the gravitational constant and M∗ is the mass of the central star. In this study, mg and M∗ are taken to be 2.3mp and 1 M⊙, respectively, where mp is the proton mass.
2.2 Dust model
We here describe the dust model that we use to calculate the cooling timescale (see section 2.3). The ratio between the dust surface density Σd and Σg is fixed to the interstellar dust abundance of 1%, whereas the local dust-to-gas ratio is allowed to vary with z considering dust settling. We consider two grain size-distribution models. The first model is the single-sized model, where all dust particles are assumed to have equal size a1. Formally, the size distribution for the single-sized model can be written as dNd(a)/da = [3Σd/(4πρint a1³)] δ(a − a1), (4) where dNd(a)/da is the number surface density per unit particle size a, Σd is the total dust mass surface density, ρint is the grains' internal density, and δ is the delta function. Equation (4) satisfies the normalization Σd = ∫ md [dNd(a)/da] da, (5) where md = (4π/3)ρint a³ is the particle mass.
The second model is the power-law model, where the grain size distribution is given by dNd(a)/da = (12 + 3p)Σd / [4πρint (amax^(4+p) − amin^(4+p))] × a^p for amin < a < amax, and dNd(a)/da = 0 otherwise, (6) where p (≠ −4) is the slope of the size distribution, and amin and amax are the minimum and maximum particle sizes, respectively. We fix p = −3.5 and amin = 1 µm. The maximum particle size serves as a free parameter in this study (see also section 2.7). Equation (6) also fulfills equation (5). Dust particles settle toward the midplane owing to stellar gravity and diffuse away from the midplane owing to turbulence. We assume that the turbulent diffusion coefficient is constant in the vertical direction. This approach is valid if the VSI is the dominant source of disk turbulence and determines dust vertical diffusion. Even if there is a linear-VSI-stable layer near the midplane, VSI-driven turbulence can penetrate the stable midplane layer and thereby produce a nearly constant turbulent intensity in the vertical direction (Pfeil & Klahr 2021; Fukuhara et al. 2023). This situation can be realized when the stable midplane layer is thinner than two gas scale heights, or when the unstable layer above the stable midplane layer is thicker than a few gas scale heights (Fukuhara et al. 2023). In this study, the results for all parameter ranges satisfy these conditions. We note, however, that the diffusion coefficient may not be uniform in the vertical direction when these conditions are broken. Assuming the balance between settling and diffusion, the vertical distribution of the particles can be written as (Takeuchi & Lin 2002) dnd(a, z)/da = Cd(a) exp[−z²/(2Hg²) − (Stmid(a)/αz)(exp(z²/(2Hg²)) − 1)], (7) where dnd(a, z)/da is the particle number density per unit radius at height z, Stmid(a) is the Stokes number of the particles at the midplane, αz is a dimensionless parameter that characterizes the level of the dust vertical diffusion caused by turbulence, and Cd(a) is the normalization constant determined by the condition dNd(a)/da = ∫ [dnd(a, z)/da] dz. Because most dust particles lie in the region of z ≪ Hg, the exponential factor in equation (7) can be approximated as exp[−z²/(2Hd²)], yielding (Fukuhara et al. 2021) Cd(a) = [1/(√(2π) Hd(a, αz))] dNd(a)/da. (8) Here, Hd(a, αz) is the scale height of particles with size a, given by (Dubrulle et al. 1995; Youdin & Lithwick 2007) Hd(a, αz) = [1 + Stmid(a)/αz]^(−1/2) Hg. (9) The Stokes number is the product of the stopping time and the Keplerian frequency. Assuming that the particle radius is smaller than the mean free path of the disk gas molecules, gas drag onto the particles follows Epstein's law, which gives (see, e.g., Birnstiel et al. 2010) Stmid(a) = πρint a/(2Σg). (10)
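Continuing the illustrative disk-model sketch above (and reusing its helper functions and constants), equations (9) and (10) translate directly into code. The grain internal density used below is the value adopted later in this section and is repeated here only for the example.

RHO_INT = 1.46                          # g cm^-3, grain internal density (assumed here)

def stokes_midplane(a, R):
    """Midplane Stokes number for Epstein drag, equation (10); a and R in cm."""
    return np.pi * RHO_INT * a / (2.0 * sigma_gas(R))

def dust_scale_height(a, R, alpha_z):
    """Settling-diffusion equilibrium dust scale height, equation (9)."""
    return gas_scale_height(R) * (1.0 + stokes_midplane(a, R) / alpha_z) ** -0.5

# Example: a 10-micron grain at 10 au with a vertical diffusivity of 2e-3.
a_grain = 10.0e-4                       # cm
print(dust_scale_height(a_grain, 10 * AU, 2e-3) / gas_scale_height(10 * AU))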
The vertical-size distribution dnd(a, z)/da gives the collisional heat transfer and opacity, which control the cooling timescale, as a function of z. The mean travel length of gas molecules colliding with dust particles, ℓgd, is given by ℓgd = [∫ πa² (dnd/da) da]^(−1). (11) This determines the timescale of collisional heat transfer (see section 2.3). To calculate the cooling time, we use the Planck mean opacity and the Rosseland mean opacity, defined by κP = [∫₀^∞ κg,λ Bλ(T) dλ] / [∫₀^∞ Bλ(T) dλ], (12) and 1/κR = [∫₀^∞ (1/κg,λ)(∂Bλ(T)/∂T) dλ] / [∫₀^∞ (∂Bλ(T)/∂T) dλ], (13) respectively, where κg,λ is the wavelength-dependent opacity per gas mass, λ is the wavelength, and Bλ(T) is the Planck function. The opacity per gas mass is related to the size distribution of the dust particles as (e.g., Kondo et al. 2023) κg,λ = (1/ρg) ∫ κd,λ(a) md (dnd/da) da, (14) where κd,λ(a) is the opacity per dust mass. Assuming that a dust particle is a uniform sphere, κd,λ(a) can be written as (Kataoka et al. 2014) κd,λ(a) = (πa²/md) × 24nkx/(n² + 2)² for x ≤ 1, and κd,λ(a) = (πa²/md) × min{(8kx/(3n))[n³ − (n² − 1)^(3/2)], 0.9} for x > 1, (15) where x = 2πa/λ is the size parameter and n and k are the real and imaginary parts of the complex refractive index, respectively. We calculate n and k at each wavelength using the Bruggeman mixing rule, assuming that the dust is a mixture of silicate and ice with a mass mixing ratio of 1:1 and no porosity. The values of n and k for silicate and ice are taken from Draine (2003) and Warren & Brandt (2008), respectively. The internal density of the dust grains is then ρint = 1.46 g cm⁻³.
2.3 Cooling model
The spatial profile of the gas disk cooling rate depends on the size and spatial distribution of the dust particles (Malygin et al. 2017; Barranco et al. 2018; Fukuhara et al. 2021). Using the dust model described in section 2.2, we approximate the local cooling (thermal relaxation) timescale τrelax as (Malygin et al. 2017; Pfeil & Klahr 2019) τrelax(z) = max{τdiff(z), τcoll(z), τemit(z)}, (16) where τdiff, τcoll, and τemit are the timescales of radiative diffusion, collisional heat transfer, and radiative cooling, respectively. The cooling time in the optically thick regime is dominated by the radiative diffusion timescale τdiff, which is given by (Malygin et al. 2017) τdiff = 1/(D̄k²), (17) where D̄ and k are the effective energy diffusion coefficient and the wavenumber of the perturbation, respectively. The effective energy diffusion coefficient is given by D̄ = [λr c/(κR(T)ρg)] × 4η/(1 + 3η), (18) where λr is the flux limiter, c is the speed of light, and η is the ratio of the radiation energy density Er to the combined radiation and internal energy density. Here, η is given by η = Er/(Er + Eint) with Er = 4σSB T⁴/c and Eint = ρg CV T, where σSB is the Stefan–Boltzmann constant and CV = 5kB/(2mg) is the specific heat at constant volume. The flux limiter is fixed to λr = 1/3, which is the value for the optically thick limit (Levermore & Pomraning 1981). For k, we assume that the thermal perturbation is determined by the wavenumber of the VSI-driven turbulence structure. The VSI unstable modes emerge when the radial perturbation wavenumber is larger than the vertical one (e.g., Arlt & Urpin 2004), and the radial wavenumber typically takes a value of ∼20/Hg in previous simulations of VSI-driven turbulence (see the appendix of Pfeil & Klahr 2021); note that the radial wavenumber of the VSI-driven turbulence structure can depend on the density and opacity, as the hydrodynamical simulations including radiative transport by Stoll & Kley (2014) have shown that an increase in the density leads to a smaller radial wavelength of the turbulence structure. Therefore, we set k = 20/Hg. The timescale of collisional heat transfer is given by (Fukuhara et al. 2021) τcoll = ℓgd/vth, (19) where vth is the mean relative velocity between the gas molecules and dust particles. The relative velocity vth can be approximated as the mean thermal speed of the molecules, vth = √(8kBT/(πmg)). (20) We note that this collisional timescale is an approximation to the actual thermal accommodation timescale of the gas molecules and dust particles, which differs by a factor of γ/(γ − 1) (Burke & Hollenbach 1983; Barranco et al. 2018; Pfeil et al. 2023), where γ is the heat capacity ratio. The radiative cooling timescale τemit in the optically thin limit is given by (Malygin et al. 2017) τemit = CV/(16κP(T)σSB T³). (21) Both ℓgd and κP depend on the local size distribution of the dust particles (see section 2.2).
2.4 Defining the VSI-unstable layer
The cooling timescale determines the VSI-unstable layer because rapid cooling reduces the buoyancy that prevents instability growth (e.g., Nelson et al. 2013). Following Lin & Youdin (2015), the criterion for the VSI can be expressed as τrelax(z) ≲ τcrit. (22) Here, τcrit is the vertically global critical cooling timescale defined by τcrit = (Hg/R) [|q|/(γ − 1)] ΩK⁻¹, (23) with γ = 1.4. By applying the VSI criterion of equation (22) to each point in the disk, we search for the VSI-unstable and -stable layers. We apply this vertically global criterion instead of the local criterion [equation (4) in Lin & Youdin 2015] because the intensity of VSI-driven turbulence is tightly correlated with the thicknesses of the VSI-unstable and -stable layers predicted by the global criterion (Fukuhara et al. 2023, see also section 2.5). The unstable and stable layers exist on the upper (z > 0) and lower (z < 0) halves of the disk, symmetrically about the midplane (z = 0; see the middle panel of figure 1). For the upper half, we determine the height of the unstable layer's upper boundary, zu, and the height of the midplane stable layer's upper boundary, zs. When the midplane stable layer is absent, we set zs = 0. Following Fukuhara et al. (2023), we also define the thicknesses of the linearly stable and unstable layers as ΔLs = 2zs and ΔLu = 2zu − ΔLs, respectively (see figure 1).
2.5 Estimating dust vertical diffusion coefficient
The thicknesses of the VSI-unstable and -stable layers can be used to determine the vertical diffusion of dust particles (Fukuhara et al. 2023). VSI-driven turbulence generally exhibits vertical gas motion that is nearly uniform in the vertical direction (e.g., Pfeil & Klahr 2021; Fukuhara et al. 2023). Therefore, we estimate the dimensionless dust vertical diffusion coefficient of VSI-driven turbulence, αz,VSI, as αz,VSI = (⟨vz²⟩/cs²) τcorr ΩK, (24) where ⟨vz²⟩ is the time-averaged squared gas vertical velocity at the midplane and τcorr is the correlation time of VSI-driven turbulence. The correlation time of VSI-driven turbulence is still indeterminate, ranging from τcorrΩK ∼ 0.2 to ∼20 (Stoll & Kley 2016; Flock et al. 2020; see also section 4.1 of Fukuhara et al. 2023). In this study, we fix τcorrΩK = 1.0. Global axisymmetric simulations by Fukuhara et al. (2023) show that the mean squared gas vertical velocity ⟨vz²⟩ in VSI-driven turbulence is tightly correlated with the thicknesses of the VSI-unstable and -stable layers. We estimate ⟨vz²⟩ from ΔLu and ΔLs using an empirical formula provided by Fukuhara et al. (2023), ⟨vz²⟩/cs² = fT(ΔLu, ΔLs) + fpT(ΔLu, ΔLs), (25) where fT and fpT represent the dependence of ⟨vz²⟩ on ΔLu and ΔLs. For fT and fpT, we use equations (17) and (19) of Fukuhara et al. (2023), which represent the sharp decrease of ⟨vz²⟩ for ΔLu ≲ 2Hg and ΔLs ≳ 2Hg.
Fig. 2. Dimensionless dust vertical diffusion coefficient of VSI-driven turbulence αz,VSI [equation (24)] as a function of ΔLu and ΔLs. The dashed lines show αz,VSI = 10⁻³, 10⁻⁴, and 10⁻⁵.
In figure 2, we plot αz,VSI [equation (24)] as a function of ΔLu and ΔLs. When ΔLu ≳ 2Hg and ΔLs ≲ 2Hg, equation (24) predicts αz,VSI ≈ 2 × 10⁻³; otherwise, αz,VSI approaches zero.
2.6 Calculation procedure
The vertical dust distribution depends on the turbulent diffusion coefficient αz [equation (7)]. On the other hand, if we assume that the VSI is the main driver of disk turbulence, its diffusion strength αz,VSI depends on the disk's cooling structure [equation (24)] and in turn on the vertical dust distribution. In general, the αz,VSI produced by VSI-driven turbulence under a given vertical dust distribution does not necessarily match the αz required to maintain that dust distribution. We search for an equilibrium state where the two turbulence strengths match in the following steps. 1. For a given grain size distribution and radial position R, use equation (7) to generate vertical dust profiles for various values of the trial turbulence strength αz = αz,assume. The trial values are generated by dividing the range 10⁻⁸ ≤ αz,assume ≤ 10⁻² into grids of an equal logarithmic interval of 0.05. 2. For each value of αz,assume, use the cooling model [equation (16)], the VSI criterion [equation (22)], and the empirical formula for the VSI-driven turbulence strength [equation (24)] to evaluate αz,VSI. 3. Interpolate αz,VSI as a smooth function of αz,assume, and search for self-consistent equilibrium solutions where the predicted VSI-driven turbulence strength αz,VSI equals the assumed turbulence strength αz,assume. Below, we denote the vertical diffusion coefficient for an equilibrium solution by αz,equi. If αz,VSI < αz,assume for all values of αz,assume, VSI-driven turbulence cannot be strong enough to sustain the assumed dust vertical distribution, meaning that it cannot stop grain settling.
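As an illustration of steps 1-3, the sketch below scans the trial diffusivities on the same logarithmic grid and locates the crossings where the predicted and assumed values agree. The full dust-profile, cooling, and layer computation of steps 1-2 is abstracted into a user-supplied callable, and the log-space linear interpolation used to locate the crossings is an assumption of this sketch.

import numpy as np

def find_equilibria(alpha_vsi_of_alpha_assume, log_min=-8.0, log_max=-2.0, d_log=0.05):
    """Scan trial diffusivities alpha_z,assume on a log grid, evaluate the VSI-driven
    diffusivity alpha_z,VSI implied by each trial value, and return the crossings
    where alpha_z,VSI = alpha_z,assume (the equilibrium solutions)."""
    log_assume = np.arange(log_min, log_max + 1e-12, d_log)
    alpha_assume = 10.0 ** log_assume
    alpha_vsi = np.array([alpha_vsi_of_alpha_assume(a) for a in alpha_assume])
    resid = np.log10(np.maximum(alpha_vsi, 1e-300)) - log_assume   # sign changes bracket roots
    equilibria = []
    for i in range(len(resid) - 1):
        if resid[i] == 0.0:
            equilibria.append(alpha_assume[i])
        elif resid[i] * resid[i + 1] < 0.0:
            frac = resid[i] / (resid[i] - resid[i + 1])             # linear interpolation of the residual
            equilibria.append(10.0 ** (log_assume[i] + frac * d_log))
    return equilibria   # an empty list corresponds to runaway settling (no equilibrium)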
We call this situation runaway dust settling. For every set of grain size and R, we repeat this procedure and search for equilibrium solutions. We ignore the possibility that VSI-driven turbulence revives before the dust settles toward the midplane in this runaway fashion. This assumption is valid if the dust-settling timescale is longer than the growth timescale of the turbulence. The settling timescale can be estimated as ∼ z/|vz| ≈ Stmid⁻¹ ΩK⁻¹ (e.g., Dubrulle et al. 1995; Youdin & Lithwick 2007), where vz ≈ −StmidΩK z is the vertical velocity of dust particles in the terminal velocity approximation. The typical growth timescale of VSI-driven turbulence is ∼10² ΩK⁻¹ (e.g., Nelson et al. 2013), which is longer than the settling timescale for large dust grains with Stmid ≳ 10⁻². However, for small grains with Stmid ≲ 10⁻², the growth timescale is shorter than the settling timescale, suggesting re-development of VSI-driven turbulence. In this case, weak turbulence may operate before the dust settles.
2.7 Parameter choices
Our model involves two main parameters: the size of the dust grains, a1, for the single-sized model or the maximum size of the dust grains, amax, for the power-law size distribution model, and the distance from the central star, R. We divide the parameter ranges 1 µm < a < 1 cm or 1 µm < amax < 1 cm and 1 au < R < 100 au into grids of an equal logarithmic interval of 0.1.
3 Results
In this section, we use the model described in section 2 to search for equilibrium solutions where dust settling balances with VSI-driven turbulent diffusion. In section 3.1, we describe the properties of the equilibrium solutions and their dependence on the dust grain size and radial distance for the single-sized model. In section 3.2, we show the equilibrium solutions for the power-law size distribution model.
3.1 Equilibrium solutions for the single-sized model
First, we use the single-sized model at R = 10 au to illustrate the properties of the equilibrium solutions. Following the procedure described in section 2.6, we search for the equilibrium vertical diffusion coefficient αz,equi for different values of the single grain size a1. The upper panel of figure 3 shows αz,equi for each a1. We find that when a1 ≲ 10 µm, there exist two equilibrium solutions. The equilibrium solutions vanish when the grain size exceeds ∼10 µm. We also plot in the lower panel of figure 3 the dust scale height for the equilibrium solutions, defined by Hd,equi(a) ≡ Hd(a, αz,equi), as a function of a1.
Fig. 3. Equilibrium vertical diffusion coefficient αz,equi (upper panel) and the corresponding dust scale height (lower panel) for different values of the dust grain size at R = 10 au in the single-sized model. The symbols indicate stable (circles) and unstable (crosses) equilibrium solutions. The dashed lines show a constant value of αz = 2 × 10⁻³. The dotted lines in the lower panel represent the dust scale height for αz = 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶, and 10⁻⁷.
Figure 4 illustrates two cases at R = 10 au, with a1 = 10 µm and 100 µm, representing the cases with two and no equilibrium solutions, respectively. The upper panel plots αz,VSI as a function of αz,assume. For a1 = 10 µm, αz,VSI drops sharply from ≈2 × 10⁻³ to ≪10⁻³ as αz,assume falls below ∼10⁻⁴. A decrease in αz,assume leads to dust depletion at high altitudes, yielding a decrease in the thickness of the VSI-unstable layer. This decrease in the unstable layer's thickness results in a strong suppression of VSI-driven turbulence around ΔLu ≈ 2Hg (see figure 2). For a1 = 100 µm, αz,VSI takes lower values of ≲10⁻¹⁰. This is because the VSI-unstable layer is thin (ΔLu ≲ 2Hg) for any value of αz,assume due to dust vertical settling, resulting in suppression of VSI-driven turbulence. Equilibrium solutions correspond to the points where αz,VSI = αz,assume. For a1 = 10 µm, two equilibrium solutions exist, at αz,equi ≈ 2 × 10⁻³ and 2 × 10⁻⁵. For a1 = 100 µm, no equilibrium solution exists because αz,VSI < αz,assume for all values of αz,assume. This implies that vertical diffusion by VSI-driven turbulence alone cannot sustain a dust vertical distribution of any scale height in the latter case. To understand the results presented in the upper panel of figure 3 in terms of the competition between vertical settling and turbulent diffusion, we consider a time-dependent toy model where the vertical distribution of dust grains evolves through settling and diffusion. The equation that describes the evolution of the squared mean of the grains' vertical positions, ⟨z²⟩, is given by (for a derivation, see appendix 1) (1/2) d⟨z²⟩/dt = −StmidΩK⟨z²⟩ + Dz(αz,VSI) [1 − ⟨z²⟩/Hg²], (26) where Dz(αz) = αz cs Hg is the dust vertical diffusion coefficient. Below we simply call ⟨z²⟩^(1/2) the dust scale height. On the right-hand side of equation (26), the first term represents settling toward the midplane at the terminal vertical velocity −StmidΩK z. The second term corresponds to the vertical diffusion by VSI-driven turbulence in a disk of gas scale height Hg, with the factor (1 − ⟨z²⟩/Hg²) guaranteeing that the dust scale height never exceeds Hg in the limit of strong diffusion (Ciesla 2010). It can be shown from equation (9) that our single-sized model assumes ⟨z²⟩ = [Hd(a, αz,assume)]² = [1 + Stmid(a)/αz,assume]⁻¹ Hg² (27) for an arbitrary value of αz,assume. Substituting this into equation (26), we find that d⟨z²⟩/dt = 0 if αz,assume = αz,VSI, meaning that dust settling and diffusion indeed balance in the equilibrium solutions defined in section 2.6. This toy model also allows us to predict what would occur when αz,assume and αz,VSI are unequal. For example, if αz,assume > αz,VSI (i.e., if VSI-driven turbulence is not strong enough to sustain the vertical dust distribution), equation (26) shows that settling dominates over diffusion and the dust scale height decreases. We now use equation (26) to explain why either two or no equilibrium solutions emerge depending on the value of a1. The lower panel of figure 4 shows d⟨z²⟩/dt as a function of ⟨z²⟩ for a1 = 10 and 100 µm. In general, there exists one equilibrium solution for a constant value of Dz because d⟨z²⟩/dt is then a monotonically decreasing function of ⟨z²⟩, i.e., diffusion (settling) tends to dominate at small (large) dust scale heights. This is illustrated by the dashed curves, which show d⟨z²⟩/dt in the case where turbulence with αz,VSI ≈ 2 × 10⁻³ is present independently of ⟨z²⟩. For a1 = 10 µm, the two equilibrium solutions have dust scale heights of ⟨z²⟩^(1/2) ∼ Hg and 0.3Hg. Comparison with the dashed curve shows that the solution with the higher ⟨z²⟩^(1/2) corresponds to the equilibrium solution for a constant diffusion coefficient with αz,VSI ≈ 2 × 10⁻³. The solution with the lower ⟨z²⟩^(1/2) emerges because VSI-driven turbulent diffusion is suppressed for small dust scale heights, i.e., for small αz,assume, as mentioned earlier. For a1 = 100 µm, d⟨z²⟩/dt is negative for all dust scale heights, indicating runaway dust settling.
Fig. 4. Upper panel: VSI-driven vertical diffusion coefficient αz,VSI at R = 10 au as a function of αz,assume from the single-sized model with a1 = 10 µm and 100 µm. The dotted line represents αz,VSI = αz,assume. The circle and cross symbols mark stable and unstable equilibrium solutions, respectively. Lower panel: d⟨z²⟩/dt as a function of ⟨z²⟩ for the two cases presented in the upper panel. The dashed lines show cases with a constant vertical diffusion coefficient of αz = 2 × 10⁻³.
Equation (26) can also be used to predict whether the equilibrium solutions are stable against a small perturbation in the dust scale height. In general, equilibrium solutions can be classified into stable and unstable ones. In this context, stable (unstable) solutions are the ones for which d⟨z²⟩/dt becomes negative (positive) as we slightly increase ⟨z²⟩ from the equilibrium value, so that the perturbation decays (diverges). In other words, stable (unstable) solutions have a negative (positive) slope of d⟨z²⟩/dt as a function of ⟨z²⟩. In the case of a1 = 10 µm, the equilibrium solution with the larger ⟨z²⟩ is stable, whereas that with the smaller ⟨z²⟩ is unstable (see figure 4).
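The right-hand side of equation (26), together with the sign-of-slope stability check described above, can be written compactly as in the following sketch; passing the αz,VSI(⟨z²⟩) dependence in as a callable is an assumption about how one would wire this toy model to the rest of the framework.

def dz2_dt(z2, st_mid, omega_k, Hg, cs, alpha_vsi_of_z2):
    """Right-hand side of equation (26): time derivative of <z^2> in the
    settling-diffusion toy model. alpha_vsi_of_z2 maps the current <z^2>
    to the VSI-driven diffusivity."""
    Dz = alpha_vsi_of_z2(z2) * cs * Hg        # Dz(alpha_z) = alpha_z * cs * Hg
    settling = -st_mid * omega_k * z2         # terminal-velocity settling term
    diffusion = Dz * (1.0 - z2 / Hg**2)       # diffusion term, capped at <z^2> = Hg^2
    return 2.0 * (settling + diffusion)

def is_stable(z2_eq, eps=1e-3, **kwargs):
    """An equilibrium is stable if d<z^2>/dt turns negative when <z^2> is
    nudged slightly above the equilibrium value (a negative local slope)."""
    return dz2_dt(z2_eq * (1.0 + eps), **kwargs) < 0.0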
Next, we describe how the stable equilibrium solution varies with radial distance R. Figure 5 shows αz,equi for the stable solution as a function of R and a1 for the single-sized model. The black area in the R–a1 plane indicates the parameter space where no equilibrium solution exists and runaway dust settling is expected. The figure shows that equilibrium solutions only exist beyond R ∼ 3 au. At R ≲ 3 au, the midplane region is significantly optically thick to its own thermal emission, yielding a thick VSI-stable layer (ΔLs ≳ 2Hg) around the midplane.
Fig. 5. Vertical diffusion coefficients αz,equi for the stable equilibrium solution from the single-sized model as a function of radial distance R and grain size a1. The black area indicates the parameter space where no equilibrium solution exists. In this area, αz,VSI is lower than αz,assume for all αz,assume, implying runaway dust settling. The dotted lines mark, from top to bottom, Stmid(a1) = 10⁻¹, 10⁻², 10⁻³, and 10⁻⁴.
We map in figure 6 the unstable and stable layers for a1 = 10 µm with αz,assume = 2 × 10⁻³ from the single-sized model in the R–z plane. This figure indicates that the thickness of the stable midplane layer increases sharply as R decreases from ∼3 au. This thick stable layer prevents VSI-driven turbulence from developing (αz,VSI ≪ 10⁻³; see figure 2), resulting in runaway dust settling (αz,VSI < αz,assume for all values of αz,assume).
Fig. 6. Radial profiles of the height of the unstable layer's upper boundary (circles), zu, and the height of the midplane stable layer's upper boundary (crosses), zs, for a1 = 10 µm with αz,assume = 2 × 10⁻³ from the single-sized model. The regions zs < z < zu and 0 < z < zs indicate the unstable and midplane stable layers, respectively. The left and right vertical dashed lines mark ΔLs(≡ 2zs) ≈ 2Hg and ΔLu(≡ 2zu − ΔLs) ≈ 2Hg, respectively.
For the cases of large dust, the optical depth decreases and the VSI-stable layer becomes thin. However, because the dust settles vertically, the VSI-unstable layer is also thin (ΔLu ≲ 2Hg), leading to suppression of VSI-driven turbulence. The stable equilibrium solutions are generally accompanied by fully developed VSI-driven turbulence with αz,VSI ≈ 2 × 10⁻³. This is reasonable because any vulnerable turbulence whose strength decreases steeply with decreasing dust scale height would lead to an unstable solution. At R ≳ 3 au, equilibrium solutions exist for sufficiently small a1, as we already showed in the particular case of R = 10 au. The maximum grain size for equilibrium decreases with increasing R. This is because the dust number density decreases as R increases: decreasing dust density leads to a longer cooling time, which makes the VSI-unstable layers vertically thinner (Malygin et al. 2017; Pfeil & Klahr 2019; Fukuhara et al. 2021). Figure 7 shows the radial profiles of the dust scale height from the stable equilibrium solutions, Hd,equi(a1), for different values of a1. We set Hd,equi(a1) to zero if runaway settling occurs. As mentioned above, the stable solutions are generally accompanied by fully developed VSI-driven turbulence with αz,equi ≈ 2 × 10⁻³. For reference, the dotted lines in the figure show the dust scale height for αz,equi = 2 × 10⁻³. Because the stable solutions require small a1, they always lead to a thick dust disk with Hd,equi(a1) ≈ Hg.
Fig. 7. Radial profiles of the dust scale height from the stable equilibrium solution for the single-sized dust model with different values of a1. The dotted lines show the dust scale height for the fixed vertical diffusion coefficient of αz = 2 × 10⁻³.
Because the dust scale height is a function of Stmid(a1), we can expect that the value of Stmid(a1), rather than a1, more directly determines whether equilibrium solutions exist. To confirm this expectation, we overplot in figure 5 contours of constant Stmid(a1). We find that the equilibrium solution at R ≳ 3 au vanishes for Stmid(a1) ≳ 3 × 10⁻⁴.
Fig. 8. Same as figure 5, but from the power-law size distribution model. The dotted lines mark, from top to bottom, Stmid(amax) = 10⁻¹, 10⁻², 10⁻³, and 10⁻⁴.
3.2 Equilibrium solutions for the power-law size distribution model
The grain size distribution affects the cooling timescale profile. Of the three timescales determining the cooling timescale (see section 2.3), the timescales of collisional heat transfer τcoll and radiative diffusion τdiff determine the thicknesses of the unstable and midplane stable layers, respectively. For the power-law size distribution with a slope of p = −3.5, the largest grains dominate the dust mass budget, whereas the smallest grains dominate the total geometric cross-section and thereby control collisional heat transfer. Therefore, even if large grains exist, small grains can still sustain the unstable layer. Figure 8 illustrates how the small grains in the power-law size distribution model extend the parameter space where strong VSI-driven turbulence is sustained. This figure shows αz,equi for the stable equilibrium solutions as a function of R and amax for the power-law size distribution model. At R = 10 au, the equilibrium solutions exist up to amax = 100 µm. This is in contrast to the single-sized model, which only allows equilibrium solutions up to 10 µm at the same radial position (see figure 5). In terms of the Stokes number, the power-law size distribution model allows equilibrium solutions up to Stmid(amax) ≈ 10⁻³–10⁻², depending on R. As in the single-sized model, the stable solution is accompanied by fully developed VSI-driven turbulence with αz,equi ≈ 2 × 10⁻³. At R ≲ 3 au or at sufficiently large amax, the equilibrium solution vanishes and runaway settling (αz,assume > αz,VSI) occurs for the same reason as in the single-sized model. Figure 9 plots the dust scale height for the stable equilibrium solution as a function of R from the power-law size distribution model with different values of amax. As in figure 7, we set Hd,equi = 0 for runaway settling. For amax = 10 µm, VSI-driven turbulence maintains a thick dust layer of Hd,equi ≳ 0.8Hg at 2 au ≲ R ≲ 40 au.
As amax increases, the region with the stable equilibrium solution shrinks, and the equilibrium dust scale height within that region decreases because the settling velocity increases while keeping \u03b1z,equi \u22482\u00d710\u22123. For amax > 1 mm, the equilibrium solution vanishes at all R. Fig. 9. Same as figure 7, but from the power-law size distribution model. 4 Discussion 4.1 Implications for disk observations In section 3, we have shown that the dust grain size and radial distance determine whether the VSI can sustain high dust diffusion. VSI-driven turbulence can maintain a thick dust layer of Hd \u2248Hg within an annular region with a moderate optical depth and small grains. However, when both the dust size and radial distance are large, VSI-driven turbulence is suppressed, resulting in runaway dust settling leading to Hd \u22480. These differences in dust settling levels may provide an explanation for varying degrees of dust settling as inferred from recent radio interferometric observations of some protoplanetary disks. The disks around HL Tau and Oph 163131 exhibit well-defined dust gaps, limiting the dust scale height to \u22720.1Hg at R \u227310 au (Pinte et al. 2016) and at R \u2248100 au (Villenave et al. 2022), respectively. These estimated dust scale heights are in line with runaway dust settling in the outer disk regions predicted in this study. In contrast, the disk around HD 163296 has two major dust rings at 70 and 100 au that show high and low degrees of dust settling with Hd/Hg \u22730.8 and \u22720.1, respectively (Doi & Kataoka 2021). This difference in dust diffusion by radial distance is consistent with the trend in the radial profile of dust scale heights predicted by our model (see figures 7 and 9). Therefore, we hypothesize that VSI-driven turbulence dominates vertical dust diffusion in these disks, and that this turbulence is suppressed in the outer disk regions. Testing this hypothesis requires detailed modeling of gas, dust, and cooling rate profiles in these disks. 4.2 Implications for disk and dust evolution In section 3, we have shown that there exists a parameter space where VSI-driven turbulence is too weak to stop dust settling. This occurs either when the VSI-stable layer at the midplane is too thick or when the VSI-unstable layer is too thin for the VSI to operate. The former condition is met at small orbital radii (R \u22723 au), whereas the latter condition is met when the grains grow beyond a certain size. This runaway settling is beneficial for dust growth and planetesimal formation. A high degree of dust settling generally promotes planetesimal formation through the streaming and gravitational instabilities (e.g., Sekiya 1998; Youdin & Shu 2002; Johansen et al. 2009; Gole et al. 2020; Umurhan et al. 2020; Chen & Lin 2020). Weak turbulence leading to high dust concentration is also conducive to dust growth through coagulation without collisional fragmentation and erosion (e.g., Brauer et al. 2008; Okuzumi & Hirose 2012). Figures 7 and 9 imply that strong dust diffusion by the VSI tends to occur in an annular region where the optical depth is moderate (see also figure 6). 
The vertically extended dust in this annulus may block the radiation of the central star and thereby cast a shadow beyond the annulus (e.g., Dullemond et al. 2001; Dullemond & Dominik 2004). The shadowing generally results in a significant drop in temperature and therefore can affect the chemical evolution in that region (Ohno & Ueda 2021; Notsu et al. 2022). The inner and outer edges of the VSI-turbulence zone also serve as potential sites for planetesimal formation. A steep radial change in turbulence viscosity at the edges may trigger the Rossby wave instability (Lovelace et al. 1999; Li et al. 2000, 2001) and create a long-lived vortex, promoting dust concentration and subsequent planetesimal formation through gravitational collapse (e.g., Barge & Sommeria 1995). The sharp drop in turbulent diffusivity at the inner edge of the VSI-driven turbulence zone can also trigger a runaway pile-up of dust grains (Hyodo et al. 2021, 2022) because the increase of dust-to-gas ratio at the midplane reduces the radial drift velocity of dust. Moreover, if the turbulent viscosity caused by the VSI dominates gas disk accretion, the outer edge of VSI-driven turbulence may create a local maximum in the radial profile of the gas pressure because the turbulent viscosity decreases sharply toward the outside. This is similar to the mechanism by which a pressure maximum is generated near the dead-zone inner edge of MRI (e.g., Dzyurkevich et al. 2010; Flock et al. 2016, 2017a). The pressure maximum can trap dust particles (Whipple 1972; Adachi et al. 1976; Weidenschilling 1977). All these dust concentration processes could lead to planetesimal formation via the streaming and the gravitational instabilities (e.g., Youdin & Goodman 2005; Johansen & Youdin 2007; Johansen et al. 2009; Carrera et al. 2015; Yang et al. 2017). We plan to quantify the impact of these dust concentration processes on planetesimal Publications of the Astronomical Society of Japan, (2014), Vol. 00, No. 0 11 formation by conducting hydrodynamical simulations around the edge of the VSI-driven turbulence zone. 4.3 Effects of minimum grain size, grain size distribution, and disk mass The results presented in section 3.2 depend on the minimum grain size and slope of the size distribution because the small grains dominate the cooling rate. In this study, we have fixed the minimum size of grains and slope of grain size distribution to amin = 1 \u00b5m and p = \u22123.5, respectively. Smaller dust grows quickly through Brownian motion (Birnstiel et al. 2011), but its size limit is uncertain, approximately ranging from 0.1 to 1 \u00b5m. As amin decreases, the cooling timescale decreases, leading to a vertically more extended VSI-unstable layer. The grain size distribution limited by the radial drift can also become significantly steeper (e.g., Birnstiel et al. 2011; Stammler & Birnstiel 2022; Birnstiel 2023), with p reaching approximately \u22122.5. As p increases, cooling would be less efficient because the number of the smallest grains is smaller, leading to more suppressed the VSI. Furthermore, variations in the disk mass can alter the cooling rate profile. In this study, we have fixed the disk mass to Mdisk = 0.01M\u2299. The disk mass can depend on disk age (Cazzoletti et al. 2019; Testi et al. 2022); in particular, the disk mass of the young disk around HL Tau can be estimated as Mdisk \u223c0.1M\u2299(Kwon et al. 2015). 
The massive disk would extend the parameter space where the equilibrium solutions exist because the larger disk mass can make cooling more efficient, leading to the radial and vertical expansion of the VSI-unstable layers (see section 4.3 and figure 10 of Fukuhara et al. 2021). However, as the disk mass increases, the region with no equilibrium solutions due to the thick stable layer around the midplane, corresponding to the regions with R \u22723 au for figures 5 and 8, would be more extended to larger radial distance. This is because as the dust density increases, optical depth increases, leading to inefficient cooling. 4.4 Limitations of the model So far we have assumed the local model in the radial direction, meaning that gas and dust remain stationary radially. However, VSI-driven turbulence can diffuse dust radially, altering its spatial distribution (Stoll & Kley 2016; Flock et al. 2020; Dullemond et al. 2022). This effect implies that the edges of the VSI-driven turbulence zone evolve with dust. Quantifying this effect should be studied in hydrodynamical simulations that include the dynamic and thermal coupling between gas and dust. In the region where dust settles in a runaway fashion, other mechanisms that drive disk turbulence may also contribute to the vertical diffusion of dust grains. The MRI can drive gas turbulence in both the inner and outer regions of protoplanetary disks because of the high ionization of gas (for reviews, Turner et al. 2014; Lesur et al. 2023). The streaming instability caused by strong dust settling can also trigger turbulent gas motion (Johansen & Youdin 2007; Yang & Zhu 2021). Turbulence driven by these mechanisms can maintain a vertical dust distribution that is balanced between turbulent diffusion and settling. Moreover, the dust vertical diffusion due to these other mechanisms may revive VSI-driven turbulence through changes in the vertical profiles of dust particles and the cooling rate. To understand this, we should investigate which mechanism dominates turbulence generation in each region of protoplanetary disks. Furthermore, this study ignores the effects of dust, magnetic field, and vertical thermal structure. Dust would increase the effective buoyancy frequency of the gas (Lin & Youdin 2017) that prevents the growth of the linear VSI. The gas\u2013dust drag force can also suppress the VSI and dust vertical diffusion (Lin 2019; Lehmann & Lin 2022, 2023). Moreover, magnetic fields threading the global disk may suppress the VSI either directly through magnetic tension or indirectly through MRI turbulence (Nelson et al. 2013; Latter & Papaloizou 2018; Cui & Bai 2020). The roles of magnetic fields in the VSI suppression can be positive or negative depending on non-ideal magnetohydrodynamical effects (ambipolar diffusion, Ohmic resistivity, and Hall effect; Cui & Bai 2020, 2022; Cui & Lin 2021; Latter & Kunz 2022). Additionally, Zhang et al. (2024) recently found that vertical thermal stratification with a colder interior and a hotter surface can suppress VSI-driven turbulence around the midplane. They may change the levels of the equilibrium vertical dust profile. 5 Summary We have searched for the equilibrium vertical dust profile where settling balances with diffusion caused by VSI-driven turbulence. We construct the semi-analytic model that determines the vertical profile of dust grains and the intensity of VSI-driven turbulence in a self-consistent manner (figure 1). Our key findings are summarized as follows. 1. 
We find that there exist equilibrium solutions where dust settling balances with VSI-driven turbulent diffusion for small grains (figure 3). If we assume that all grains have equal size, there exist two equilibrium solutions when the single grain size is smaller than 10 \u00b5m at 10 au. If the grain size exceeds \u223c10 \u00b5m, the equilibrium solutions vanish. 2. For the cases of small grains, two equilibrium solutions are classified into stable and unstable ones [equation (26) and figure 4]. The stable ones correspond to the dust scale height of \u03b1z,equi \u22482 \u00d7 10\u22123, where \u03b1z,equi is the equilibrium dimensionless vertical diffusion coefficient. For the cases of large grains, no equilibrium solutions indicate runaway dust settling. 3. The existence of the equilibrium solutions depends on the 12 Publications of the Astronomical Society of Japan, (2014), Vol. 00, No. 0 radial distance R as well as dust size (figure 5). The equilibrium solutions only exist beyond R \u223c3 au because the midplane region at R \u22723 au is optically thick, yielding a thick VSI-stable layer (figure 6). The maximum grain size that allows for the equilibrium solutions also decreases with increasing R. Because the equilibrium solutions require a small grain size, they lead to a thick dust disk with Hd,equi \u2248 0.8Hg (figure 7), where Hd,equi is the dust scale height with the equilibrium solutions. 4. If the particle size distribution is assumed to follow a power law, the small grains extend the parameter space where strong VSI-driven turbulence is sustained (figure 8). At 10 au, the equilibrium solutions exist up to amax = 100 \u00b5m, where amax is the maximum size of dust grains. For amax = 10 \u00b5m in the entire disk, VSI-driven turbulence maintains a thick dust layer of Hd,equi \u22730.8Hg at 2 au \u2272R \u227250 au (figure 9). Our results suggest that dust diffusion by VSI-driven turbulence has different levels depending on the radial distance. This variation may explain the different degrees of dust settling inferred from observations of some protoplanetary disks. This implies that VSI-driven turbulence plays a dominant role in vertical dust diffusion within these disks. Testing this hypothesis requires a more quantitative investigation of these disks\u2019 cooling rate structure. Acknowledgments We thank Tomohiro Ono for discussions that motivated this project. We also thank Akimasa Kataoka, Kiyoaki Doi, Hidekazu Tanaka, Mario Flock, and Takahiro Ueda for the useful discussions of applications to observed protoplanetary disks. We appreciate the anonymous referee for comments that greatly helped improve the manuscript. This work was supported by JSPS KAKENHI Grant Numbers JP20H01948, JP20H00182, JP22KJ1337, JP23H01227, and JP23K25923." 
+ }, + { + "url": "http://arxiv.org/abs/2404.09425v1", + "title": "Super-resolution of biomedical volumes with 2D supervision", + "abstract": "Volumetric biomedical microscopy has the potential to increase the diagnostic\ninformation extracted from clinical tissue specimens and improve the diagnostic\naccuracy of both human pathologists and computational pathology models.\nUnfortunately, barriers to integrating 3-dimensional (3D) volumetric microscopy\ninto clinical medicine include long imaging times, poor depth / z-axis\nresolution, and an insufficient amount of high-quality volumetric data.\nLeveraging the abundance of high-resolution 2D microscopy data, we introduce\nmasked slice diffusion for super-resolution (MSDSR), which exploits the\ninherent equivalence in the data-generating distribution across all spatial\ndimensions of biological specimens. This intrinsic characteristic allows for\nsuper-resolution models trained on high-resolution images from one plane (e.g.,\nXY) to effectively generalize to others (XZ, YZ), overcoming the traditional\ndependency on orientation. We focus on the application of MSDSR to stimulated\nRaman histology (SRH), an optical imaging modality for biological specimen\nanalysis and intraoperative diagnosis, characterized by its rapid acquisition\nof high-resolution 2D images but slow and costly optical z-sectioning. To\nevaluate MSDSR's efficacy, we introduce a new performance metric, SliceFID, and\ndemonstrate MSDSR's superior performance over baseline models through extensive\nevaluations. Our findings reveal that MSDSR not only significantly enhances the\nquality and resolution of 3D volumetric data, but also addresses major\nobstacles hindering the broader application of 3D volumetric microscopy in\nclinical diagnostics and biomedical research.", + "authors": "Cheng Jiang, Alexander Gedeon, Yiwei Lyu, Eric Landgraf, Yufeng Zhang, Xinhai Hou, Akhil Kondepudi, Asadur Chowdury, Honglak Lee, Todd Hollon", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Biomedical microscopy is an essential imaging method and diagnostic modality in clinical medicine and biomed- ical research. Digital pathology and whole-slide images (WSIs) are now ubiquitous in computational pathology, leading to an increased role of computer vision and ma- XY Low resolution volume Super-resolved volume YZ XZ XY YZ XZ Low resolution cross sections High resolution cross sections Figure 1. Super-resolution of biomedical volumes with 2D su- pervision. Volumetric microscopy images have a data distribu- tion agnostic to tissue orientation and spatial dimension. Here, we present a method for leveraging this intrinsic characteristic by super-resolving low-resolution volumes using a conditional diffu- sion model trained on 2D high-resolution images. chine learning-based approaches for analyzing microscopy data. Recent research has shown that 3-dimensional (3D) volumetric microscopy can improve the diagnostic yield of surgical specimens and diagnostic accuracy of both human pathologists and computational pathology models [14, 53]. Diagnostic histoarchitectural and cytologic structures are three-dimensional, such as chromatin structure, microvilli, and perivascular rosette formation [2]. 
The major barri- ers to integrating 3D volumetric microscopy into clinical medicine and biomedical research are (1) long imaging times, (2) poor depth (z-plane) resolution, and (3) insuffi- cient high-quality 3D volumetric data. Importantly, high- quality, high-resolution, open-source 2D microscopy data is abundantly available, such as images from The Cancer 1 arXiv:2404.09425v1 [eess.IV] 15 Apr 2024 Genome Atlas (TCGA), The Digital Brain Tumor Atlas (DBTA) [42], and OpenSRH [22]. Here, we explore the open computer vision question of how to use high-resolution 2D microscopy data alone to improve the resolution of 3D volumetric microscopy, es- pecially in the low-resolution z-plane or depth axis. We introduce masked slice diffusion for super-resolution (MS- DSR), which leverages the observation that all three spatial dimensions share the same underlying data-generating dis- tribution for biological specimens. For example, biological specimens sampled at the time of surgery for cancer diagno- sis lack a spatial orientation and, therefore, microscopy im- ages obtained in any imaging plane are valid images for can- cer diagnoses. The lack of orientation (e.g. up-down, left- right, front-back) allows super-resolution models trained in any given 2D plane, such as XY, to generalize to any other, such as XZ or YZ. We evaluate MSDSR using a label-free optical imaging modality that is used for biological specimen analysis and intraoperative diagnosis, called stimulated Raman histology (SRH) [12]. SRH is ideally suited for volumetric super- resolution because high-resolution 2D images are readily obtained, but z-sections through the depth of the specimen are slow and costly to obtain. Our major contributions are: 1. We introduce MSDSR, a conditional diffusion-based, 3D volumetric super-resolution method that only requires 2D supervision. 2. We introduce a new volumetric, unpaired, perceptual quality metric, SliceFID. 3. MSDSR outperforms both interpolation and UNet base- lines on image quality metrics, including SliceFID.", + "main_content": "2.1. Denoising diffusion models Generating high-fidelity, high-resolution images is a challenging task in computer vision. Generative models have recently gained popularity and media attention for natural image generation given a prompt or condition [41, 45]. In particular, Denoising Diffusion Probabilistic Models (DDPMs) [18] have shown state-of-the-art results on image synthesis [8], using a UNet architecture to iteratively transform random noise into the learned data distribution. However, these models suffer from heavy computational requirements compared to earlier methods such as variational autoencoders (VAEs) [26] and generative adversarial networks (GANs) [15] due to the iterative sampling process. Denoising diffusion implicit models (DDIMs) [49] accelerate the sampling process of DDPM using nonMarkovian processes. Latent diffusion models (LDMs) [43] further improve image quality and reduce computational requirements by performing the diffusion process on a latent space with lower dimensionality. 2.2. Image super-resolution Super-resolution is the process of increasing the pixel resolution of an image. There exist several non-parametric methods for image super-resolution, such as nearest neighbor, linear, bilinear, and bicubic interpolation [48]. Regression-based methods such as SRCNN [10], LIIF [4], EDSR [31] and SwinIR [30] directly learn mappings from low-resolution to high-resolution images with a pixelwise loss. 
Generation-based methods are trained to generate a new image based on the input low-resolution image, such as SRGAN [28]; CycleGAN [58], an image-to-image translation method, can be used to convert low-resolution images to high-resolution with unpaired data [25]. Image super-resolution with diffusion models is typically achieved with conditional diffusion, where the original image is used as a condition during the reverse diffusion process [44]. Stable Diffusion [43] enabled efficient high-resolution image super-resolution by combining latent diffusion and conditional diffusion. Recent work combines conditional diffusion with GANs for better quality and faster inference speed [56]. Aside from conditional diffusion, there are also diffusion-based super-resolution methods that either incorporate the input image in the denoising objective, such as DDRM [24], or directly perform iterative reverse diffusion on the low-resolution image, such as IDM [13], and ResShift [57]. 2.3. Super-resolution for biomedical imaging In radiological imaging, acquiring low-resolution images has the advantage of decreasing imaging time, radiation exposure, and motion artifacts. Super-resolution can then be used to interpolate information corrupted or lost during image acquisition. The majority of previous work on biomedical super-resolution has focused on radiological images, such as computed tomography (CT) or magnetic resonance imaging (MRI). Classical interpolation and reconstruction methods for medical images based on image processing methods and modeling the acquisition process have been studied [16]. More recently, deep learning-based methods have gained prominence, including UNet [9, 37], autoencoder [46, 47], and GAN frameworks [1, 17, 32, 34, 35, 59]. Progress has been made in enhancing 3D medical images, such as within radiology. Due to high computational complexity, many existing works combine 2D methods with some modifications for 3D consistency. Spatially aware interpolation network (SAINT) [40] utilized a 2D convolutional network for slice interpolation and a residual fusion network to ensure 3D consistency. Sood et al. [50] used a GAN to generate novel fields-of-view using neighboring slices as conditioning, resulting in through-plane superresolution and a more detailed 3D volume. Kudo et al. [27] applied a conditional GAN to enhance the generative diver2 A. 2D masked slice diffusion model training B. 3D volume super-resolution Random masking ratio 1/2 1/8 1/4 Row-wise masked conditioning Diffusion restoration Concat Slicing Evenly spaced mask condition XZ slice restoration Average Average Slicing YZ Slice restoration Low resolution z-stack Restored isotropic z-stack Forward diffusion process Reverse diffusion process Figure 2. MSDSR overview. A. MSDSR is trained with a diffusion network conditioned on row-wise masks of the ground-truth highresolution image. During the reverse diffusion process, a random masking ratio from 1/2 to 1/8 introduces these rows at random locations to give contextual structure when de-noising. The model then learns to interpolate the noised data in between the mask to produce a high-fidelity 2D image. B. During 3D inference, the low-resolution z-stack volume is sliced in both the XZ and YZ dimensions, producing low-resolution 2D images. The rows of these images are then treated as an evenly spaced mask interlacing random noise when individually fed into the model. 
These mixtures are then up-scaled by the model to produce high-resolution volumes and are then averaged together to generate a restored isotropic z-stack. sity by incorporating the image information. Xia et al. [55] combined optical flow interpolation with a GAN to generate an auxiliary image as supervision to guide image synthesis. Finally, ArSSR [54] allowed arbitrary scale superresolution of 3D MRIs via implicit neural representation. Most recently, diffusion models have demonstrated remarkable effectiveness, and there have been a few studies to use diffusion models for 3D medical image super-resolution [3, 5\u20137, 29, 39, 52]. [7, 29, 39] are particularly relevant to our work, as they all attempted to use 2D diffusion models for 3D super-resolution. [7, 29] performed super-resolution on 3D MRI/CT by performing 2D super-resolution on perpendicular slices, but still require ground truth 3D highresolution images to supervise model training. DiffuseIR [39] trained a diffusion model for super-resolution using 2D slices, but did not perform 3D reconstruction of the entire volume. Furthermore, the previous works focused on single-channel MRI and electron microscopy data, whereas SRH generates multi-channel images. 2.4. Deep Learning applications in SRH There have been many existing works on applying deep learning methods to SRH images, most of them focusing on classification tasks such as brain tumor subtype classification [20, 38], molecular classification of diffuse glioma types [19], or whole-slide classification [21, 23]. Some prior works applied generative methods to denoising 2D SRH images [33, 36]. Lyu et al. [33] applied diffusionbased image restoration to 3D SRH z-stacks, but only to denoise each XY slice independently. To the best of our knowledge, this work is the first that attempts to superresolve entire 3D SRH z-stack volumes. 3. Methods The key motivation behind MSDSR is that different spatial dimensions of biomedical microscopy are equivalent and all views of 3D structures are orientation invariant. Thus, different slices of an isotropic 3D microscopy volume are 2D slices from the same underlying data-generating distribution. MSDSR leverages this observation and models the distribution using 2D images. The overall model architecture consists of the masked slice diffusion model and volume super-resolution inference, as described in figure 2. Given a low-resolution volume XL \u2208Rn\u00d7n\u00d7\u2113, where \u2113< n, we want to predict an isotropic high-resolution volume XH \u2208Rn\u00d7n\u00d7n. Without high-resolution volume supervision, it is challenging to model p(XH) directly. We approximate the conditional probability of high-resolution volumes by modeling slices independently: p \u0000XH\f \f XL\u0001 \u2248 n Y i=1 p \u0010 XH [i,:,:] \f \f \f XL [i,:,:] \u0011 ; (1) p \u0000XH\f \f XL\u0001 \u2248 n Y j=1 p \u0010 XH [:,j,:] \f \f \f XL [:,j,:] \u0011 , (2) 3 where XL [i,:,:], XL [:,j,:] \u2208 R\u2113\u00d7n are YZ and XZ cross sections of the low-resolution volume XL, respectively; and XH [i,:,:], XH [:,j,:] \u2208Rn\u00d7n are high-resolution YZ, XZ slices of XH, respectively. Here, p(XH [i,:,:]|XL [i,:,:]) and p(XH [:,j,:]|XL [:,j,:]) are the conditional probability of the highresolution YZ and XZ slices, given low-resolution YZ and XZ images with less rows in the Z dimension. 
Since XY, XZ, and YZ are from the same underlying distribution, it is equivalent to maximize the likelihood p(xH|xL), where xH \u2208Rn\u00d7n is a high-resolution 2D image, and xL \u2208R\u2113\u00d7n is a lower resolution observation that can be simulated by downsampling or masking at training time. Since highresolution 3D data is challenging and expensive to acquire, training with 2D images allows us to learn a better model by leveraging more data. 3.1. Masked slice diffusion We train a DDPM to generate a high-resolution 2D image xH, conditioned on the paired low-resolution image xL. During training, we simulate the paired low-resolution image by removing rows of the high-resolution image. We obtain high-resolution images from the XY plane, and the trained model still applies well to XZ and YZ superresolution due to the dimensional equivalence of SRH microscopy. Following the key results in [18], forward diffusion is a fixed process that gradually adds Gaussian noise to the image following a noise schedule \u03b2, for a total of T steps. At each step t, xH t \u223cN \u0000\u221a\u00af \u03b1txH, (1 \u2212\u00af \u03b1t) I \u0001 , (3) where \u03b1t = 1 \u2212\u03b2t, and \u00af \u03b1t = Qt s=1 \u03b1s. During the reverse diffusion process, we condition xH t by interlacing it with the simulated low-resolution image. We sample \u2113\u223cUniform([\u2113min, \u2113max]) number of rows to include as the condition, where \u2113min,\u2113max are hyperparameters such that 0 < \u2113min < \u2113max < n. S \u223cUniform([1, n], \u2113) is a set of random \u2113indices drawn without replacement for each row to be interlaced into xH t . We create a row-wise binary mask b to combine the partially denoised image at timestep t and the low-resolution image condition: b = [1S(1), . . . , 1S(n)]\u22a4 (4) c(xH t , xH, b) = b \u2217xH + (1 \u2212b) \u2217xH t , (5) where 1(\u00b7) is the indicator function, and \u2217denotes elementwise multiplication with broadcasting. The masked slice diffusion model \u03f5\u03b8 in the reverse diffusion process is optimized using the variational lower bound on the negative log-likelihood with the objective function (i.e. the simplified objective from [18]): L = ExH,b,\u03f5\u223cN (0,I),t \u0002\r \rb \u2217\u03f5 \u2212\u03f5\u03b8 \u0000c(xH t , xH, b), t \u0001\r \r\u0003 . (6) 3.2. Volume super-resolution inference To generate high-resolution volumes, we use our masked slice diffusion model to infer high-quality YZ and XZ slices along the X and Y axes: \u02c6 XH Y Z = Concat h f \u0010 XL [1,:,:] \u0011 , . . . , f \u0010 XL [n,:,:] \u0011i ; (7) \u02c6 XH XZ = Concat h f \u0010 XL [:,1,:] \u0011 , . . . , f \u0010 XL [:,n,:] \u0011i , (8) where f is the full reverse diffusion restoration process of our masked slice diffusion model, including sampling xT , denoising and interlacing the observed low-resolution image at each time step. During the restoration process, each of the m slices along an axis is super-resolved independently. This independence does not reflect the physical structure we are trying to render, as neighboring slices are correlated. As a result, concatenation artifacts may form along the slices orthogonal to the inference axes. To eliminate the independence of inferences between planes, we use averaging to combine the volumes super-resolved in both directions: \u02c6 XH = \u02c6 XH Y Z + \u02c6 XH XZ 2 . 
(9) As shown in section 5, this straightforward method achieves good empirical results in reducing inconsistencies and concatenation artifacts on the super-resolved volumes. A pseudocode of the MSDSR inference process is in algorithm 1. Algorithm 1 MSDSR volume inference in PyTorch style. def superresolve_along_axis(x): # x: transposed low res image of shape [n 3 l n] high_res_ims = [] mask = arange(0, n, n // l) # assume l is a factor of n for i in range(n): # for each low res slice # interlace random noise with the observation x_T = randn_like(x[i]) x_T[:, mask, :] = x[i] # full reverse diffusion restoration process high_res_ims.append(msdsr.restore(x_T)) return stack(high_res_ims) def superresolve_volume(xl): # xl: low res image of shape [3 n n l] (CHWZ) # transpose and superresolve xl in XZ slices xl_xz = rearrange(xl, \"c h w z -> h c z w\") xh_xz = superresolve_along_axis(xl_xz) xh_xz = rearrange(xh_xz, \"h c z w -> c h w z\") # transpose and superresolve xl in YZ slices xl_yz = rearrange(xl, \"c h w z -> w c z h\") xh_yz = superresolve_along_axis(xl_yz) xh_yz = rearrange(xh_yz, \"w c z h -> c h w z\") # return high resolution volume return (xh_yz + xh_xz) / 2 4 4. Experimentation We evaluated MSDSR on a z-stacked volumetric stimulated Raman histology (SRH) dataset and compared the results to interpolation and UNet baselines. 4.1. Data description Our z-stacked SRH dataset was collected using tumor specimens from patients who underwent brain tumor biopsy or resection at the University of Michigan. This study was approved by the Institutional Review Board (HUM00083059), and informed consent was obtained from each patient before imaging. The z-stacked SRH imaging of fresh surgical specimens follows the imaging protocol described in [22]. Each slide has a 0.5\u00d70.5 mm2 field of view and a 1000\u00d71000 pixel resolution. The slides are imaged at an initial depth of 20 \u00b5m, and the laser focus is adjusted for each subsequent slice for virtual sectioning. Each z-stacked volume has a z-resolution of 1 \u00b5m for 20 z-sections. These z-stacks are not isotropic due to the physical limitations of optical sectioning, imaging time, and cost. Our data ingest and image processing pipeline also follows [22], with each whole-slide image volume being patched into 256\u00d7256\u00d720 pixels3 tiles. Every patch is also subsequently denoised using RSCD [33] before model development and validation. Our z-stacked dataset consists of 1129 whole-slide images from a total of 300 patients, spanning a wide range of brain tumor types, including glioma, meningioma, metastases, pituitary adenoma, schwannoma, and other less common central nervous system tumors. The dataset is split into training and validation sets, with 241 patients for model training and 59 patients for validation. Additionally, a key advantage of MSDSR is that it does not require training data to be volumetric. As a result, we also utilized a larger 2D SRH dataset consisting of SRH images from 1021 patients for masked slice training. 4.2. Implementation details MSDSR architecture and training. Our MSDSR model is implemented using DDPM with 274M parameters. Based on the depth field of view of our SRH images, we crop our input images to 48 \u00d7 48. Our DDPM model utilizes a cosine noise scheduler with T = 1000 steps. The model is supervised with an L1 loss and optimized using AdamW, with a base learning rate of 10\u22127 and a cosine learn rate scheduler with a 10% warmup. 
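A minimal PyTorch-style sketch of the row-wise masked conditioning and training objective of section 3.1 (equations 3 to 6) is given below. The model interface, the tensor layout (batch, channel, rows, columns), and the choice of supervising the noise on the rows that remain noisy, i.e. weighting by (1 - b) rather than the b printed in equation (6), are our assumptions; this is a sketch, not the authors' implementation.

```python
import math
import torch

def cosine_alphas_cumprod(T=1000, s=0.008):
    # Cosine noise schedule; returns the cumulative alpha_bar_t for t = 1..T.
    t = torch.linspace(0, T, T + 1)
    f = torch.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2
    return (f[1:] / f[0]).clamp(1e-5, 0.9999)

def rowwise_mask(n_rows, l_min=5, l_max=20, device="cpu"):
    # Eq. (4): b[i] = 1 for the l randomly chosen "observed" rows, 0 elsewhere.
    l = int(torch.randint(l_min, l_max + 1, (1,)))
    b = torch.zeros(n_rows, device=device)
    b[torch.randperm(n_rows, device=device)[:l]] = 1.0
    return b.view(1, 1, n_rows, 1)           # broadcasts over (batch, channel, H, W)

def condition(x_t, x_clean, b):
    # Eq. (5): interlace the observed high-resolution rows into the noised image.
    return b * x_clean + (1.0 - b) * x_t

def training_step(eps_model, x_clean, alphas_cumprod):
    # Eq. (3) forward diffusion, eq. (5) conditioning, and a masked L1 noise loss.
    B = x_clean.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x_clean.device)
    a_bar = alphas_cumprod.to(x_clean.device)[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x_clean)
    x_t = a_bar.sqrt() * x_clean + (1.0 - a_bar).sqrt() * eps
    b = rowwise_mask(x_clean.shape[-2], device=x_clean.device)
    eps_hat = eps_model(condition(x_t, x_clean, b), t)
    # Supervise only the rows that were not replaced by clean observations.
    return ((1.0 - b) * (eps - eps_hat)).abs().mean()
```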
The model was trained until convergence with an effective batch size of 256. At training time, we condition the reverse diffusion process by randomly masking rows of high-resolution images, as described in section 3.1. The number of rows to be used as the condition is randomly drawn from a uniform distribution, with \u2113min = 5 and \u2113max = 20. These parameters were selected based on our z-stacked SRH dataset: \u2113max = 20 matches the resolution of existing data (at half resolution relative to XY slices); and \u2113min = 5 matches data collected with 4\u00d7 speed up, resulting in a z-resolution that is 1/8 of Xand Y-resolution. MS-UNet baseline. We applied our masked slice training approach to a UNet architecture (MS-UNet). We use the same strategy to train the model using high-resolution images interlaced with random noise rows as input. An L1 loss between the prediction and the ground truth high-resolution 2D image was used to supervise model training. The UNet model was trained with a base learning rate of 10\u22123, with other hyperparameters kept the same as MSDSR training. End-to-end (E2E) UNet baseline. In addition to masked slice training, we also trained an end-to-end UNet to interpolate two different slides. The end-to-end UNet has the same architecture as the MS-UNet, except it takes a sixchannel input consisting of two RGB slices Xi and Xi+2. The model outputs the slice Xi+1 between the two input slices and is supervised with an L1 loss function. All hyperparameters are the same as MS-UNet training. 4.3. Evaluation protocol Paired 2D evaluation. We evaluate the model by comparing super-resolved images with their high-resolution ground-truths. We uniformly mask 24, 36, and 42 rows from high-resolution 2D images for 2\u00d7, 4\u00d7, and 8\u00d7 superresolution tasks, respectively. We report FID and SSIM as quantitative metrics. Unpaired 3D evaluation. We evaluate 3D superresolution using the z-stacked SRH images as described in section 4.1. It is challenging to apply FID, an unpaired metric, to volumetric SRH data because it requires a learned embedding space to measure the similarity of the data distribution. Prior work, such as 3D-FID [11] and FVD [51], use specialized volumetric feature extractors to compute embeddings, which is not feasible for our z-stacked SRH dataset. Thus, we propose an evaluation metric, SliceFID, to measure the quality of restored z-stack microscopy data. SliceFID is motivated by the domain knowledge that XY, YZ, and XZ slices of an isotropic 3D volume are different views of the same underlying biological structures and follow the same data distribution. Therefore, we can utilize the FID to compute the distance between a set of high-quality two-dimensional images and each of the XY, YZ, and XZ slices of the generated image. The FID score in each axis informs the perceptual quality of images along each axis, and 5 2\u00d7 super-resolution 4\u00d7 super-resolution 8\u00d7 super-resolution FID SSIM FID SSIM FID SSIM NN 90.0 0.714 244.8 0.426 450.4 0.228 Bilinear 31.6 0.727 134.0 0.442 314.6 0.241 MS-UNet 24.9 0.825 66.8 0.628 163.4 0.419 MSDSR (Ours) 21.0 0.678 21.5 0.486 22.3 0.284 Table 1. Paired 2D evaluation metrics. We present the FID and SSIM scores of our models and baselines on 2D paired data with a scaling factor ranging from 2\u00d7 to 8\u00d7. These comparisons come from inference on high-resolution 2D images. While MS-UNet achieves a higher SSIM, images super-resolved by MS-UNet are perceptually blurry. 
NN, nearest neighbor, bilinear, bilinear interpolation. SliceFID is defined as the average of these per-axis metrics: SliceFID(xH, \u02c6 X) = 1 3 h FID(xH, sliceXY( \u02c6 X))+ FID(xH, sliceXZ( \u02c6 X)) + FID(xH, sliceYZ( \u02c6 X)) i , (10) where xH \u2208Rk\u00d7h\u00d7w is a set of k high-quality ground truth 2D images, \u02c6 X \u2208Rm\u00d7h\u00d7w\u00d7z is a set of m super-resolved volumes, and sliceXY( \u02c6 X) \u2208Rmz\u00d7h\u00d7w, sliceXZ( \u02c6 X) \u2208 Rmh\u00d7w\u00d7z and sliceYZ( \u02c6 X) \u2208Rmw\u00d7h\u00d7z are the generated volumes in XY, XZ, and YZ slices, respectively. 5. Results 5.1. MSDSR paired 2D evaluation In this section, we evaluate MSDSR on a paired 2D super-resolution task and compare it to interpolation and UNet baselines. Table 1 summarizes the quantitative metrics, and a panel of super-resolved examples with various numbers of conditioning rows is shown in figure 3. Quantitatively, MSDSR outperforms all baseline methods on FID across all super-resolution tasks. MS-UNet achieves the best SSIM metric across all tasks but produces perceptually blurry images. MSDSR outperforms nearest neighbor and bilinear interpolation baselines in SSIM with a larger super-resolution scaling factor (4\u00d7 and 8\u00d7). Visually, MSDSR generates high-fidelity images similar to the paired ground truth, where NN and bilinear interpolation generate images with significant blurring and artifacts. MS-UNet produces overly smooth images, with missing details in cellular structures and background objects. MSDSR consistently recovers relevant cellular features (e.g., shape, chromatin, cytoplasm) with increasingly lower-resolution input images while maintaining realistic details, making it the only robust super-resolution method benchmarked. 5.2. MSDSR 3D Inference Evaluation To evaluate the quality of generated z-stack volumes, we use SliceFID to assess the volumetric image quality along each of the XY, YZ, and XZ planes, as well as their average for a holistic evaluation. Table 2 shows SliceFID NN Ground Truth MSDSR (Ours) Bilinear MS-UNet 24/48 12/48 6/48 # Cond Rows 24/48 12/48 6/48 Figure 3. Paired 2D evaluation. We compare the images generated by MSDSR and other baselines to the paired ground truth image. # cond rows, number of conditioning rows, NN, nearest neighbor, bilinear, bilinear interpolation. and its components, and figure 4 shows a sample superresolved volume for each model, across three different super-resolution scaling factors. Overall, MSDSR achieves the best SliceFID score across all super-resolution tasks. Non-parametric interpolation methods (i.e., NN and bilinear) have a low FID score along the XY plane because they naively interpolate information between the XY slices in the z-direction, leaving the input data intact. As a result, YZ and XZ images generated by these methods are unrealistic and contain jittering and stretching artifacts due to the sparsity of the input data on the plane. Both UNet-based models achieved better perfor6 2\u00d7 super-resolution FID 4x super-resolution FID 8\u00d7 super-resolution FID XY YZ XZ SliceFID XY YZ XZ SliceFID XY YZ XZ SliceFID NN 24.7 197.9 166.2 129.6 24.8 357.7 354.6 245.7 25.1 530.8 447.2 334.4 Bilinear 30.6 62.0 60.6 51.1 29.3 179.1 162.7 123.7 27.3 355.0 357.3 246.5 E2E UNet 43.9 73.0 63.4 60.1 58.7 179.8 167.9 135.4 70.9 311.9 298.7 227.2 MS-UNet 38.8 46.2 55.0 46.6 58.0 98.9 95.8 84.2 103.9 224.4 189.6 172.7 MSDSR (Ours) 16.2 28.9 31.4 25.5 25.5 37.1 35.8 32.8 107.4 61.9 56.7 75.3 Table 2. 
3D super-resolution metrics. We present the SliceFID score and its components for MSDSR and MS-UNet along with our baseline methods. NN, nearest neighbor, bilinear, bilinear interpolation, E2E UNet, end-to-end UNet. NN MSDSR (Ours) Bilinear MS-UNet E2E UNet 2x 4x 8x Figure 4. 3D super-resolution results. We compare 3D volumetric super-resolution inference across three different input scalings. NN, nearest neighbor, bilinear, bilinear interpolation, E2E UNet, end-to-end UNet. mance compared to the non-parametric methods, but they both generate overly smooth images, lacking detail in the nuclei and cytoplasm of the cells. The naive E2E UNet had worse results because it only models images in the XY plane and is only trained using a fixed 2-micron interval. When inferencing with a lower resolution condition, the E2E model relies on a recursive strategy to predict intermediate slices and amplifies any mistakes made during the process. While MS-UNet performs better, it still generates smoothed images that are not realistic. In comparison, MSDSR performs the best and generates high-fidelity volumes, with cellular and background textures remaining available. As condition image resolution decreases, MSDSR still maintains consistent and reasonable images, albeit at a mildly worse quality. Ablation studies. We investigate the effect of averaging inferences from both XZ and YZ directions, as well as using Gaussian blur as a post-processing step. Quantitative Average + blur Average (Ours) YZ inference 2x XZ inference NN 4x 8x Figure 5. Ablation study on inference direction and Gaussian blur. We compare 3D volumetric super-resolution inference using ablated models across three different input scalings. NN, nearest neighbor, average, averaging XZ and YZ inference. SliceFID metrics are reported in tables 3 and 4, respectively. Examples of super-resolved volumes are shown in figure 5. Inferencing along a single direction results in noticeable stitching artifacts for 4\u00d7 and 8\u00d7 super-resolved images, especially in the planes orthogonal to the inference plane, both visually and quantitatively. This results from aggregating independent inferences that are plausible solutions during individual slice inference, given a lower resolution condition. Averaging XZ and YZ alleviates the artifact, but does not completely remove it. Applying Gaussian blur is another way to reduce the stitching artifact, especially in 8\u00d7 super-resolution task, but it also reduces the overall sharpness of the prediction. 6. Conclusion Our study presents a novel approach, masked slice diffusion for super-resolution (MSDSR), to enhance the resolution of 3D volumetric biomedical images utilizing 2D supervision. We demonstrated that MSDSR can leverage the 7 2\u00d7 super-resolution FID 4\u00d7 super-resolution FID 8\u00d7 super-resolution FID XY YZ XZ SliceFID XY YZ XZ SliceFID XY YZ XZ SliceFID MSDSR-XZ 14.5 28.1 27.9 23.5 23.5 38.8 28.1 30.1 88.5 103.9 25.6 72.7 MSDSR-YZ 15.1 27.1 28.0 23.4 26.6 28.6 36.3 30.5 102.8 27.3 100.7 76.9 Table 3. 3D ablation FID metrics. We present the SliceFID score and its components for MSDSR ablated for inference in only a single dimension. MSDSR-XZ represents performing inference solely in the XZ dimension, with the analogous MSDSR-YZ. We can observe a significant performance drop on the planes orthogonal to the inference plane, most likely due to a stitching artifact. 
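For reference, the SliceFID metric used in these tables (equation 10) reduces to slicing every generated volume along the three planes and averaging three ordinary FID scores. The sketch below assumes volumes stored as (C, H, W, Z) tensors and takes any standard FID implementation as the compute_fid argument; it is an illustration of the definition, not the evaluation code used for the reported numbers.

```python
import torch

def slices(vol, plane):
    # vol: (C, H, W, Z) volume; return a stack of 2D images for the requested plane.
    if plane == "xy":
        return vol.permute(3, 0, 1, 2)   # Z images of shape (C, H, W)
    if plane == "xz":
        return vol.permute(1, 0, 2, 3)   # H images of shape (C, W, Z)
    if plane == "yz":
        return vol.permute(2, 0, 1, 3)   # W images of shape (C, H, Z)
    raise ValueError(plane)

def slice_fid(real_2d, volumes, compute_fid):
    # real_2d: (K, C, H, W) high-quality ground-truth 2D images.
    # volumes: iterable of (C, H, W, Z) super-resolved volumes.
    # compute_fid(real, fake): any off-the-shelf FID routine over image stacks.
    scores = []
    for plane in ("xy", "xz", "yz"):
        fake = torch.cat([slices(v, plane) for v in volumes], dim=0)
        scores.append(compute_fid(real_2d, fake))
    return sum(scores) / 3.0             # equation (10): mean over the three planes
```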
SR scale Method XY (\u2206) YZ (\u2206) XZ (\u2206) SliceFID (\u2206) MSDSR-XZ + blur 23.5 (+9.0) 34.3 (+6.2) 37.9 (+10.0) 31.9 (+8.4) 2\u00d7 MSDSR-YZ + blur 24.0 (+9.0) 38.5 (+11.4) 37.3 (+9.4) 33.3 (+9.9) MSDSR + blur 30.7 (+14.5) 44.2 (+15.3) 45.4 (+14.0) 40.1 (+14.6) MSDSR-XZ + blur 26.3 (+2.8) 39.1 (+0.3) 34.7 (+6.7) 33.4 (+3.3) 4\u00d7 MSDSR-YZ + blur 28.5 (+1.9) 36.1 (+7.5) 41.0 (+4.7) 35.2 (+4.7) MSDSR + blur 32.7 (+7.2) 43.5 (+6.3) 45.0 (+9.2) 40.4 (+7.6) MSDSR-XZ + blur 72.5 (-16.0) 88.5 (-15.3) 30.2 (+4.6) 63.7 (-8.9) 8\u00d7 MSDSR-YZ + blur 84.7 (-18.1) 30.9 (+3.5) 79.7 (-21.0) 65.1 (-11.9) MSDSR + blur 101.2 (-6.2) 62.1 (+0.2) 59.5 (+2.8) 74.3 (-1.1) Table 4. 3D Gaussian blur ablation FID metrics. 3 \u00d7 3 \u00d7 3 Gaussian blur post-processing degrades the model performance for 2\u00d7 and 4\u00d7 super-resolution, but offers a boost for the 8\u00d7 task, especially in the planes orthogonal to the inference plane. Differences to FID scores before Gaussian blurring are reported in parentheses (\u2206). SR scale, super-resolution scale. inherent similarity in data-generating distributions across spatial dimensions of biological specimens, enabling effective generalization from 2D to 3D. Our proposed method significantly surpasses traditional interpolation and UNet baselines across various image quality metrics, notably through our newly introduced SliceFID metric, emphasizing MSDSR\u2019s efficacy in generating high-quality, realistic volumetric reconstructions from low-resolution inputs. Limitations. While MSDSR has shown promising results, it is not without limitations. The primary challenge lies in the method\u2019s current reliance on synthesizing slices independently, which can lead to inconsistencies in 3D volumetric reconstructions. This approach, while effective for enhancing individual slices, does not fully exploit the spatial correlations inherent in 3D structures, potentially affecting the consistency of the reconstructed volumes. Furthermore, the computational demands of our method, particularly for high-resolution volumetric data, pose challenges for real-time clinical applications. Broader impact. The broader impact of MSDSR extends beyond the technical achievements in biomedical imaging. By significantly improving the resolution and quality of 3D volumetric microscopy, our work has the potential to advance diagnostic accuracy, enhance the understanding of complex biological structures, and facilitate the development of novel therapeutic strategies. Furthermore, by reducing the dependency on high-resolution 3D data, MSDSR can democratize access to advanced imaging technologies, particularly in resource-constrained settings, ultimately contributing to the global efforts to bridge the healthcare divide. Acknowledgements and Competing Interests We would like to thank Karen Eddy, Lin Wang, and Hubert Zhang for their administrative support and data collection efforts. This work was supported, in part, by the National Institutes of Health (NIH) grants F31NS135973 (C.J.), T32GM141746 (C.J.), and K12NS080223 (T.H.). This work was also supported, in part, by the Chan Zuckerberg Foundation (CZI) Advancing Imaging Through Collaborative Project grant (T.H.), the Cook Family Brain Tumor Research Fund (T.H.), the Mark Trauner Brain Research Fund (T.H.), the Zenkel Family Foundation (T.H.), Ian\u2019s Friends Foundation (T.H.) and the UM Precision Health Investigators Awards grant program (T.H.). T.H. 
is a shareholder of Invenio Imaging, Inc., a company developing SRH microscopy systems. 8" + }, + { + "url": "http://arxiv.org/abs/2404.14700v3", + "title": "FlashSpeech: Efficient Zero-Shot Speech Synthesis", + "abstract": "Recent progress in large-scale zero-shot speech synthesis has been\nsignificantly advanced by language models and diffusion models. However, the\ngeneration process of both methods is slow and computationally intensive.\nEfficient speech synthesis using a lower computing budget to achieve quality on\npar with previous work remains a significant challenge. In this paper, we\npresent FlashSpeech, a large-scale zero-shot speech synthesis system with\napproximately 5\\% of the inference time compared with previous work.\nFlashSpeech is built on the latent consistency model and applies a novel\nadversarial consistency training approach that can train from scratch without\nthe need for a pre-trained diffusion model as the teacher. Furthermore, a new\nprosody generator module enhances the diversity of prosody, making the rhythm\nof the speech sound more natural. The generation processes of FlashSpeech can\nbe achieved efficiently with one or two sampling steps while maintaining high\naudio quality and high similarity to the audio prompt for zero-shot speech\ngeneration. Our experimental results demonstrate the superior performance of\nFlashSpeech. Notably, FlashSpeech can be about 20 times faster than other\nzero-shot speech synthesis systems while maintaining comparable performance in\nterms of voice quality and similarity. Furthermore, FlashSpeech demonstrates\nits versatility by efficiently performing tasks like voice conversion, speech\nediting, and diverse speech sampling. Audio samples can be found in\nhttps://flashspeech.github.io/.", + "authors": "Zhen Ye, Zeqian Ju, Haohe Liu, Xu Tan, Jianyi Chen, Yiwen Lu, Peiwen Sun, Jiahao Pan, Weizhen Bian, Shulin He, Qifeng Liu, Yike Guo, Wei Xue", + "published": "2024-04-23", + "updated": "2024-04-25", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.AI", + "cs.CL", + "cs.LG", + "cs.SD" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In recent years, the landscape of speech synthesis has been transformed by the advent of large-scale generative models. Consequently, the latest research efforts have achieved notable advancements in zero-shot speech synthesis systems by significantly increasing the size of both datasets and models. Zero-shot speech synthesis, such as text-to-speech (TTS), voice conversion (VC) and Editing, aims to generate speech that incorporates unseen speaker characteristics from a reference audio segment during inference, without the need for additional training. Current advanced zero-shot speech synthesis systems typically leverage language models (LMs) Wang et al. (2023a); Yang et al. (2023); Zhang et al. (2023); Kharitonov et al. (2023); Wang et al. (2023b); Peng et al. (2024); Kim et al. (2024) and diffusion-style models Shen et al. (2024); Kim et al. (2023b); Le et al. (2023); Jiang et al. (2023b) for in-context speech generation on the large-scale dataset. However, the generation process of these methods needs a long-time iteration. For example, VALL-E Wang et al. (2023a) builds on the language model to predict 75 audio token sequences for a 1-second speech, in its first-stage autoregressive (AR) token sequence generation. When using a non-autoregressive (NAR) latent diffusion model Rombach et al. (2022) based framework, NaturalSpeech 2 Shen et al. 
(2024) still requires 150 sampling steps. As a result, although these methods can produce human-like speech, they require significant computational time and cost. Some efforts have been made to accelerate the Preprint. Under review. \u2020: Corresponding authors. arXiv:2404.14700v3 [eess.AS] 25 Apr 2024 Figure 1: The inference time comparisons of different zero-shot speech synthesis systems using the real-time factor (RTF). generation process. Voicebox Le et al. (2023) adopts flow-matching Lipman et al. (2022) so that fewer sampling steps (NFE1: 64) can be achieved because of the optimal transport path. ClaM-TTS Kim et al. (2024) proposes a mel-codec with a superior compression rate and a latent language model that generates a stack of tokens at once. Although the slow generation speed issue has been somewhat alleviated, the inference speed is still far from satisfactory for practical applications. Moreover, the substantial computational time of these approaches leads to significant computational cost overheads, presenting another challenge. The fundamental limitation of speech generation stems from the intrinsic mechanisms of language models and diffusion models, which require considerable time either auto-regressively or through a large number of denoising steps. Hence, the primary objective of this work is to accelerate inference speed and reduce computational costs while preserving generation quality at levels comparable to the prior research. In this paper, we propose FlashSpeech as the next step towards efficient zero- shot speech synthesis. To address the challenge of slow generation speed, we leverage the latent consistency model (LCM) Luo et al. (2023), a recent advancement in generative models. Building upon the previous non-autoregressive TTS system Shen et al. (2024), we adopt the encoder of a neural audio codec to convert speech waveforms into latent vectors as the training target for our LCM. To train this model, we propose a novel technique called adversarial consistency training, which utilizes the capabilities of pre-trained speech language models Chen et al. (2022b); Hsu et al. (2021); Baevski et al. (2020) as discriminators. This facilitates the transfer of knowledge from large pre-trained speech language models to speech generation tasks, efficiently integrating adversarial and consistency training to improve performance. The LCM is conditioned on prior vectors obtained from a phoneme encoder, a prompt encoder, and a prosody generator. Furthermore, we demonstrate that our proposed prosody generator leads to more diverse expressions and prosody while preserving stability. Our contributions can be summarized as follows: \u2022 We propose FlashSpeech, an efficient zero-shot speech synthesis system that generates voice with high audio quality and speaker similarity in zero-shot scenarios. \u2022 We introduce adversarial consistency training, a novel combination of consistency and adversarial training leveraging pre-trained speech language models, for training the latent consistency model from scratch, achieving speech generation in one or two steps. 1NFE: number of function evaluations. 2 \u2022 We propose a prosody generator module that enhances the diversity of prosody while maintaining stability. \u2022 FlashSpeech significantly outperforms strong baselines in audio quality and matches them in speaker similarity. 
Remarkably, it achieves this at a speed approximately 20 times faster than comparable systems, demonstrating unprecedented efficiency.", + "main_content": "2.1 Large-Scale Speech Synthesis Motivated by the success of the large language model, the speech research community has recently shown increasing interest in scaling the sizes of model and training data to bolster generalization capabilities, producing natural speech with diverse speaker identities and prosody under zero-shot settings. The pioneering work is VALL-E Wang et al. (2023a), which adopts the Encodec D\u00e9fossez et al. (2022) to discretize the audio waveform into tokens. Therefore, a language model can be trained via in-context learning that can generate the target utterance where the style is consistent with prompt utterance. However, generating audio in such an autoregressive manner Wang et al. (2023b); Peng et al. (2024)can lead to unstable prosody, word skipping, and repeating issues Ren et al. (2020); Tan et al. (2021); Shen et al. (2024). To ensure the robustness of the system, non-autoregressive methods such as NaturalSpeech2 Shen et al. (2024) and Voicebox Le et al. (2023) utilize diffusion-style model (VP-diffusion Song et al. (2020) or flow-matching Lipman et al. (2022)) to learn the distribution of a continuous intermediate vector such as mel-spectrogram or latent vector of codec. Both LM-based methods Zhao et al. (2023) and diffusion-based methods show superior performance in speech generation tasks. However, their generation is slow due to the iterative computation. Considering that many speech generation scenarios require real-time inference and low computational costs, we employ the latent consistency model for large-scale speech generation that inference with one or two steps while maintaining high audio quality. 2.2 Acceleration of Speech Synthesis Since early neural speech generation models Tan et al. (2021) use autoregressive models such as Tacotron Wang et al. (2017) and TransformerTTS Li et al. (2019), causing slow inference speed, with O(N) computation, where N is the sequence length. To address the slow inference speed, FastSpeech Ren et al. (2020, 2019) proposes to generate a mel-spectrogram in a non-autoregressive manner. However, these models Ren et al. (2022) result in blurred and over-smoothed mel-spectrograms due to the regression loss they used and the capability of modeling methods. To further enhance the speech quality, diffusion models are utilized Popov et al. (2021a); Jeong et al. (2021); Popov et al. (2021b) which increase the computation to O(T), where T is the diffusion steps. Therefore, distillation techniques Luo (2023) for diffusion-based methods such as CoMoSpeech Ye et al. (2023), CoMoSVC Lu et al. (2024) and Reflow-TTS Guan et al. (2023) emerge to reduce the sampling steps back to O(1), but require additional pre-trained diffusion as the teacher model. Unlike previous distillation techniques, which require extra training for the diffusion model as a teacher and are limited by its performance, our proposed adversarial consistency training technique can directly train from scratch, significantly reducing training costs. In addition, previous acceleration methods only validate speaker-limited recording-studio datasets with limited data diversity. To the best of our knowledge, FlashSpeech is the first work that reduces the computation of a large-scale speech generation system back to O(1). 2.3 Consistency Model The consistency model is proposed in Song et al. 
(2023); Song and Dhariwal (2023) to generate high-quality samples by directly mapping noise to data. Furthermore, many variants Kong et al. (2023); Lu et al. (2023); Sauer et al. (2023); Kim et al. (2023a) have been proposed to further increase the generation quality of images. The latent consistency model, proposed by Luo et al. (2023), can directly predict the solution of the PF-ODE in latent space. However, the original LCM relies on consistency distillation from a pre-trained latent diffusion model (LDM), leveraging large-scale off-the-shelf image diffusion models Rombach et al. (2022). Since there are no pre-trained large-scale TTS models in the speech community, and inspired by the techniques of Song and Dhariwal (2023); Kim et al. (2023a); Lu et al. (2023); Sauer et al. (2023); Kong et al. (2023), we propose a novel adversarial consistency training method that can directly train a large-scale latent consistency model from scratch, utilizing large pre-trained speech language models Chen et al. (2022b); Hsu et al. (2021); Baevski et al. (2020) such as WavLM for speech generation.
3 FlashSpeech
Figure 2: Overall architecture of FlashSpeech. FlashSpeech consists of a codec encoder/decoder and a latent consistency model conditioned on features from a phoneme and z_prompt encoder and a prosody generator. A discriminator is used during training.
3.1 Overview
Our work is dedicated to advancing speech synthesis efficiency, achieving O(1) computation cost while maintaining performance comparable to prior studies that require O(T) or O(N) computations. The framework of the proposed method, FlashSpeech, is illustrated in Fig. 2. FlashSpeech integrates a neural codec, an encoder for phonemes and prompts, a prosody generator, and an LCM, all of which are used during both training and inference. A conditional discriminator is employed exclusively during training. FlashSpeech adopts the in-context learning paradigm Wang et al. (2023a): the latent vector z extracted from the codec is first segmented into z_target and z_prompt. The phoneme sequence and z_prompt are then processed by the encoder to produce a hidden feature. A prosody generator predicts pitch and duration from this hidden feature, and the pitch and duration embeddings are combined with the hidden feature and fed to the LCM as the conditional feature. The LCM is trained from scratch using adversarial consistency training. After training, FlashSpeech achieves efficient generation within one or two sampling steps.
3.2 Latent Consistency Model
The consistency model Song et al. (2023) is a new family of generative models that enables one-step or few-step generation. Let us denote the data distribution by $p_{\text{data}}(x)$.
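Before formalizing the consistency model, the overall data flow of Section 3.1 can be summarized in the following minimal PyTorch-style sketch. All module names, shapes, and the way features are fused are illustrative assumptions, not the released FlashSpeech implementation.

```python
# Hedged sketch of the FlashSpeech data flow in Fig. 2. Module internals,
# tensor shapes, and the feature fusion are illustrative assumptions.
import torch
import torch.nn as nn

class FlashSpeechSketch(nn.Module):
    def __init__(self, codec, phoneme_encoder, prompt_encoder, prosody_generator, lcm):
        super().__init__()
        self.codec = codec                        # waveform <-> latent z
        self.phoneme_encoder = phoneme_encoder
        self.prompt_encoder = prompt_encoder
        self.prosody_generator = prosody_generator
        self.lcm = lcm                            # f_theta(z_sigma, sigma, c)

    def training_targets(self, phonemes, waveform, prompt_frames):
        # In-context setup: split the codec latents into prompt and target parts.
        z = self.codec.encode(waveform)                      # (B, T, D)
        z_prompt, z_target = z[:, :prompt_frames], z[:, prompt_frames:]
        # Phonemes and z_prompt are encoded into a hidden feature (fusion simplified here).
        hidden = self.phoneme_encoder(phonemes) + self.prompt_encoder(z_prompt)
        # Pitch/duration embeddings from the prosody generator complete the condition c.
        pitch_emb, dur_emb = self.prosody_generator(hidden)
        cond = hidden + pitch_emb + dur_emb
        # The LCM is trained (via adversarial consistency training) to recover
        # z_target from its noised versions given cond.
        return z_target, cond

    @torch.no_grad()
    def synthesize(self, phonemes, prompt_waveform, sigma_max=80.0):
        z_prompt = self.codec.encode(prompt_waveform)
        hidden = self.phoneme_encoder(phonemes) + self.prompt_encoder(z_prompt)
        pitch_emb, dur_emb = self.prosody_generator(hidden)
        cond = hidden + pitch_emb + dur_emb
        noise = torch.randn_like(cond) * sigma_max           # start from pure noise
        z_hat = self.lcm(noise, sigma_max, cond)             # one-step consistency sample
        return self.codec.decode(z_hat)                      # synthesized waveform
```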
The core idea of the consistency model is to learn a function that maps any point on a trajectory of the PF-ODE to that trajectory's origin, which can be formulated as: $f(x_\sigma, \sigma) = x_{\sigma_{\min}}$ (1) where $f(\cdot, \cdot)$ is the consistency function and $x_\sigma$ represents the data $x$ perturbed by adding zero-mean Gaussian noise with standard deviation $\sigma$. $\sigma_{\min}$ is a fixed small positive number, so that $x_{\sigma_{\min}}$ can be viewed as an approximate sample from the data distribution $p_{\text{data}}(x)$. To satisfy the property in Equation (1), following Song et al. (2023), we parameterize the consistency model as $f_\theta(x_\sigma, \sigma) = c_{\text{skip}}(\sigma)\, x_\sigma + c_{\text{out}}(\sigma)\, F_\theta(x_\sigma, \sigma)$ (2) where $f_\theta$ estimates the consistency function $f$ by learning from data, $F_\theta$ is a deep neural network with parameters $\theta$, and $c_{\text{skip}}(\sigma)$ and $c_{\text{out}}(\sigma)$ are differentiable functions with $c_{\text{skip}}(\sigma_{\min}) = 1$ and $c_{\text{out}}(\sigma_{\min}) = 0$ to ensure the boundary condition. A valid consistency model should satisfy the self-consistency property Song et al. (2023): $f_\theta(x_\sigma, \sigma) = f_\theta(x_{\sigma'}, \sigma'), \ \forall \sigma, \sigma' \in [\sigma_{\min}, \sigma_{\max}]$ (3) where $\sigma_{\max} = 80$ and $\sigma_{\min} = 0.002$ following Karras et al. (2022); Song et al. (2023); Song and Dhariwal (2023). The model can then generate samples in one step by evaluating $x_{\sigma_{\min}} = f_\theta(x_{\sigma_{\max}}, \sigma_{\max})$ (4) with $x_{\sigma_{\max}} \sim \mathcal{N}(0, \sigma_{\max}^2 I)$. As we apply the consistency model in the latent space of audio, we use the latent features $z$ extracted prior to the residual quantization layer of the codec, $z = \text{CodecEncoder}(y)$ (5) where $y$ is the speech waveform. Furthermore, we add the features from the prosody generator and the encoder as the conditional feature $c$, so our objective becomes $f_\theta(z_\sigma, \sigma, c) = f_\theta(z_{\sigma'}, \sigma', c), \ \forall \sigma, \sigma' \in [\sigma_{\min}, \sigma_{\max}]$ (6). During inference, the synthesized waveform $\hat{y}$ is obtained from $\hat{z}$ via the codec decoder. The predicted $\hat{z}$ is obtained by one sampling step $\hat{z} = f_\theta(\epsilon \cdot \sigma_{\max}, \sigma_{\max})$ (7) or two sampling steps $\hat{z}_{\text{inter}} = f_\theta(\epsilon \cdot \sigma_{\max}, \sigma_{\max})$ (8), $\hat{z} = f_\theta(\hat{z}_{\text{inter}} + \epsilon \cdot \sigma_{\text{inter}}, \sigma_{\text{inter}})$ (9) where $\hat{z}_{\text{inter}}$ denotes the intermediate sample, $\sigma_{\text{inter}}$ is set to 2 empirically, and $\epsilon$ is sampled from a standard Gaussian distribution.
3.3 Adversarial Consistency Training
A major drawback of the LCM Luo et al. (2023) is that it first needs to pre-train a diffusion-based teacher model and then perform distillation to produce the final model. This complicates the training process, and the performance is limited by the distillation. To eliminate the reliance on teacher-model training, we propose a novel adversarial consistency training method to train the LCM from scratch. Our training procedure is outlined in Fig. 3 and has three parts:
3.3.1 Consistency Training
To achieve the property in Equation (3), we adopt the following consistency loss $\mathcal{L}^N_{\mathrm{ct}}(\theta, \theta^-) = \mathbb{E}\big[\lambda(\sigma_i)\, d\big(f_\theta(z_{i+1}, \sigma_{i+1}, c),\, f_{\theta^-}(z_i, \sigma_i, c)\big)\big]$
(10) where \u03c3i represents the noise level at discrete time step i, d(\u00b7, \u00b7) is the distance function, f\u03b8(zi+1, \u03c3i+1, c) and f\u03b8\u2212(zi, \u03c3i, c) are the student with the higher noise level and the teacher with the lower noise level, respectively. The discrete time steps denoted as \u03c3min = \u03c30 < \u03c31 < \u00b7 \u00b7 \u00b7 < \u03c3N = \u03c3max are divided from the time interval [\u03c3min, \u03c3max], where the discretization curriculum N increases correspondingly as the number of training steps grows N(k) = min(s02\u230ak K\u2032 \u230b, s1) + 1 (11) where K\u2032 = j K log2\u230as1/s0\u230b+1 k , k is the current training step and K is the total training steps. s1 and s0 are hyperparameters to control the size of N(k). The distance function d(\u00b7, \u00b7) uses the Pseudo-Huber metric Charbonnier et al. (1997) d(x, y) = p \u2225x \u2212y\u22252 + a2 \u2212a, (12) 5 Denoiser Denoiser \uf071\u2212 \uf071 Student Teacher Consistency Loss Discriminator Adversarial Loss Stop Grad \\\\ \ud835\udc33\ud835\udf0e\ud835\udc56+1 \ud835\udc33\ud835\udf0e\ud835\udc56 \ud835\udc33 Codec Decoder waveform \ud835\udc53 \ud835\udf03(\ud835\udc33\ud835\udf0e\ud835\udc56+1, \ud835\udf0e\ud835\udc56+1, c) \ud835\udc53 \ud835\udf03(\ud835\udc33\ud835\udf0e\ud835\udc56, \ud835\udf0e\ud835\udc56,c) \u0ddc \ud835\udc33 Figure 3: An illustration of adversarial consistency training. where a is an adjustable constant, making the training more robust to outliers as it imposes a smaller penalty for large errors than \u21132 loss. The parameters \u03b8\u2212of teacher model are \u03b8\u2212\u2190 \u2212stopgrad(\u03b8), (13) which are identical to the student parameters \u03b8. This approach Song and Dhariwal (2023) has been demonstrated to improve sample quality of previous strategies that employ varying decay rates Song et al. (2023). The weighting function refers to \u03bb(\u03c3i) = 1 \u03c3i+1 \u2212\u03c3i (14) which emphasizes the loss of smaller noise levels. LCM through consistency training can generate speech with acceptable quality in a few steps, but it still falls short of previous methods. Therefore, to further enhance the quality of the generated samples, we integrate adversarial training. 3.3.2 Adversarial Training For the adversarial objective, the generated samples \u02c6 z \u2190f\u03b8(z\u03c3, \u03c3, c) and real samples z are passed to the discriminator D\u03b7 which aims to distinguish between them, where \u03b7 refers to the trainable parameters. Thus, we employ adversarial training loss Ladv(\u03b8, \u03b7) = Ez[log D\u03b7(z)] + E\u03c3Ez\u03c3[log(1 \u2212D\u03b7(f\u03b8(z\u03c3, \u03c3, c)))]. (15) In this way, the error signal from the discriminator guides f\u03b8 to produce more realistic outputs. For details, we use a frozen pre-trained speech language model SLM and a trainable lightweight discriminator head Dhead to build the discriminator. Since the current SLM is trained on the speech waveform, we covert both z and \u02c6 z to ground truth waveform and predicted waveform using the codec decoder. To further increase the similarity between prompt audio and generated audio, our discriminator is conditioned on the prompt audio feature. This prompt feature Fprompt is extracted using SLM on prompt audio and applies average pooling on the time axis. Therefore, D\u03b7 = Dhead(Fprompt \u2299Fgt, Fprompt \u2299Fpred) (16) where Fgt and Fpred refer to feature extracted through SLM for ground truth waveform and predicted waveform. 
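To make this conditional discriminator concrete, the sketch below follows Eqs. (15)–(16). The frozen speech-LM feature extractor, the projection-based prompt conditioning, and the small 1D-convolutional head described in the next sentences are abstracted with assumed interfaces, and the generator term uses the common non-saturating variant; none of this is the exact FlashSpeech module.

```python
# Hedged sketch of the SLM-based conditional discriminator and adversarial
# losses (Eqs. 15-16); interfaces and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SLMDiscriminator(nn.Module):
    def __init__(self, slm, feat_dim=1024, hidden=256):
        super().__init__()
        self.slm = slm.eval()                      # frozen pre-trained speech LM (e.g. WavLM)
        for p in self.slm.parameters():
            p.requires_grad_(False)
        self.head = nn.Sequential(                 # lightweight trainable 1D-conv head
            nn.Conv1d(feat_dim, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv1d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv1d(hidden, 1, 1),
        )
        self.proj = nn.Linear(feat_dim, feat_dim)  # projection-style prompt conditioning

    def forward(self, waveform, prompt_waveform):
        feat = self.slm(waveform)                            # (B, T, D) frame features
        prompt = self.slm(prompt_waveform).mean(dim=1)       # (B, D) time-averaged F_prompt
        cond = feat * self.proj(prompt).unsqueeze(1)         # condition features on F_prompt
        return self.head(cond.transpose(1, 2)).mean(dim=(1, 2))  # one logit per utterance

def adversarial_losses(disc, codec_decoder, z_real, z_fake, prompt_wav):
    # z_real / z_fake are codec latents; the SLM discriminator operates on waveforms.
    wav_real = codec_decoder(z_real)
    wav_fake = codec_decoder(z_fake)
    # Discriminator update (Eq. 15 in logit form); the fake path is detached.
    d_loss = (F.softplus(-disc(wav_real, prompt_wav)).mean()
              + F.softplus(disc(wav_fake.detach(), prompt_wav)).mean())
    # Generator (LCM) update: non-saturating variant of the second term of Eq. 15.
    g_loss = F.softplus(-disc(wav_fake, prompt_wav)).mean()
    return d_loss, g_loss
```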
The discriminator head consists of several 1D convolution layers. The input feature of the discriminator is conditioned on Fprompt via projection Miyato and Koyama (2018). 3.3.3 Combined Together Since there is a large gap on the loss scale between consistency loss and adversarial loss, it can lead to instability and failure in training. Therefore, we follow Esser et al. (2021) to compute the adaptive weight with \u03bbadv = \u2225\u2207\u03b8LLN ct (\u03b8, \u03b8\u2212)\u2225 \u2225\u2207\u03b8LLadv(\u03b8, \u03b7)\u2225 (17) where \u03b8L is the last layer of the neural network in LCM. The final loss of training LCM is defined as LN ct (\u03b8, \u03b8\u2212)+\u03bbadvLadv(\u03b8, \u03b7). This adaptive weighting significantly stabilizes the training by balancing the gradient scale of each term. 6 Prosody Regression Prosody Refinement Initial Prediction Residual + Prosody Feature Predicted Prosody Noise deterministic stochastic \ud835\udf36 \u2217Residual Figure 4: An illustration of prosody generator. 3.4 Prosody Generator 3.4.1 Analysis of Prosody Prediction Previous regression methods for prosody prediction Ren et al. (2020); Shen et al. (2024), due to their deterministic mappings and assumptions of unimodal distribution, often fail to capture the inherent diversity and expressiveness of human speech prosody. This leads to predictions that lack variation and can appear over-smoothed. On the other hand, diffusion methods Le et al. (2023); Li et al. (2023) for prosody prediction offer a promising alternative by providing greater prosody diversity. However, they come with challenges regarding stability, and the potential for unnatural prosody. Additionally, the iterative inference process in DMs requires a significant number of sampling steps that may also hinder real-time application. Meanwhile, LM-based methods Jiang et al. (2024a); Wang et al. (2023a) also need a long time for inference. To alleviate these issues, our prosody generator consists of a prosody regression module and a prosody refinement module to enhance the diversity of prosody regression results with efficient one-step consistency model sampling. 3.4.2 Prosody Refinement via Consistency Model As shown in 4, our prosody generator consists of two parts which are prosody regression and prosody refinement. We first train the prosody regression module to get a deterministic output. Next, we freeze the parameters of the prosody regression module and use the residual of ground truth prosody and deterministic predicted prosody as the training target for prosody refinement. We adopt a consistency model as a prosody refinement module. The conditional feature of the consistency model is the feature from prosody regression before the final projection layer. Thus, the residual from a stochastic sampler refines the output of a deterministic prosody regression and produces a diverse set of plausible prosody under the same transcription and audio prompt. One option for the final prosody output pfinal can be represented as: pfinal = pres + pinit, (18) where pfinal denotes the final prosody output, pres represents the residual output from the prosody refinement module, capturing the variations between the ground truth prosody and the deterministic prediction, pinit is the initial deterministic prosody prediction from the prosody regression module. However, this formulation may negatively affect prosody stability, a similar observation is found in Vyas et al. (2023); Le et al. (2023). 
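Before turning to that stability issue, the two-stage prosody generator can be sketched as follows. The module interfaces, the one-step refinement call, and the `scale` argument (which anticipates the control factor introduced next) are illustrative assumptions rather than the actual implementation.

```python
# Hedged sketch of the prosody generator: deterministic regression plus a
# one-step consistency-model refinement of the residual (Eq. 18).
import torch
import torch.nn as nn

class ProsodyGenerator(nn.Module):
    def __init__(self, regressor, refiner, sigma_max=80.0):
        super().__init__()
        self.regressor = regressor   # frozen after stage 1: hidden -> (p_init, cond_feat)
        self.refiner = refiner       # consistency model over prosody residuals
        self.sigma_max = sigma_max

    @torch.no_grad()
    def sample(self, hidden, scale=1.0):
        # Deterministic first pass: pitch/duration plus the feature before the final projection.
        p_init, cond_feat = self.regressor(hidden)
        # One-step stochastic refinement: map noise to a plausible residual.
        noise = torch.randn_like(p_init) * self.sigma_max
        p_res = self.refiner(noise, self.sigma_max, cond_feat)
        # scale=1.0 corresponds to Eq. (18); smaller values trade diversity for stability.
        return p_init + scale * p_res

    def refinement_loss(self, hidden, p_gt, consistency_loss_fn):
        # Stage-2 training target is the residual between ground truth and the regression.
        with torch.no_grad():
            p_init, cond_feat = self.regressor(hidden)
        residual_target = p_gt - p_init
        # consistency_loss_fn is a placeholder for the consistency-training objective of Eq. (10).
        return consistency_loss_fn(self.refiner, residual_target, cond_feat)
```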
More specifically, higher diversity may cause less stability and sometimes produce unnatural prosody. To address this, we introduce a control factor \u03b1 that finely tunes the balance between stability and diversity in the prosodic output: pfinal = \u03b1pres + pinit (19) where \u03b1 is a scalar value ranging between 0 and 1. This adjustment allows for controlled incorporation of variability into the prosody, mitigating issues related to stability while still benefiting from the diversity offered by the prosody refinement module. 3.5 Applications This section elaborates on the practical applications of FlashSpeech. We delve into its deployment across various tasks such as zero-shot TTS, speech editing, voice conversion, and diverse speech sampling. All the sample audios of applications are available on the demo page. 7 3.5.1 Zero-Shot TTS Given a target text and reference audio, we first convert the text to phoneme using g2p (grapheme-tophoneme conversion). Then we use the codec encoder to convert the reference audio into zprompt. Speech can be synthesized efficiently through FlashSpeech with the phoneme input and zprompt, achieving high-quality text-to-speech results without requiring pre-training on the specific voice. 3.5.2 Voice Conversion Voice conversion aims to convert the source audio into the target audio using the speaker\u2019s voice of the reference audio. Following Shen et al. (2024); Preechakul et al. (2022), we first apply the reverse of ODE to diffuse the source audio into a starting point that still maintains some information in the source audio. After that, we run the sampling process from this starting point with the reference audio as zprompt and condition c. The condition c uses the phoneme and duration from the source audio and the pitch is predicted by the prosody generator. This method allows for zero-shot voice conversion while preserving the linguistic content of the source audio, and achieving the same timbre as the reference audio. 3.5.3 Speech Editing Given the speech, the original transcription, and the new transcription, we first use MFA (Montreal Forced Aligner) to align the speech and the original transcription to get the duration of each word. Then we remove the part that needs to be edited to construct the reference audio. Next, we use the new transcription and reference to synthesize new speech. Since this task is consistent with the in-context learning, we can concatenate the remaining part of the raw speech and the synthesized part as the final speech, thus enabling precise and seamless speech editing. 3.5.4 Diverse Speech Sampling FlashSpeech leverages its inherent stochasticity to generate a variety of speech outputs under the same conditions. By employing stochastic sampling in its prosody generation and LCM, FlashSpeech can produce diverse variations in pitch, duration, and overall audio characteristics from the same phoneme input and audio prompt. This feature is particularly useful for generating a wide range of speech expressions and styles from a single input, enhancing applications like voice acting, synthetic voice variation for virtual assistants, and more personalized speech synthesis. In addition, the synthetic data via speech sampling can also benefit other tasks such as ASR Rossenbach et al. (2020). 4 Experiment Table 1: The evaluation results for FlashSpeech and the baseline methods on LibriSpeech testclean. \u22c6 means the evaluation is conducted with 1 NVIDIA V100 GPU. \u2662means the device is not available. 
Abbreviations: MLS (Multilingual LibriSpeech Pratap et al. (2020)), G (GigaSpeech Chen et al. (2021)), L (LibriTTS-R Koizumi et al. (2023)), V (VCTK Yamagishi et al. (2019)), LJ (LJSpeech Ito and Johnson (2017)), W (WenetSpeech Zhang et al. (2022)).
Model | Data | RTF ↓ | Sim-O ↑ | Sim-R ↑ | WER ↓ | CMOS ↑ | SMOS ↑
GroundTruth | – | – | 0.68 | – | 1.9 | 0.11 | 4.39
VALL-E reproduce | Librilight | 0.62 ♢ | 0.47 | 0.51 | 6.1 | -0.48 | 4.11
NaturalSpeech 2 | MLS | 0.37 ⋆ | 0.53 | 0.60 | 1.9 | -0.31 | 4.20
Voicebox reproduce | Librilight | 0.66 ♢ | 0.48 | 0.50 | 2.1 | -0.58 | 3.95
Mega-TTS | G+W | 0.39 ♢ | – | – | 3.0 | – | –
CLaM-TTS | MLS+G+L+V+LJ | 0.42 ♢ | 0.50 | 0.54 | 5.1 | – | –
FlashSpeech (ours) | MLS | 0.02 ⋆ | 0.52 | 0.57 | 2.7 | 0.00 | 4.29
Figure 5: User preference study (panels: Audio Quality and Speaker Similarity). We compare the audio quality and speaker similarity of FlashSpeech against baselines with their official demos.
In the experimental section, we begin by introducing the datasets and the training configurations used in our experiments. We then describe the evaluation metrics and present comparative results against various zero-shot TTS models. Subsequently, ablation studies are conducted to test the effectiveness of several design choices. Finally, we also validate the effectiveness of other tasks such as voice conversion; speech editing and diverse speech sampling results are shown on our demo page.
4.1 Experimental Settings
4.1.1 Data and Preprocessing
We use the English subset of Multilingual LibriSpeech (MLS) Pratap et al. (2020), which includes 44.5k hours of transcribed audiobook data from 5,490 distinct speakers. The audio is resampled to 16 kHz. The input text is transformed into a sequence of phonemes through grapheme-to-phoneme conversion Sun et al. (2019), and we use our internal alignment tool to align phonemes with speech and obtain phoneme-level durations. We adopt a hop size of 200 for all frame-level features. The pitch sequence is extracted using PyWorld2. We adopt Encodec Défossez et al. (2022) as our audio codec, use a modified version3, and train it on MLS. We use the dense features extracted before the residual quantization layer as our latent vector z.
4.1.2 Training Details
Our training consists of two stages. In the first stage, we train the LCM and the prosody regression part. We use 8 H800 80GB GPUs with a batch size of 20k frames of latent vectors per GPU for 650k steps. We use the AdamW optimizer with a learning rate of 3e-4, warm up the learning rate for the first 30k updates, and then decay it linearly. We deactivate adversarial training by setting $\lambda_{\mathrm{adv}} = 0$ before 600k training iterations. For hyper-parameters, we set $a$ in Equation (12) to 0.03. In Equation (10), $\sigma_i = \big(\sigma_{\min}^{1/\rho} + \frac{i-1}{N(k)-1}\big(\sigma_{\max}^{1/\rho} - \sigma_{\min}^{1/\rho}\big)\big)^{\rho}$, where $i \in [1, N(k)]$, $\rho = 7$, $\sigma_{\min} = 0.002$, $\sigma_{\max} = 80$. For $N(k)$ in Equation (11), we set $s_0 = 10$, $s_1 = 1280$, $K = 600$k. After 600k steps, we activate the adversarial loss, and $N(k)$ can be considered fixed at 1280.
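For reference, the schedule just described can be written compactly as the following sketch; it is a straightforward reading of Equation (11) and the $\sigma_i$ formula above with the stated hyper-parameters, not taken from the released code.

```python
# Hedged sketch of the discretization curriculum N(k) (Eq. 11) and the
# Karras-style noise levels sigma_i used for consistency training.
import math

def n_steps(k, K=600_000, s0=10, s1=1280):
    """Number of discretization steps at training step k (Eq. 11)."""
    k_prime = math.floor(K / (math.log2(s1 // s0) + 1))
    return min(s0 * 2 ** (k // k_prime), s1) + 1

def sigma(i, N, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise level sigma_i for i in [1, N]; sigma(1)=sigma_min, sigma(N)=sigma_max."""
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (lo + (i - 1) / (N - 1) * (hi - lo)) ** rho

# Example: early in training the curriculum uses few noise levels, later many.
for k in (0, 300_000, 599_999):
    N = n_steps(k)
    print(k, N, round(sigma(1, N), 4), round(sigma(N, N), 1))
```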
We crop the waveform length fed into the discriminator into minimum waveform length in a minibatch. In addition, the weight of the feature extractor WavLM and the codec decoder are frozen. In the second stage, we train 150k steps for the prosody refinement module with consistency training in Equation (10). Different from the above setting, we empirically set s1 = 160, K = 150k. During training, only the weight of the prosody refinement part is updated. 2https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder 3https://github.com/yangdongchao/UniAudio/tree/main/codec 9 4.1.3 Model Details The model structures of the prompt encoder and phoneme encoder are followShen et al. (2024). The neural function part in LCM is almost the same as the Shen et al. (2024). We rescale the sinusoidal position embedding in the neural function part by a factor of 1000. As for the prosody generator, we adopt 30 non-casual wavenet Oord et al. (2016) layers for the neural function part in the prosody refinement module and the same configurations for prosody regression parts in Shen et al. (2024). And we set \u03b1 = 0.2 for the prosody refinement module empirically. For the discriminator\u2019s head, we stack 5 convolutional layers with weight normalization Salimans and Kingma (2016) for binary classification. 4.2 Evaluation Metrics We use both objective and subjective evaluation metrics, including \u2022 RTF: Real-time-factor (RTF) measures the time taken for the system to generate one second of speech. This metric is crucial for evaluating the efficiency of our system, particularly for applications requiring real-time processing. We measure the time of our system end-to-end on an NVIDIA V100 GPU following Shen et al. (2024). \u2022 Sim-O and Sim-R: These metrics assess the speaker similarity. Sim-R measures the objective similarity between the synthesized speech and the reconstruction reference speech through the audio codec, using features embedding extracted from the pre-trained speaker verification model Wang et al. (2023a); Kim et al. (2024)4. Sim-O is calculated with the original reference speech. Higher scores in Sim-O and Sim-R indicate a higher speaker similarity. \u2022 WER (Word Error Rate): To evaluate the accuracy and clarity of synthesized speech from the TTS system, we employ the Automatic Speech Recognition (ASR) model Wang et al. (2023a) 5 to transcribe generated audio. The discrepancies between these transcriptions and original texts are quantified using the Word Error Rate (WER), a crucial metric indicating intelligibility and robustness. \u2022 CMOS, SMOS, UTMOS: we rank the comparative mean option score (CMOS) and similarity mean option score (SMOS) using mturk. The prompt for CMOS refers to \u2019Please focus on the audio quality and naturalness and ignore other factors.\u2019. The prompt for SMOS refers to \u2019Please focus on the similarity of the speaker to the reference, and ignore the differences of content, grammar or audio quality.\u2019 Each audio has been listened to by at least 10 listeners. UTMOS Saeki et al. (2022) is a Speech MOS predictor6 to measure the naturalness of speech. We use it in ablation studies which reduced the cost for evaluation. \u2022 Prosody JS Divergence: To evaluate the diversity and accuracy of the prosody prediction in our TTS system, we include the Prosody JS Divergence metric. This metric employs the Jensen-Shannon (JS) divergence Men\u00e9ndez et al. 
(1997) to quantify the divergence between the predicted and ground truth prosody feature distributions. Prosody features, including pitch, and duration, are quantized and their distributions in both synthesized and natural speech are compared. Lower JS divergence values indicate closer similarity between the predicted prosody features and those of the ground truth, suggesting a higher diversity of the synthesized speech. 4.3 Experimental Results on Zero-shot TTS Following Wang et al. (2023a), We employ LibriSpeech Panayotov et al. (2015) test-clean for zeroshot TTS evaluation. We adopt the cross-sentence setting in Wang et al. (2023a) that we randomly select 3-second clips as prompts from the same speaker\u2019s speech. The results are summarized in table 1 and figure 5. 4https://github.com/microsoft/UniSpeech/tree/main/downstreams/speaker_verification 5https://huggingface.co/facebook/hubert-large-ls960-ft 6https://github.com/tarepan/SpeechMOS 10 4.3.1 Evaluation Baselines \u2022 VALL-E Wang et al. (2023a): VALL-E predicts codec tokens using both AR and NAR models. RTF7 is obtained from Kim et al. (2024); Le et al. (2023). We use our reproduced results for MOS, Sim, and WER. Additionally, we do a preference test with their official demo. \u2022 Voicebox Le et al. (2023): Voicebox uses flow-matching to predict maksed mel-spectrogram. RTF is from the original paper. We use our reproduced results for MOS, Sim, and WER. We also implement a preference test with their official demo. \u2022 NaturalSpeech2 Shen et al. (2024): NaturalSpeech2 uses a latent diffusion model to predict latent features of codec. The RTF is from the original paper. the Sim, WER and samples for MOS are obtained through communication with the authors. We also do a preference test with their official demo. \u2022 Mega-TTS Jiang et al. (2023a)8: Mega-TTS uses both language model and GAN to predict mel-spectrogram. We obtain RTF from mobilespeech Ji et al. (2024) and WER from the original paper. We do a preference test with their official demo. \u2022 ClaM-TTS Kim et al. (2024): ClaM-TTS uses the AR model to predict mel codec tokens. We obtain the objective evaluation results from the original paper and do a preference test with their official demo. 4.3.2 Generation Quality FlashSpeech stands out significantly in terms of speaker quality, surpassing other baselines in both CMOS and audio quality preference tests. Notably, our method closely approaches ground truth recordings, underscoring its effectiveness. These results affirm the superior quality of FlashSpeech in speech synthesis. our method. 4.3.3 Generation Similarity Our evaluation of speaker similarity utilizes Sim, SMOS, and speaker similarity preference tests, where our methods achieve 1st, 2nd, and 3rd place rankings, respectively. These findings validate our methods\u2019 ability to achieve comparable speaker similarity to other methods. Despite our training data (MLS) containing approximately 5k speakers, fewer than most other methods (e.g., Librilight with about 7k speakers or self-collected data), we believe that increasing the number of speakers in our methods can further enhance speaker similarity. 4.3.4 Robustness Our methods achieve a WER of 2.7, placing them in the first echelon. This is due to the nonautoregressive nature of our methods, which ensures robustness. 4.3.5 Generation Speed FlashSpeech achieves a remarkable approximately 20x faster inference speed compared to previous work. 
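This speedup is a direct consequence of the one- or two-step sampling described in Eqs. (7)–(9): the entire sampling loop is just one or two network evaluations. A hedged sketch, with the $f_\theta$ call signature as an assumption:

```python
# Hedged sketch of one- and two-step LCM sampling (Eqs. 7-9). `lcm` stands for
# f_theta(z_sigma, sigma, c); its exact signature is an assumption.
import torch

@torch.no_grad()
def sample_latent(lcm, cond, shape, nfe=2, sigma_max=80.0, sigma_inter=2.0):
    eps = torch.randn(shape, device=cond.device)
    z_hat = lcm(eps * sigma_max, sigma_max, cond)            # Eq. (7): one step
    if nfe == 2:                                             # Eqs. (8)-(9): optional second step
        eps = torch.randn(shape, device=cond.device)
        z_hat = lcm(z_hat + eps * sigma_inter, sigma_inter, cond)
    return z_hat  # decoded to a waveform by the codec decoder
```

Since each call is a single forward pass of the LCM, the measured RTF stays around 0.02 even at the default NFE of 2.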
Considering its excellent audio quality, robustness, and comparable speaker similarity, our method stands out as an efficient and effective solution in the field of large-scale speech synthesis. 4.4 Ablation Studies 4.4.1 Ablation studies of LCM We explored the impact of different pre-trained models in adversarial training on UTMOS and Sim-O. As shown in the table 2, the baseline, which employs consistency training alone, achieved a UTMOS 7In CLaM-TTS and Voicebox, they report the inference time for generating 10 seconds of speech. Therefore, we divide by 10 to obtain the time for generating 1 second of speech (RTF). 8Since we do not find any audio samples for Mega-TTS2 Jiang et al. (2024b) under the 3-second crosssentence setting, we are not able to compare with them. 11 Table 2: The ablation study of discriminator design. Method UTMOS \u2191 Sim-O \u2191 Consistency training baseline 3.62 0.45 + Adversarial training (Wav2Vec2-large) 3.92 0.50 + Adversarial training (Hubert-large) 3.83 0.47 + Adversarial training (Wavlm-large) 4.00 0.52 prompt projection 3.97 0.51 Table 3: The ablation study of sampling steps for LCM NFE UTMOS \u2191 Sim-O \u2191 1 3.99 0.51 2 4.00 0.52 4 3.91 0.51 of 3.62 and a Sim-O of 0.45. Incorporating adversarial training using wav2vec2-large9, hubert-large10, and wavlm-large11 as discriminators significantly improved both UTMOS and Sim-O scores. Notably, the application of adversarial training with Wavlm-large achieved the highest scores (UTMOS: 4.00, Sim-O: 0.52), underscoring the efficacy of this pre-trained model in enhancing the quality and speaker similarity of synthesized speech. Additionally, without using the audio prompt\u2019s feature as a condition the discriminator shows a slight decrease in performance (UTMOS: 3.97, Sim-O: 0.51), highlighting the importance of conditional features in guiding the adversarial training process. As shown in table 3, the effect of sampling steps (NFE) on UTMOS and Sim-O revealed that increasing NFE from 1 to 2 marginally improves UTMOS (3.99 to 4.00) and Sim-O (0.51 to 0.52). However, further increasing to 4 sampling steps slightly reduced UTMOS to 3.91 due to the accumulation of score estimation errors Chen et al. (2022a); Lyu et al. (2024). Therefore, we use 2 steps as the default setting for LCM. 4.4.2 Ablation studies of Prosody Generator In this part, we investigated the effects of a control factor, denoted as \u03b1, on the prosodic features of pitch and duration in speech synthesis, by setting another influencing factor to zero. Our study specifically conducted an ablation analysis to assess how \u03b1 influences these features, emphasizing its critical role in balancing stability and diversity within our framework\u2019s prosodic outputs. Table 4 elucidates the effects of varying \u03b1 on the pitch component. With \u03b1 set to 0, indicating no inclusion of the residual output from prosody refinement, we observed a Pitch JSD of 0.072 and a WER of 2.8. A slight modification to \u03b1 = 0.2 resulted in a reduced Pitch JSD of 0.067, maintaining the same WER. Notably, setting \u03b1 to 1, fully incorporating the prosody refinement\u2019s residual output, further decreased the Pitch JSD to 0.063, albeit at the cost of increased WER to 3.7, suggesting a trade-off between prosody diversity and speech intelligibility. Similar trends in table 5 are observed in the duration component analysis. With \u03b1 = 0, the Duration JSD was 0.0175 with a WER of 2.8. 
Adjusting \u03b1 to 0.2 slightly improved the Duration JSD to 0.0168, without affecting WER. However, fully embracing the refinement module\u2019s output by setting \u03b1 = 1 yielded the most significant improvement in Duration JSD to 0.0153, which, similar to pitch analysis, came with an increased WER of 3.9. The results underline the delicate balance required in tuning \u03b1 to optimize between diversity and stability of prosody without compromising speech intelligibility. 9https://huggingface.co/facebook/wav2vec2-large 10https://huggingface.co/facebook/hubert-large-ll60k 11https://huggingface.co/microsoft/wavlm-large 12 Table 4: The ablation study of control factor for pitch \u03b1 Pitch JSD \u2193 WER\u2193 0 0.072 2.8 0.2 0.067 2.8 1 0.063 3.7 Table 5: The ablation study of control factor for duration \u03b1 Duration JSD \u2193 WER \u2193 0 0.0175 2.8 0.2 0.0168 2.8 1 0.0153 3.9 4.5 Evaluation Results for Voice Conversion In this section, we present the evaluation results of our voice conversion system, FlashSpeech, in comparison with state-of-the-art methods, including YourTTS 12 Casanova et al. (2022) and DDDMVC 13 Choi et al. (2024). We conduct the experiments with their official checkpoints in our internal test set. Table 6: Voice Conversion Method CMOS \u2191 SMOS \u2191 Sim-O \u2191 YourTTS Casanova et al. (2022) -0.16 3.26 0.23 DDDM-VC Choi et al. (2024) -0.28 3.43 0.28 Ours 0.00 3.50 0.35 Our system outperforms both YourTTS and DDDM-VC in terms of CMOS, SMOS and Sim-O, demonstrating its capability to produce converted voices with high quality and similarity to the target speaker. These results confirm the effectiveness of our FlashSpeech approach in voice conversion tasks. 4.6 Conclusions and Future Work In this paper, we presented FlashSpeech, a novel speech generation system that significantly reduces computational costs while maintaining high-quality speech output. Utilizing a novel adversarial consistency training method and an LCM, FlashSpeech outperforms existing zero-shot TTS systems in efficiency, achieving speeds about 20 times faster without compromising on voice quality, similarity, and robustness. In the future, we aim to further refine the model to improve the inference speed and reduce computational demands. In addition, we will expand the data scale and enhance the system\u2019s ability to convey a broader range of emotions and more nuanced prosody. For future applications, FlashSpeech can be integrated for real-time interactions in applications such as virtual assistants and educational tools. 12https://github.com/coqui-ai/TTS 13https://github.com/hayeong0/DDDM-VC 13" + } + ] +} \ No newline at end of file