diff --git "a/intro_28K/test_introduction_long_2405.04272v1.json" "b/intro_28K/test_introduction_long_2405.04272v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.04272v1.json" @@ -0,0 +1,103 @@ +{ + "url": "http://arxiv.org/abs/2405.04272v1", + "title": "BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models", + "abstract": "In this paper, we present an unsupervised single-channel method for joint\nblind dereverberation and room impulse response estimation, based on posterior\nsampling with diffusion models. We parameterize the reverberation operator\nusing a filter with exponential decay for each frequency subband, and\niteratively estimate the corresponding parameters as the speech utterance gets\nrefined along the reverse diffusion trajectory. A measurement consistency\ncriterion enforces the fidelity of the generated speech with the reverberant\nmeasurement, while an unconditional diffusion model implements a strong prior\nfor clean speech generation. Without any knowledge of the room impulse response\nnor any coupled reverberant-anechoic data, we can successfully perform\ndereverberation in various acoustic scenarios. Our method significantly\noutperforms previous blind unsupervised baselines, and we demonstrate its\nincreased robustness to unseen acoustic conditions in comparison to blind\nsupervised methods. Audio samples and code are available online.", + "authors": "Eloi Moliner, Jean-Marie Lemercier, Simon Welker, Timo Gerkmann, Vesa V\u00e4lim\u00e4ki", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "When acoustic waves propagate in enclosures and get reflected by walls, the sound received is perceived as reverberated, which can significantly degrade speech intelligibility and quality [1]. The goal of dereverberation is to recover the anechoic component from rever- berant speech. We focus here on the single-channel scenario, where measurements from only one microphone are available, which is sig- nificantly more challenging than multi-channel scenarios [2]. Traditional dereverberation algorithms assume some statistical properties, such as Gaussianity or sparsity, about the anechoic and reverberant signals. These properties are leveraged to perform dere- verberation in the time, spectral or cepstral domain [3]. These meth- ods can tackle informed scenarios, where the room impulse response (RIR) is known [4, 5] as well as blind scenarios where the RIR is unknown [6, 7]. Informed dereverberation is easier than blind dere- verberation, but most scenarios in real-life applications are blind, as the RIR is either not measured beforehand, or becomes invalid even with the slightest deviations in receiver or emitter positions. Data-driven approaches rely less on such assumptions but rather learn the signal properties and structures from data [8]. Most of these methods are based on supervised learning using pairs of anechoic and reverberant speech. Supervised predictive mod- els have been widely used for blind dereverberation, including time-frequency (T-F) maskers [9], time-domain methods [10] and \u2217These authors contributed equally to this work. 1uhh.de/sp-inf-buddy. spectro-temporal mapping [11]. Generative models represent an- other category of dereverberation algorithms aiming to learn the dis- tribution of anechoic speech conditioned on reverberant input. 
Some blind supervised methods using generative models such as diffusion models [12, 13] have recently been proposed [14, 15]. However, supervised approaches struggle with limited generalization to diverse acoustic conditions due to the scarcity and variability of available RIR data. Unsupervised approaches offer the potential to circumvent such limitations, as they do not require paired anechoic/reverberant data.
This paper builds upon prior work [16], which proposed an unsupervised method for informed single-channel dereverberation based on diffusion posterior sampling. The previous study showed the potential of leveraging diffusion models as a strong clean-speech prior which, when combined with a criterion to match the measurement, reached state-of-the-art dereverberation in an informed scenario [16]. This paper extends the method to blind dereverberation, where the unknown RIR is estimated alongside the anechoic speech. We parameterize the RIR with a model-based subband filter, where each subband of the reverberation filter is modeled by an exponentially decaying signal. The resulting algorithm is an optimization scheme alternating between the diffusion process generating the anechoic speech and the parameter search estimating the acoustic conditions.
Previous works in related domains explore various parameter estimation techniques for solving blind inverse problems with diffusion posterior sampling. For image deblurring, [17] propose to use a parallel diffusion process to estimate the deblurring kernel, while [18] adopts an expectation-maximization approach. In the audio domain, [19] address the problem of blind bandwidth extension by iteratively refining the parameters of the lowpass filter degradation. Closely related is the work by Saito et al. [20], which performs unsupervised blind dereverberation using DDRM [21] with the weighted prediction error (WPE) algorithm [6] as initialization.
We name our method BUDDy, for Blind Unsupervised Dereverberation with Diffusion Models. We show experimentally that BUDDy efficiently removes reverberation from speech utterances in many acoustic scenarios, thereby largely outperforming previous blind unsupervised techniques. As supervision is not required during the training phase, we demonstrate that BUDDy does not lose performance when presented with unseen acoustic conditions, as opposed to existing blind supervised dereverberation approaches.",
+ "main_content": "2.1. Diffusion-Based Generative Models
Diffusion-based generative models, or simply diffusion models [12, 22], emerged as a class of generative models that learn complex data distributions via iterative denoising. At training time, the target data distribution is transformed into a tractable Gaussian distribution by a forward process that incrementally adds noise. During inference, the reverse process refines an initial noise sample into a data sample by progressively removing noise. The reverse diffusion process, which transports noise samples from a Gaussian prior to the data distribution p_data, can be characterized by the following probability flow ordinary differential equation (ODE):

    dx_τ = [ f(x_τ, τ) − (1/2) g(τ)² ∇_{x_τ} log p(x_τ) ] dτ,   (1)

where τ indexes the diffusion steps flowing in reverse from T_max to 0. The current diffusion state x_τ starts from the initial condition x_{T_max} ∼ N(0, σ(T_max)² I) and ends at x_0 ∼ p_data.
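To make the sampling procedure concrete, here is a minimal first-order sketch (not the paper's exact sampler, which is the stochastic Euler-Heun method mentioned in Sec. 4.3) of how the probability flow ODE in Eq. (1) can be integrated under the variance-exploding choices introduced just below, i.e. f = 0, g(τ) = √(2τ), σ(τ) = τ, together with the warped step grid the paper later specifies in Eq. (12). The function score_fn stands in for the trained score model s_θ; all other names are illustrative.

    import numpy as np

    def sigma_grid(n_steps=200, t_max=0.5, t_min=1e-4, rho=10.0):
        # Warped discretization of Eq. (12): sigma[0] = t_max, ..., sigma[-1] = t_min
        i = np.arange(n_steps)
        return (t_max**(1/rho) + i / (n_steps - 1)
                * (t_min**(1/rho) - t_max**(1/rho)))**rho

    def sample_prior(score_fn, shape, sigmas, rng=np.random.default_rng(0)):
        # Euler integration of dx = -sigma * score(x, sigma) dsigma,
        # i.e. the VE probability-flow ODE of Eq. (1)
        x = sigmas[0] * rng.standard_normal(shape)   # x_{T_max} ~ N(0, sigma_max^2 I)
        for n in range(len(sigmas) - 1):
            score = score_fn(x, sigmas[n])
            x = x - sigmas[n] * (sigmas[n + 1] - sigmas[n]) * score
        return x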
We adopt the variance-exploding parameterization of Karras et al. [23], where the drift and diffusion are defined as f(x_τ, τ) = 0 and g(τ) = √(2τ), respectively. Similarly, we adopt σ(τ) = τ as the noise variance schedule, which defines the so-called transition kernel, i.e., the marginal densities:

    p_τ(x_τ | x_0) = N(x_τ; x_0, σ(τ)² I).

The score function ∇_{x_τ} log p(x_τ) is intractable at inference time, as we do not have access to x_0. In practice, a score model parameterized with a deep neural network, s_θ(x_τ, τ), is trained to estimate the score function using a denoising score matching objective [24].

2.2. Diffusion Posterior Sampling for Dereverberation
Single-channel dereverberation can be considered as the inverse problem of retrieving the anechoic utterance x_0 ∈ R^L from the reverberant measurement y ∈ R^L, which is often modeled by convolving the anechoic speech with an RIR h ∈ R^{L_h}, expressed as y = h ∗ x_0. We aim to solve this inverse problem by sampling from the posterior distribution p(x_0 | y, h) of anechoic speech given the measurement and the RIR. We adopt diffusion models for this posterior sampling task by replacing the score function ∇_{x_τ} log p(x_τ) in (1) with the posterior score ∇_{x_τ} log p(x_τ | y, h) [13]. Applying Bayes' rule, the posterior score is obtained as

    ∇_{x_τ} log p(x_τ | y, h) = ∇_{x_τ} log p(x_τ) + ∇_{x_τ} log p(y | x_τ, h),   (2)

where the first term, or prior score, can be approximated with a trained score model s_θ(x_τ, τ) ≈ ∇_{x_τ} log p(x_τ). The likelihood p(y | x_τ, h) is generally intractable because we lack a signal model for y given the diffusion state x_τ. We introduce in the next section a series of approximations to make its computation tractable.

3. METHODS
3.1. Likelihood Score Approximation
In order to obtain a tractable likelihood computation, we posit, as in [25], that a one-step denoising estimate of x_0 at time τ can serve as a sufficient statistic for x_τ in this context, i.e., that p(y | x_τ, h) ≈ p(y | x̂_0, h). Such an estimate x̂_0 can be obtained using the score model:

    x̂_0 := x̂_0(x_τ, τ) = x_τ − σ(τ)² s_θ(x_τ, τ).   (3)

Furthermore, we consider here that the convolution model remains valid when using this denoised estimate, and therefore that p(y | x̂_0, h) ≈ p(y | x̂_0 ∗ h). Finally, we model the estimation error as following a Gaussian distribution in the compressed STFT domain:

    p(y | x̂_0 ∗ h) = N(S_comp(y); S_comp(x̂_0 ∗ h), η² I),   (4)

where S_comp(y) = |STFT(y)|^{2/3} exp{j ∠STFT(y)} is the compressed spectrogram. We apply this compression to account for the heavy-tailedness of speech distributions [26]. With this series of approximations, we obtain the following likelihood score:

    ∇_{x_τ} log p(y | x_τ, h) ≈ −ζ(τ) ∇_{x_τ} C(y, h ∗ x̂_0),   (5)

where the function C(·, ·) is defined as

    C(y, ŷ) = (1/M) Σ_{m=1}^{M} Σ_{k=1}^{K} ∥S_comp(y)_{m,k} − S_comp(ŷ)_{m,k}∥²,   (6)

The weighting parameter ζ(τ) controls the trade-off between adherence to the prior data distribution and fidelity to the observed data. Under our Gaussian assumption (4), its theoretical value should depend on the unknown variance η as ζ(τ) = 1/(2η²). In practice, we resort to the same parameterization of ζ(τ) as in [19, 27].
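The following PyTorch sketch shows how the compressed-spectrogram cost of Eq. (6) and the likelihood score of Eq. (5) could be computed with automatic differentiation. It is a simplified illustration under assumed STFT settings; s_comp, cost_C, and likelihood_score are hypothetical helper names, and forward_op stands for any differentiable measurement model (plain convolution with h here, the parametric operator A_ψ later).

    import torch

    def s_comp(x, n_fft=512, hop=128):
        # Compressed spectrogram: magnitude**(2/3) with the original phase
        X = torch.stft(x, n_fft, hop, window=torch.hann_window(n_fft),
                       return_complex=True)
        return X.abs().pow(2 / 3) * torch.exp(1j * X.angle())

    def cost_C(y, y_hat):
        # Eq. (6): squared error between compressed spectrograms,
        # normalized by the number M of time frames (last STFT dimension)
        D = s_comp(y) - s_comp(y_hat)
        return D.abs().pow(2).sum() / D.shape[-1]

    def likelihood_score(y, x_tau, sigma_tau, score_model, forward_op, zeta):
        # Eq. (5): -zeta * grad_x C(y, forward_op(x0_hat(x))), where
        # x0_hat is the one-step denoising estimate of Eq. (3)
        x = x_tau.detach().requires_grad_(True)
        x0_hat = x - sigma_tau**2 * score_model(x, sigma_tau)   # Eq. (3)
        c = cost_C(y, forward_op(x0_hat))
        return -zeta * torch.autograd.grad(c, x)[0]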
3.2. Reverberation Operator
The employed reverberation operator relies on a subband filtering approximation [28], applied in the short-time Fourier transform (STFT) domain. Let H := STFT(h) ∈ C^{N_h×K} represent the STFT of an RIR h, with N_h time frames and K frequency bins. Similarly, let X ∈ C^{M×K} and Y ∈ C^{(M+N_h−1)×K} denote the STFTs of the anechoic speech x_0 and the reverberant speech y, respectively. The subband convolution operation applies independent convolutions along the time dimension of each frequency band:

    Y_{m,k} = Σ_{n=0}^{N_h} H_{n,k} X_{m−n,k}.   (7)

In the blind scenario, we need to estimate H, which is an arduous task without knowledge of the anechoic speech. We constrain the space of possible solutions by designing a structured, differentiable RIR prior whose parameters ψ can be estimated through gradient descent. We denote the complete forward reverberation operator, including forward and inverse STFT, as A_ψ(·): R^L → R^L. We denote by A ∈ R^{N_h×K} and Φ ∈ R^{N_h×K} the RIR magnitudes and phases of H, respectively. We parameterize the magnitude matrix A with a multi-band exponential decay model defined in B < K frequency bands. Let A′ ∈ R^{N_h×B} be the subsampled version of A in the B selected frequency bands. Each frequency band b is characterized by its weight w_b and exponential decay rate α_b, such that the corresponding subband magnitude filter can be expressed as:

    A′_{n,b} = w_b e^{−α_b n}.   (8)

Once the weight and decay rate parameters are estimated, we reconstruct the magnitudes A by interpolating the subsampled A′ using A = exp(lerp(log(A′))), where lerp denotes linear interpolation along the frequency axis. Given the lack of structure of RIR phases, we perform independent optimization for each phase factor in Φ. The resulting set of parameters to optimize is therefore ψ = {Φ, (w_b, α_b)_{b=1,...,B}}. After each optimization step, the estimated time-frequency RIR H is further processed through a projection step:

    H = STFT(δ ⊕ P_min(iSTFT(H))).   (9)

This operation primarily ensures STFT consistency [29] of H. We additionally include a projection P_min that ensures the time-domain RIR has minimum phase lag, to guarantee a stable inverse filter, using the Hilbert transform method [30]. Finally, to make the direct-to-reverberation ratio depend only on the late reverberation and to enforce further constraints on ψ for a more stable optimization, we take the direct path to be at the first sample and with amplitude one. This is achieved by replacing the first sample of the time-domain RIR with a unit impulse, as indicated by the operation δ ⊕ (·).

Fig. 1: Blind unsupervised dereverberation alternating between RIR estimation and posterior sampling for speech reconstruction.
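As a minimal sketch of the operator just described, the snippet below builds the exponential-decay subband magnitudes of Eq. (8) and applies the subband convolution of Eq. (7). The interpolation from B bands to all K bins, the phase parameters, and the projection of Eq. (9) are omitted for brevity; all names are illustrative.

    import torch

    def subband_rir_magnitudes(w, alpha, n_frames):
        # Eq. (8): A'[n, b] = w_b * exp(-alpha_b * n), one decay per band.
        # w, alpha: (B,) tensors of per-band weights and decay rates.
        n = torch.arange(n_frames).unsqueeze(1)        # (N_h, 1)
        return w * torch.exp(-alpha * n)               # (N_h, B)

    def subband_convolve(X, H):
        # Eq. (7): independent convolution along time in every frequency bin.
        # X: (M, K) anechoic STFT, H: (N_h, K) RIR STFT -> Y: (M+N_h-1, K)
        M, K = X.shape
        Nh = H.shape[0]
        Y = torch.zeros(M + Nh - 1, K, dtype=X.dtype)
        for n in range(Nh):
            Y[n:n + M] += H[n] * X                     # shift-and-add per frame
        return Y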
3.3. Blind Dereverberation Inference
The inference process solves the following objective:

    x̂_0, ψ̂ = arg min_{x_0, ψ} C(y, A_ψ(x_0)) + R(ψ),   s.t. x_0 ∼ p_data.   (10)

This objective seeks the optimal speech x̂_0 and RIR parameters ψ̂ that minimize the reconstruction error C(y, A_ψ(x_0)) while also incorporating a regularization term R(ψ). An essential aspect is the constraint x_0 ∼ p_data, which ensures that the estimated signal x̂_0 adheres to the distribution p_data of anechoic speech samples. This constraint is implemented in a soft manner by leveraging a pretrained score model s_θ(x_τ, τ) trained on anechoic speech. The inference algorithm is outlined in Algorithm 1 and visualized in Fig. 1, using the discretization described in Eq. (12). The algorithm employs the likelihood score approximation from Sec. 3.1, but replaces the convolution with the reverberation operator A_ψ(·), whose parameters ψ are optimized in parallel with the speech signal through gradient descent. We introduce in (10) a noise regularization term R(ψ):

    R(ψ) = (1/N_h) Σ_{l=1}^{N_h} Σ_{k=1}^{K} ∥S_comp(ĥ_ψ)_{l,k} − S_comp(ĥ_ψ′ + σ′ v)_{l,k}∥²,   (11)

where ĥ_ψ = A_ψ(δ) represents the estimated RIR in the waveform domain, v ∼ N(0, I) is a vector of white Gaussian noise, and ĥ_ψ′ is a copy of the current estimate of ĥ_ψ to which the arg min in (10) does not apply. In code, this is analogous to detaching the gradients of ĥ_ψ with a stop-gradient operator. We adopt an annealed schedule for the noise level σ′(τ), resembling the score model schedule σ(τ) but with different hyperparameters. This regularization term injects noise into the RIR parameter gradients, with decreasing noise power, which enables a wider and smoother exploration while still allowing convergence toward the end of the optimization.

Algorithm 1: Inference algorithm
Require: reverberant speech y
  x_init ← WPE(y)
  Sample x_N ∼ N(x_init, σ_N² I)                        ▷ Warm initialization
  Initialize ψ_N                                        ▷ Initialize the RIR parameters
  for n = N, ..., 1 do                                  ▷ Discrete steps backwards
      s_n ← s_θ(x_n, τ_n)                               ▷ Evaluate score model
      x̂_0 ← x_n − σ_n² s_n                              ▷ Get one-step denoising estimate
      x̂_0 ← Rescale(x̂_0)
      ψ_{n−1}^{0} ← ψ_n                                 ▷ Use the RIR parameters from last step
      for j = 0, ..., N_its − 1 do                      ▷ RIR optimization
          J_RIR(ψ_{n−1}^{j}) ← C(y, A_{ψ_{n−1}^{j}}(x̂_0)) + R(ψ_{n−1}^{j})
          ψ_{n−1}^{j+1} ← ψ_{n−1}^{j} − Adam(J_RIR(ψ_{n−1}^{j}))   ▷ Optimization step
          ψ_{n−1}^{j+1} ← project(ψ_{n−1}^{j+1})        ▷ Projection step
      end for
      ψ_{n−1} ← ψ_{n−1}^{N_its}
      g_n ← ζ(τ_n) ∇_{x_n} C(y, A_{ψ_{n−1}}(x̂_0))      ▷ Likelihood score approximation
      x_{n−1} ← x_n − σ_n (σ_{n−1} − σ_n)(s_n + g_n)    ▷ Update step
  end for
  return x_0                                            ▷ Reconstructed audio signal

4. EXPERIMENTAL SETUP
4.1. Data
We use VCTK [34] as the clean speech corpus, selecting 103 speakers for training, 2 for validation, and 2 for testing. We curate recorded RIRs from various public datasets (please visit our code repository for details). In total, we obtain approximately 10,000 RIRs, split between training, validation, and testing with ratios 0.9, 0.05, and 0.05, respectively. The training and validation sets are only used to train the baselines which require coupled reverberant/anechoic data. All data is resampled to 16 kHz.

4.2. Baselines
We compare our method BUDDy to several blind supervised baselines, namely NCSN++M [31] and the diffusion-based SGMSE+ [14] and StoRM [15]. We also include blind unsupervised approaches leveraging traditional methods, such as WPE [6] and Yohena et al. [7], as well as the diffusion-based methods of Saito et al. [20] and GibbsDDRM [33], with code provided by the authors. For WPE, we use 5 iterations, a filter length of 50 STFT frames (400 ms), and a delay of 2 STFT frames (16 ms).

Table 1: Dereverberation results obtained on VCTK-based reverberant datasets. Values indicate mean and standard deviation. We indicate for each method whether it is blind (i.e., has no knowledge of the RIR) and/or unsupervised. Boldface numbers indicate best performance for supervised and unsupervised methods separately. For all metrics, higher is better.

Method | Blind | Unsup. | Matched DNS-MOS | Matched PESQ | Matched ESTOI | Mismatched DNS-MOS | Mismatched PESQ | Mismatched ESTOI
Reverberant | – | – | 3.14 ± 0.52 | 1.61 ± 0.37 | 0.50 ± 0.14 | 3.05 ± 0.47 | 1.57 ± 0.29 | 0.47 ± 0.11
RIF+Post [5] | ✗ | ✓ | 3.41 ± 0.47 | 2.66 ± 0.40 | 0.76 ± 0.09 | 3.55 ± 0.45 | 2.86 ± 0.31 | 0.78 ± 0.09
InfDerevDPS [16] | ✗ | ✓ | 3.91 ± 0.35 | 3.77 ± 0.41 | 0.83 ± 0.09 | 3.92 ± 0.32 | 3.69 ± 0.31 | 0.84 ± 0.08
NCSN++M [31] | ✓ | ✗ | 3.75 ± 0.38 | 2.85 ± 0.55 | 0.80 ± 0.10 | 3.61 ± 0.39 | 2.08 ± 0.47 | 0.64 ± 0.09
SGMSE+M [14, 31] | ✓ | ✗ | 3.88 ± 0.32 | 2.99 ± 0.48 | 0.78 ± 0.09 | 3.74 ± 0.34 | 2.48 ± 0.47 | 0.69 ± 0.09
StoRM [15] | ✓ | ✗ | 3.90 ± 0.33 | 3.33 ± 0.48 | 0.82 ± 0.10 | 3.83 ± 0.32 | 2.51 ± 0.53 | 0.67 ± 0.09
Yohena and Yatabe [7] | ✓ | ✓ | 2.99 ± 0.56 | 1.80 ± 0.33 | 0.55 ± 0.12 | 2.94 ± 0.44 | 1.71 ± 0.29 | 0.51 ± 0.10
WPE [32] | ✓ | ✓ | 3.24 ± 0.54 | 1.81 ± 0.42 | 0.57 ± 0.14 | 3.10 ± 0.48 | 1.74 ± 0.37 | 0.54 ± 0.12
Saito et al. [20] | ✓ | ✓ | 3.22 ± 0.56 | 1.68 ± 0.40 | 0.51 ± 0.13 | 3.12 ± 0.52 | 1.70 ± 0.33 | 0.52 ± 0.10
GibbsDDRM [33] | ✓ | ✓ | 3.33 ± 0.53 | 1.70 ± 0.37 | 0.51 ± 0.13 | 3.30 ± 0.52 | 1.75 ± 0.36 | 0.52 ± 0.11
BUDDy (proposed) | ✓ | ✓ | 3.76 ± 0.41 | 2.30 ± 0.53 | 0.66 ± 0.12 | 3.74 ± 0.38 | 2.24 ± 0.54 | 0.65 ± 0.12

4.3. Hyperparameters and Training Configuration
Data representation: We train the score model s_θ using only the anechoic data from VCTK. For training, 4-s segments are randomly extracted from the utterances. Using publicly available code, the blind supervised models NCSN++M [31], SGMSE+ [14], and StoRM [15] are trained using coupled reverberant/anechoic speech, where the reverberant speech is obtained by convolving the anechoic speech from VCTK with the normalized RIRs.
Reverberation operator: For all methods, STFTs are computed using a Hann window of 32 ms and a hop size of 8 ms. For subband filtering, we further employ 50% zero-padding to avoid aliasing artifacts. Given our sampling rate of f_s = 16 kHz, this results in K = 513 frequency bins. We set the number of STFT frames of our operator to N_h = 100 (800 ms). We subsample the frequency scale into B = 26 bands, with a 125-Hz spacing between 0 and 1 kHz, a 250-Hz spacing between 1 and 3 kHz, and a 500-Hz spacing between 3 and 8 kHz. We optimize the RIR parameters ψ with Adam, with a learning rate of 0.1, momentum parameters β₁ = 0.9 and β₂ = 0.99, and N_its = 10 optimization iterations per diffusion step. We constrain the weights w_b between 0 and 40 dB and the decays α_b between 0.5 and 28. This prevents the optimization from approaching degenerate solutions at early sampling stages. Furthermore, we rescale the denoised estimate x̂_0 at each step to match the empirical dataset standard deviation σ_data = 5·10⁻², so as to enforce a constraint on the absolute magnitudes of ĥ_ψ and x̂_0.
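As a concrete illustration of the inner loop of Algorithm 1 under the settings above, the sketch below performs one block of N_its Adam updates of the RIR parameters and then applies the stated box constraints. It is a simplified stand-in: psi is assumed to be a dict of leaf tensors (log-magnitude weights, decays, phases), the STFT-consistency and minimum-phase projection of Eq. (9) is reduced to a comment, and operator, cost_C, and reg_R are hypothetical callables for A_ψ, C, and R.

    import torch

    def rir_optimization_block(psi, x0_hat, y, operator, cost_C, reg_R,
                               n_its=10, lr=0.1, betas=(0.9, 0.99)):
        # Inner loop of Algorithm 1: N_its Adam steps on the RIR parameters
        opt = torch.optim.Adam(list(psi.values()), lr=lr, betas=betas)
        for _ in range(n_its):
            opt.zero_grad()
            # x0_hat is detached: the speech estimate is held fixed here
            loss = cost_C(y, operator(x0_hat.detach(), psi)) + reg_R(psi)
            loss.backward()
            opt.step()
            with torch.no_grad():                     # projection step
                psi["w_db"].clamp_(0.0, 40.0)         # weights in [0, 40] dB
                psi["alpha"].clamp_(0.5, 28.0)        # decays in [0.5, 28]
                # (STFT-consistency / minimum-phase projection of Eq. (9) here)
        return psi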
Forward and reverse diffusion: We set the extremal diffusion times to T_max = 0.5 and T_min = 10⁻⁴. For reverse diffusion, we follow Karras et al. [23] and discretize the diffusion time axis using N = 200 steps according to:

    ∀ n < N:  τ_n = σ_n = ( T_max^{1/ρ} + (n/(N−1)) (T_min^{1/ρ} − T_max^{1/ρ}) )^ρ,   (12)

with warping ρ = 10. We use the second-order Euler-Heun stochastic sampler of [23] with S_churn = 50 and ζ′ = 0.5 (prior scaling, see [27]), and the initial point x_init is taken to be the output of WPE [6] (with the same parameters as the WPE baseline) plus Gaussian noise with standard deviation σ = T_max. The annealing schedule σ′(τ) of the noise regularization term in (11) is the same as the diffusion noise schedule σ(τ), but bounded between the extremal values σ′_min = 5×10⁻⁴ and σ′_max = 10⁻².
Network architecture: To remain consistent with [16], the unconditional score network architecture is NCSN++M [15, 31], a lighter variant of NCSN++ [13] with 27.8M parameters instead of 65M.
Training configuration: We adopt Adam as the optimizer to train the unconditional score model, with a learning rate of 10⁻⁴ and an effective batch size of 16, for 190k steps. We track an exponential moving average of the DNN weights with a decay of 0.999.
Evaluation metrics: We assess the quality and intelligibility of speech using the intrusive Perceptual Evaluation of Speech Quality (PESQ) [35] and extended short-term objective intelligibility (ESTOI) [36]. We also employ the non-intrusive DNS-MOS [37] as a DNN-based mean opinion score (MOS) approximation.

5. RESULTS AND DISCUSSION
Table 1 shows the dereverberation results for all baselines and indicates whether each approach is blind and/or unsupervised. We include the results of RIF+Post [5] and InfDerevDPS [16] in the informed scenario to show the upper bound on dereverberation quality one can achieve with perfect knowledge of the room acoustics. We use the same score model s_θ and cost function C(·, ·) for InfDerevDPS [16] as for BUDDy. The blind supervised approaches NCSN++M, SGMSE+M, and StoRM largely profit from supervision during training and boast better performance than the unsupervised methods. However, in the mismatched setting, their performance dwindles because of their limited generalizability. In contrast, the proposed BUDDy benefits from unsupervised training, so that modifying the acoustic conditions barely impacts its performance: for instance, NCSN++M loses 0.78 PESQ when switching from the matched to the mismatched case, whereas BUDDy loses only 0.06. Our method then outperforms NCSN++M and comes within reach of the other supervised approaches, although the generative nature of SGMSE+ and StoRM allows them to retain a relatively high generalization ability. We also observe that traditional blind unsupervised methods such as WPE [6] and Yohena and Yatabe [7] can only perform limited dereverberation, as they do not benefit from the strong anechoic speech prior that learning-based methods parameterized with deep neural networks offer.
Finally, we note that BUDDy performs significantly better on all metrics than the diffusion-based blind unsupervised baselines of Saito et al. [20] and GibbsDDRM [33], as these perform only mild dereverberation in the presented acoustic conditions, where the input direct-to-reverberant ratio is significantly lower than in the authors' setups.

6. CONCLUSIONS
This paper presents BUDDy, the first unsupervised method simultaneously performing blind dereverberation and RIR estimation using diffusion posterior sampling. BUDDy significantly outperforms traditional and diffusion-based unsupervised blind approaches. Unlike blind supervised methods, which often struggle to generalize to unseen acoustic conditions, our unsupervised approach overcomes this limitation thanks to its ability to adapt the reverberation operator to a broad range of room impulse responses. While blind supervised methods outperform our approach when the tested conditions match those seen at training time, our method is on par with, or even outperforms, some supervised baselines in the mismatched setting.",
+ "additional_info": [
+ {
+ "url": "http://arxiv.org/abs/2404.15189v1",
+ "title": "Text2Grasp: Grasp synthesis by text prompts of object grasping parts",
+ "abstract": "The hand plays a pivotal role in the human ability to grasp and manipulate objects, and controllable grasp synthesis is the key to successfully performing downstream tasks. Existing methods that use human intention or task-level language as control signals for grasping inherently face ambiguity. To address this challenge, we propose a grasp synthesis method guided by text prompts of object grasping parts, Text2Grasp, which provides more precise control. Specifically, we present a two-stage method that includes a text-guided diffusion model TextGraspDiff to first generate a coarse grasp pose, then apply a hand-object contact optimization process to ensure both plausibility and diversity. Furthermore, by leveraging a Large Language Model, our method facilitates grasp synthesis guided by task-level and personalized text descriptions without additional manual annotations. Extensive experiments demonstrate that our method achieves not only accurate part-level grasp control but also comparable performance in grasp quality.",
+ "authors": "Xiaoyun Chang, Yi Sun",
+ "published": "2024-04-09",
+ "updated": "2024-04-09",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Modeling hand grasps has recently gained extensive attention due to its wide applications in human-computer interaction [24], virtual reality [11, 39], and imitation learning in robotics [12]. To predict plausible human-like grasp poses for a given object, many hand-object interaction datasets [4, 5, 8, 9] have been built in recent years to promote research [19, 20, 32] on learning from human experience. However, these works concentrate on stable grasps, which are not suitable for task-oriented grasping. Different tasks necessitate specific types of grasps. For instance, in a cutting task, people typically grasp a knife by its handle rather than the blade. Similarly, when handing over a knife, it is safer for the deliverer to hold the blade, minimizing the risk of injury to the receiver. Consequently, controllable grasp synthesis is of paramount importance.

Figure 1. Given an object, Text2Grasp can generate specific hand grasps by interpreting various text inputs: a) template text, e.g., "Grasp the cap of the headphones."; b) personalized text, e.g., "Hold onto the bottle's cap firmly."; c) task-level text, e.g., "I want to cut an apple using the knife."

To facilitate controllable synthesis, many studies [2, 14, 32, 41] introduce grasp datasets containing various sets of human intentions, such as use, pass, twist, and so on. Furthermore, [41] and [14] translate these intentions into one-hot embeddings, combining them with object point cloud features to achieve intention-guided grasp synthesis. Considering that language is a more natural mode of interaction, some studies [23, 31, 34] employ task-level text descriptions as inputs for predicting the 6-DoF pose of a parallel-jaw gripper. However, utilizing a fixed set of intentions or task-level text descriptions for grasping inherently faces ambiguity, primarily in two respects: 1) Same intention but different grasps. For instance, "lifting a mug" may involve different grasp types, grasping either the handle or the body. 2) Different intentions but the same grasp. For instance, "lifting" and "twisting" might share the same initial grasping pose holding a bottle's neck. Such complexities increase the difficulty of annotating datasets and achieving model convergence.
To overcome these limitations, we propose a grasp synthesis method, abbreviated Text2Grasp, guided by text prompts of object grasping parts rather than intentions or task descriptions, which cannot explicitly indicate which part of the object to grasp. Text2Grasp takes an object and a predefined text template, "Grasp the [Object Part] of the [Object Category]", as input, and generates a grasp pose targeting the specified part of the object for manipulation. This part-level guidance reduces uncertainty compared to intent-based or task-level guidance, facilitating better convergence of the grasp generation network. Specifically, we present a two-stage method that includes a text-guided diffusion model TextGraspDiff to first generate a coarse grasp pose, followed by a hand-object contact optimization process to ensure both plausibility and diversity. Unlike all-finger optimization approaches that prioritize maximum object-finger contact [5], resulting mainly in closed-finger grasps, our optimization emphasizes the contact between the fingers and the object part specified by the text description. This strategy ensures physical realism, diversity in grasps, and alignment with the text.
Furthermore, the template representation of text prompts for object grasping parts also supports grasp synthesis guided by task-level and personalized text prompts, since an LLM [3] is able to divide a task description into several execution steps, including the parts of the object that should be grasped. Subsequently, Text2Grasp can generate task-level grasps taking the inference results of the LLM as input. Moreover, the LLM allows for the expansion of our designed text templates, enriching the training dataset with personalized text descriptions.
In summary, our contributions are as follows:
• We propose Text2Grasp, a grasp synthesis method guided by text prompts of object grasping parts, offering a more natural interaction and precise grasp control.
• We introduce a two-stage method that includes a text-guided diffusion model TextGraspDiff to first generate a coarse grasp pose, followed by a hand-object contact optimization process to ensure both plausibility and diversity.
• By leveraging an LLM, our method facilitates grasp synthesis guided by task-level and personalized text descriptions without additional manual annotations.
Extensive experiments on public datasets demonstrate that our method achieves not only accurate part-level grasp control but also performance comparable to state-of-the-art methods in terms of grasp quality.",
+ "main_content": "2. Related Work
There has been a significant amount of research in the field of grasp synthesis. Here, we focus on realistic human grasp synthesis and review the most relevant works. Based on whether the grasp generation is controllable, we categorize synthesis algorithms into two types: uncontrolled grasp synthesis and controllable grasp synthesis.
Uncontrolled Grasp Synthesis. Uncontrolled grasp synthesis primarily aims to generate hand poses capable of stably grasping objects, without considering subsequent tasks. A trend has emerged to develop deep learning solutions, driven by the introduction of large-scale datasets of hand-object interactions [2, 5, 8, 9, 14, 32, 41]. These methods learn the latent distribution of hand-object contact information or hand parameters through generative models, including Generative Adversarial Networks (GANs) [7] and Conditional Variational Auto-Encoders (CVAEs) [30]. GanHand [5] first predicts the optimal grasp type from a taxonomy of 33 classes, and then employs a discriminator and an optimization to obtain a refined grasp. Instead of predicting MANO [29] parameters directly, ContactDB [1] uses thermal cameras to capture object contact maps that reflect the contact regions of an object after grasping, and utilizes a GAN to learn their distribution, facilitating grasp synthesis. Compared with GANs [7], CVAEs [30] are more popular in hand grasp synthesis because of their simple structure and one-step sampling procedure. GrabNet [32] conditions a CVAE on the Basis Point Set [25] representation of objects and samples from the low-dimensional space mapped through the CVAE to generate hand grasps; additionally, it incorporates a neural network to refine the coarse pose. This approach is also followed by OakInk [41] and AffordPose [14]. Grasping Field [17] and HALO [16] learn an implicit grasping field, using a CVAE as the hand representation, to produce high-fidelity hand surfaces. GraspTTA [15] exploits the contact map introduced by ContactDB [1] to refine the grasps generated by a CVAE during inference. Contact2Grasp [19] learns the distribution of contact maps for grasps with a CVAE and then maps the contacts to grasps. Moreover, ContactGen [20] introduces a three-component contact representation: the contact probability, the specific hand part making contact, and the orientation of the touch; a sequential VAE is proposed to learn these aspects for grasp synthesis. Despite their simplicity and direct sampling process, CVAEs often suffer from posterior collapse [13, 37, 43]. This leads to less diverse outputs, including simplistic samples such as a slightly closed hand shape.
To mitigate this problem, SceneDiffuser [13], UGG [21], and DexDiffuser [38] employ a diffusion-based denoising process, ensuring diverse sample generation through gradual denoising and thus avoiding a direct latent-space mapping. The aforementioned methods are capable of generating stable grasps. However, these grasps might not be consistent with human manipulation habits, making them less appropriate for downstream tasks. Consequently, instead of relying solely on object shape as input, we incorporate text prompts of object grasping parts into the diffusion model for controllable grasp synthesis. Moreover, in contrast to methods that utilize global optimization [5, 32] to refine grasps, our work introduces an optimization based on finger perception and object part perception. This strategy not only ensures grasp stability but also maintains diversity.
Controllable Grasp Synthesis. The capacity for controllable grasping is crucial, as it represents the first step of manipulation. To facilitate controllable grasp synthesis, several datasets [2, 14, 32, 41] encompass a range of human intentions for dexterous hand grasping. ContactPose [2] identifies two basic intentions: use and pass. Expanding on this, GrabNet [32] introduces lifting and off-hand passing. OakInk [41] goes further by incorporating intentions such as holding and receiving. AffordPose [14] elaborates on the use intention, creating hand-centric categories like twisting, pulling, and handle grasping, among eight total intentions. To generate intent-driven grasps, OakInk [41] and AffordPose [14] translate these intentions into word embeddings, combining them with object point cloud features as the condition of a CVAE to produce matching grasp poses. Considering that language is one of the most natural forms of human interaction, some studies employ task-level text descriptions as inputs for predicting grasps with a parallel-jaw gripper. These methods first construct extensive datasets of grasps that include task-level text descriptions. Based on these datasets, [31] and [33] adopt a generate-then-select methodology: they first generate a number of poses for the parallel-jaw gripper, followed by a selection process guided by the task-level text description. In contrast, [34] and [23] directly predict the position of the gripper on the input RGB image or object point cloud under the guidance of a task-level text description. Compared to the simple closing of a gripper, the human hand, with its higher degrees of freedom, must not only ensure a stable grasp but also maintain a plausible hand pose and interaction, making grasp synthesis for it more challenging.
These methods, utilizing a fixed set of intentions or task-level text descriptions for grasping, inherently face ambiguity, especially when defining intentions or tasks for identical parts of an object, such as a mug's handle and body. To address this, we develop a grasp synthesis method guided by text prompts of object grasping parts. Compared to the ambiguity of intention- or task-level guidance, part-level guidance offers lower uncertainty, which facilitates the convergence of grasp synthesis networks. Furthermore, in contrast to methods requiring manual labeling [23, 31, 33, 34], our method leverages a Large Language Model [3] to facilitate grasp synthesis guided by task-level and personalized text descriptions without additional manual labels.
3. Methods
Our aim is to achieve controllable grasp synthesis: given an object's point cloud and a text prompt of the object grasping part, the generated hand grasp should stably hold the object while aligning with the input text. To this end, we introduce a two-stage method that includes a text-guided diffusion model TextGraspDiff to first generate a coarse grasp pose, then apply a hand-object contact optimization process to ensure both plausibility and diversity. The overview of our method is illustrated in Fig. 2. In this section, we first present our semi-automatic text generation method in Sec. 3.1. We then detail the text-guided conditional diffusion model TextGraspDiff in Sec. 3.2, and the hand-object contact optimization in Sec. 3.3.

3.1. Semi-automatic Text Generation for Grasp
The key idea behind Text2Grasp is to leverage text prompts of object grasping parts to control grasp synthesis. Instead of relying on extensive manual annotations, which are extremely labor-intensive and time-consuming, we design a semi-automatic approach to generate text prompts for existing hand grasp datasets, as illustrated in Fig. 2. First, we predefine the text template, i.e., "Grasp the [Object Part] of the [Object Category]". The object category is directly provided by existing datasets, while the object part corresponding to each grasp can be determined through computation. Specifically, given the point cloud of an object and the hand mesh grasping it, we first compute the contact between the object and the hand, assigning a contact label to each point on the object. The "Object Part" label for each grasp is then determined as the object part with the most contact points. Finally, we can generate a text template for each grasp in the datasets.
Furthermore, we leverage a Large Language Model [3] with strong text comprehension capabilities to expand the template text, thereby generating more personalized text descriptions. For example, given the prompt "Please write [N] sentences with the same meanings as [template].", where N is the number of generated text descriptions, the LLM can infer a variety of plausible text descriptions to form our candidate text list L. During training, we randomly select one description from L as a training label for each grasp. This semi-automatic text generation approach facilitates personalized text inputs, thereby enhancing the flexibility of grasp synthesis control. In addition, the representation of text prompts for object grasping parts gives our method the ability to achieve task-level grasp synthesis, because a Large Language Model [3] can derive a description of the grasping action from a task description, such as grasping the mug's handle for a drinking task. Thus, we can accomplish task-level grasp synthesis without extra training.
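A minimal sketch of the contact-based template labeling described above could look as follows. The contact threshold and all names are illustrative assumptions; the paper does not specify how contact is computed, only that the part with the most contact points wins.

    import numpy as np

    def template_text(hand_verts, obj_pts, part_labels, category, part_names,
                      contact_thresh=0.005):
        # Label a grasp with "Grasp the [Part] of the [Category]" by majority
        # vote over object points in contact with the hand.
        # contact_thresh (in meters) is an assumed value.
        d = np.linalg.norm(obj_pts[:, None, :] - hand_verts[None, :, :], axis=-1)
        contact = d.min(axis=1) < contact_thresh      # per-point contact label
        if not contact.any():
            return None
        part = np.bincount(part_labels[contact]).argmax()   # most-contacted part
        return f"Grasp the {part_names[part]} of the {category}."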
3.2. Text to Grasp via Diffusion Model
In this section, we introduce TextGraspDiff, a conditional diffusion model for grasp synthesis guided by text prompts of object grasping parts. The overview of our method is illustrated in Fig. 2.

Figure 2. The overview of Text2Grasp. We present a semi-automatic approach to generate both the template text and the personalized text prompts for each grasp in the datasets, which are used to train TextGraspDiff. Given the point cloud of an object and a text description of the object grasping part, we introduce a two-stage method that includes a text-guided diffusion model TextGraspDiff to first generate a coarse grasp pose, then apply a hand-object contact optimization process to ensure both plausibility and diversity. The final hand mesh is obtained with the MANO model [29].

Taking the object point cloud o ∈ R^{N×3} and the part-level text prompt l, TextGraspDiff outputs a hand grasp vector g ∈ R^66. This grasp vector comprises the MANO [29] pose g_θ ∈ R^48, shape g_β ∈ R^10, the distance g_dis ∈ R^3 between the object and hand centroids, and the finger vector g_f ∈ R^5, which indicates which fingers are used for grasping. Adhering to the diffusion model outlined in [10], our method comprises both a forward process and a reverse process.
Forward process. Given a grasp vector g_0 sampled from the ground-truth data distribution, we add infinitesimal Gaussian noise ε_t ∼ N(0, β_t I) to g_0 and obtain a sequence of noised data {g_t}_{t=1}^{T} after T steps, where β_t adheres to a linear variance schedule:

    q(g_t | g_0) = N(g_t; √(ᾱ_t) g_0, (1 − ᾱ_t) I),   (1)

where α_t = 1 − β_t and ᾱ_t = Π_{s=1}^{t} α_s. After T steps, if the amount of added noise is sufficiently large, g_T approximately converges to a standard Gaussian distribution.
Reverse process. The reverse process transforms noise sampled from a Gaussian distribution back into a sample from the data distribution over a fixed number of timesteps. In our work, with the grasp vector g_0 as the target of the denoising process, and the object point cloud o and its text prompt l of the object grasping part as conditions, the conditional diffusion model yields p(g_{t−1} | g_t, o, l). Following [27, 35], we predict the grasp vector g_0 using a neural network G_θ. This process can be formalized as:

    p(g_{t−1} | g_t, o, l) = N(g_{t−1}; μ̃_θ(g_t, o, l, t), β̃_t I),   (2)

    μ̃_θ(g_t, o, l, t) = (√(ᾱ_{t−1}) β_t / (1 − ᾱ_t)) G_θ(g_t, o, l, t) + (√(α_t) (1 − ᾱ_{t−1}) / (1 − ᾱ_t)) g_t.   (3)

The detailed structure of the denoising network G_θ is shown in Fig. 2. We employ a Transformer [36] as the denoising network's backbone, which has demonstrated promising results in human motion synthesis [27, 35] and robotic hand grasp synthesis [13, 21]. For multi-condition inputs, including point clouds and text, we first employ PointNet++ [26] and the pretrained CLIP [28] model as respective encoders to extract the point cloud feature and the text feature. Instead of simply adding these multi-modal features, we design a Multi-Modal Attention module based on the Transformer [36] for effective fusion, using the point cloud feature f_p as the query and the text feature f_l as the key and the value. This fusion mechanism enables more precise control over grasp locations. Following [13], we incorporate a timestep-residual block and cross-attention for fusing the noise embedding feature with the conditions, ensuring that the network is effectively guided by the step t and the condition c. Finally, the grasp vector g_0 is predicted by the final output layer. The loss function of the network G_θ is:

    L = E_{g_0 ∼ q(g_0 | o, l), t ∼ [1, T]} [ ∥g_0 − G_θ(g_t, t, o, l)∥² ].   (4)

After training the denoising network G_θ, given a new object's point cloud and a text description as conditions, we first sample random noise from a Gaussian distribution, then apply the denoising network G_θ with Eqs. (2) and (3) over T steps, and finally obtain the grasp vector matching the object's part-level text description. The grasp hand mesh is then generated by feeding the final grasp vector to the MANO model [29].

3.3. Text-guided Contact Optimization
To produce physically more plausible grasps, many works [5, 9, 15, 19] introduce a refinement stage to enhance contact and minimize penetration. Their main focus is on stable grasping by aligning the hand with the closest object surface points, but these points may not match the text-described object parts, potentially leading to inaccurate grasp locations. Therefore, we propose a text-guided contact optimization method based on finger perception and text-guided object part perception. It guides specific fingers toward the object part described by the text, further enhancing grasp stability, diversity, and grasp part accuracy.
Hand finger perception. Rather than minimizing the distance between the object points and all prior hand contact vertices often utilized for grasping, we specifically optimize the distance between the object and the particular fingers used for grasping. We utilize a five-dimensional finger vector to define which fingers are used in grasping the object. For instance, as shown in Fig. 3, if the grasp involves the thumb, index, and middle finger, the finger vector g_f is [1, 1, 1, 0, 0]. Following human habits, this vector is generated alongside the grasp. During optimization, we minimize the distance only between the object and those fingers indicated by a 1 in the finger vector, avoiding the issue of all fingers contacting the object. The loss for the finger perception optimization is formulated as:

    H_c = ∪_{i=1}^{5} { C_i | g_f^i = 1 },   (5)

    L_hc(H_c, O) = (1/|H_c|) Σ_{h ∈ H_c} min_k ∥h − O_k∥,   (6)

where C_i represents the set of hand vertices belonging to the i-th fingertip, following the statistics of [9], and H_c denotes the set of contact points over all grasping fingers.

Figure 3. The contact optimization consists of finger perception and object part perception. The finger perception optimization directs the particular fingers used for grasping towards the object, and the object part perception optimization guides the fingers toward the object part specified by the text.

Text-guided object part perception. As shown in Fig. 3, we minimize the distance between the hand contact points and the object part specified by the text input, guiding the fingers to grasp the correct object part. Specifically, using a pretrained text-guided segmentation network, TextSegNet, we first segment the input object point cloud into a targeted part O_c and non-targeted parts O_nc based on the input text prompt of the object grasping part. During optimization, we assign a higher weight to the targeted part, directing the hand contact points toward it to enhance the accuracy of the grasped part. The hand-object contact loss is formulated as follows:

    L_c(H_c, O) = λ_1 L_hc(H_c, O_c) + λ_2 L_hc(H_c, O_nc),   (7)

where λ_1 and λ_2 are hyperparameters, and O_c = {p_i ∈ O | F_seg(p_i, l) = 1} and O_nc = {p_i ∈ O | F_seg(p_i, l) = 0} respectively represent the targeted and non-targeted parts. F_seg is the pretrained text-guided segmentation network TextSegNet. We again use PointNet++ [26] and CLIP [28] for point cloud and text encoding, followed by a multi-layer fully connected network that outputs segmentation labels; the training loss is the negative log-likelihood.
Others. Following [5, 9], we penalize hand-object interpenetration with a loss L_ptr that minimizes the distance of object points inside the hand to their closest hand surface points. Furthermore, following [44], we incorporate a joint angle limitation loss L_angle and a self-collision loss L_self for the hand to ensure the plausibility of the grasping hand pose. Ultimately, our overall optimization objective is formulated as:

    min_{g_θ, g_β, g_dis}  λ_c L_c + λ_ptr L_ptr + λ_angle L_angle + λ_self L_self,   (8)

where λ_c, λ_ptr, λ_angle, and λ_self are hyperparameters. We use this objective to optimize the network-predicted MANO pose g_θ, shape g_β, and the distance g_dis between the object and hand centroids, further enhancing the quality of the generated grasps and the accuracy of the grasped part.

4. Experiments
In this section, we demonstrate the performance of our proposed Text2Grasp. We first introduce the implementation details in Sec. 4.1, followed by the datasets and evaluation metrics in Sec. 4.2 and Sec. 4.3, respectively. In Sec. 4.4, we compare our method with the state-of-the-art methods and showcase the various applications our method supports. Finally, in Sec. 4.5, we conduct ablation studies to verify the effectiveness of the components we design.

4.1. Implementation Details
We conduct all experiments on a single NVIDIA GeForce RTX 4090 GPU with 24 GB of memory. We sample N = 2048 points from the object surface as the input object points. During training, we use the Adam optimizer [18] with a learning rate of 1e-4 to train the denoising network TextGraspDiff for 1000 epochs, with a batch size of 64. Following SceneDiffuser [13], we set the number of diffusion steps T to 100, which is sufficient for a single 3D hand pose.
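Before turning to the refinement stage below, here is a hedged sketch of the contact objective of Eqs. (5)-(7) that this stage optimizes. The fingertip vertex sets, the weight values lam1 and lam2, and all names are assumptions for illustration only.

    import torch

    def contact_loss(finger_tip_verts, finger_vec, obj_c, obj_nc,
                     lam1=1.0, lam2=0.1):
        # Eq. (5): gather vertices of the fingertips selected by the finger vector
        h_c = torch.cat([v for v, on in zip(finger_tip_verts, finger_vec) if on])

        def l_hc(points):
            # Eq. (6): mean nearest-neighbor distance from H_c to the point set
            return torch.cdist(h_c, points).min(dim=1).values.mean()

        # Eq. (7): higher weight lam1 pulls contacts toward the targeted part
        return lam1 * l_hc(obj_c) + lam2 * l_hc(obj_nc)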
During the refinement stage, we utilize Adamax [18] to optimize the grasp vector, applying different learning rates to its components: 1e-2 for the hand pose, 1e-5 for the hand shape, and 1e-4 for the distance between the hand and object centroids, over a total of 200 epochs.

4.2. Datasets
OakInk. OakInk [41] is a large-scale dataset that captures hand-object interactions oriented around 5 intents: use, hold, lift-up, hand-out, and receive. It provides 1800 object models of 32 categories, with their part labels and interacting hand poses. We use the shape-based subset OakShape for our experiments, with 1308 objects for training and 183 objects for evaluation.
AffordPose. AffordPose [14] is a large dataset of hand-object interactions with 8 affordance-driven labels such as twist, lift, and press. It comprises 641 objects from 13 categories in PartNet [22] and PartNet-Mobility [40]. To evaluate the generalization ability of our method, we select 6 object categories identical to those in OakInk [41]: bottle, dispenser, earphone, knife, mug, and scissors, and randomly choose 30 instances from each category for testing.

4.3. Metrics
A superior text-guided grasp should not only securely hold the object but also grasp the correct object part specified by the text prompt. We adopt 4 metrics in total, covering both grasp quality and grasp part accuracy.
Penetration. Following [19, 41, 42], we compute the Penetration Depth (PD) and the Solid Intersection Volume (SIV) between hand and object to measure hand-object penetration. The PD is the maximum distance of all penetrating hand vertices to their closest object surface points, and the SIV is calculated by summing the volume of the object voxels that are inside the hand surface.
Simulation Displacement. Following [9, 20, 41], we place the object and the predicted hand into a simulator [6] and calculate the displacement of the object center over a period of time when applying gravity to the object.
Diversity. Following [16, 20], we measure diversity by clustering the generated grasps into 20 clusters using K-means, and calculate the entropy of the cluster assignments and the average cluster size.
Grasp Part Accuracy. Employing the approach introduced in Sec. 3.1, we assign a text template to each generated grasp and determine its correctness by comparison with the input text description. Grasp Part Accuracy is defined as the ratio of correctly grasped samples to the overall number of generated grasps.

4.4. Comparison with the State of the Art
To evaluate the controllability of grasp synthesis, we utilize two class-level public datasets, OakInk [41] and AffordPose [14], each comprising multiple categories and multiple instances per category. Instance-level datasets like GRAB [32] and HO3D [8] are unsuitable for evaluating our method, because they contain only one instance per category, and demonstrating controllability on a single instance cannot verify generalization ability. For a fair comparison, we compare against the state-of-the-art method trained on OakInk [41]: GrabNet [32], which is used for grasp generation in recent work [14, 41]. We train it on the OakInk dataset using its officially released code and test both it and our method on 183 unseen objects from the OakInk dataset and 180 out-of-domain objects from the AffordPose dataset. Following [16, 20], we generate 20 hand grasps for each test object. For our method specifically, we randomly create 20 text prompts based on the parts of each test object, using these prompts and the objects as inputs to produce grasps.
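As an aside before the results, the Diversity metric of Sec. 4.3 can be sketched as follows. The use of flattened grasp vectors as clustering features and the random seed are assumptions; only the 20-cluster K-means, assignment entropy, and average cluster size are stated in the text.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import entropy

    def diversity(grasp_vectors, n_clusters=20, seed=0):
        # Cluster the generated grasps and report the entropy of the
        # cluster assignments and the average cluster size.
        labels = KMeans(n_clusters, n_init=10,
                        random_state=seed).fit_predict(grasp_vectors)
        counts = np.bincount(labels, minlength=n_clusters)
        return entropy(counts / counts.sum()), counts.mean()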
We first present the quantitative comparison results on the in-domain OakInk [41] dataset and the out-of-domain AffordPose [14] dataset as shown in Tab. 1. It can be seen that our method achieves the lower penetration and simulation displacement on the OakInk dataset indicating the higher grasp quality than GrabNet [32]. Besides, our results are close to and even outperform the ground truth in diversity that demonstrate our method achieves more diverse and natural grasps. Experimental results on AffordPose [14] Dataset demonstrate that our method achieves the Dataset Methods Penetration Simulation Displacement Mean \u00b1 Var\u2193 Diversity Part Accuracy\u2191 Depth\u2193 Volume\u2193 Entropy\u2191 Cluster Size\u2191 OakInk TestGT 0.11 0.65 1.80 \u00b1 2.04 2.91 4.11 100.00 GrabNet [32] 0.48 2.97 2.84 \u00b1 2.81 2.95 2.57 Ourstemplate 0.40 1.89 2.49 \u00b1 2.51 2.92 4.70 87.76 Ourspersonalized 0.41 1.73 2.49 \u00b1 2.57 2.92 4.74 82.32 AffordPose GrabNet [32] 0.54 3.77 3.09 \u00b1 2.74 2.94 2.52 Ourstemplate 0.66 5.05 2.93 \u00b1 2.67 2.90 4.88 78.53 Ourspersonalized 0.59 3.84 3.00 \u00b1 2.86 2.87 4.79 73.83 Table 1. The quantitative results on the OakInk [41] dataset and the AffordPose [14] dataset.TestGT means the grouth-truth grasps on the OakInk Test datasets. Ourstemplate and Ourspersonalized refer to the grasps generated when using template and personalized text description inputs, respectively. \u2191denotes higher values are better, \u2193denotes lower values are better. Ours GrabNet GrabNet Ours Figure 4. The qualitative results on the OakInk [41] dataset and the AffordPose [14] dataset. The results demonstrated above the dotted line are from OakInk [41] dataset, while below are from AffordPose [14] dataset. comparable generalization ability, with the lower simulation displacement, higher diversity, and comparable penetration. More importantly, we achieve the grasp controllability by text prompts of object grasping parts, a capability not present in GrabNet [32] and get grasp part accuracy of 87.76% with template text and 82.32% with personalized text on OakInk dataset [41] as shown in Tab. 1. Furthermore, to evaluate the performance qualitatively, we visualize the generated hand grasps for both in-domain and out-of-domain objects by GrabNet [32] and our method. As shown in Fig. 4, it can be seen that both methods can generate plausible hand grasps for object part with sample shapes such as the body of mug and the cap of earphones. But for specific-part grasp such as the cap of bottle, we can observe clearly from the red boxes that the grasps our method generated has the smaller penetration and more natural contact. These results demonstrate our method\u2019s capability to generate physically plausible and stable grasp. Moreover, we visualize the multi-grasps for each object in Fig. 5. In contrast to the predominantly closed hand poses with five fingers generated by GrabNet [32], ours is better suitable for the specific shape of the object, as demonstrated Fig. 5 with the eyeglasses and knife. To show our grasp controllability, we visualize the results of grasp synthesis guided by text prompts of object grasping parts. which consist of template and personalized Ours GrabNet Input eyeglasses headphones scissors bottle knife Figure 5. The qualitative results of the diverse grasps on the objects. For each object, we visualize five grasps and the red shape represents abnormal grasps. Grasp the cap of the cylinder bottle. Grasp the bridge of the eyeglasses. 
Figure 6. Visualization of the grasps generated from different types of text inputs. The top row displays grasps produced from template text inputs (e.g., 'Grasp the cap of the cylinder bottle.', 'Grasp the handle of the mug.', 'Grasp the headband of the headphones.'), while the bottom row exhibits those generated from personalized text inputs (e.g., 'Clasp the bottle's cylindrical body firmly.', 'Take hold of the wineglass's stem.', 'Wrap your fingers around the knife's handle.').

As shown in Fig. 6, our method not only generates hand poses that grasp objects of different categories in a manner consistent with human habits but also directly produces grasps in a text-controlled manner. This level of controllability enables precise object-part grasping for subsequent tasks. 4.5. Ablation Study VAE vs. Diffusion. To fairly evaluate the effectiveness of the diffusion model we employ, we construct a variant of our method by replacing the diffusion model with a VAE for part-level grasp synthesis. Both this variant and ours remove the optimization process. The results on the OakInk [41] dataset are shown in the first two rows of Tab. 2 and in Fig. 7. From the experimental results we can see that the diffusion model achieves higher grasp quality, with lower penetration and higher grasp part accuracy. More importantly, it can be seen clearly from Fig. 7 that our diffusion model generates more diverse hand poses for the mug and bottle than the VAE. Multi-Modal Attention. We evaluate the effectiveness of the Multi-Modal Attention module, which is designed to fuse the text feature and the object point feature. Specifically, we compare this module with feature addition. As shown in the 3rd and 4th rows of Tab. 2, the multi-modal attention outperforms feature addition across all metrics, especially in grasp part accuracy. Optimization. We evaluate the effectiveness of the optimization based on finger perception and text-guided object part perception; the results are shown in Tab. 2 and Fig. 8. Note that here we use Baseline (Base.) to represent our method without any optimization.

Table 2. The quantitative results of the ablation study on the OakInk [41] dataset. ↑ denotes higher is better; ↓ denotes lower is better.

Method | Pen. Depth↓ | Pen. Volume↓ | Sim. Disp. Mean ± Var↓ | Div. Entropy↑ | Div. Cluster Size↑ | Part Acc.↑
Base. (VAE) | 0.55 | 8.44 | 2.48 ± 2.56 | 2.90 | 3.15 | 77.38
Base. (Diffusion) | 0.38 | 2.82 | 3.00 ± 3.04 | 2.93 | 3.46 | 85.25
Base. w/ Add fusion | 0.38 | 2.89 | 3.09 ± 2.92 | 2.90 | 3.45 | 83.44
Base. w/ Attention fusion | 0.38 | 2.82 | 3.00 ± 3.04 | 2.93 | 3.46 | 85.25
Base. (Diffusion) | 0.38 | 2.82 | 3.00 ± 3.04 | 2.93 | 3.46 | 85.25
Base. + opt. w/ global | 0.39 | 1.82 | 2.39 ± 2.34 | 2.95 | 4.18 | 83.69
Base. + opt. w/ finger perception | 0.38 | 1.79 | 2.50 ± 2.56 | 2.95 | 4.41 | 83.74
Base. + opt. w/ finger perception and object part perception | 0.40 | 1.89 | 2.49 ± 2.51 | 2.92 | 4.70 | 87.76
Base. + opt. w/ RefineNet [32] | 0.30 | 1.38 | 3.11 ± 2.81 | 2.87 | 2.86 | 83.55

Figure 7. Comparison of our method based on a VAE (Base. (VAE)) and on the diffusion model (Base. (Diffusion)).

Figure 8. Comparisons of different optimization strategies (panels: Baseline, optimization without/with finger perception, without/with part perception, and with RefineNet [32]).
Figure 9. Visualization of the grasps generated from the text input 'grasp the handle of the faucet', given the unseen category faucet.

It can be seen that grasp quality improves significantly when the global optimization, which pulls all fingers toward the object, is added to the Baseline. Specifically, the penetration volume and the simulation displacement decrease by 35.46% and 20.33%, respectively, while diversity improves by 20.81%. However, this optimization directs all fingers towards the object's nearest point, limiting diversity and leading to inaccuracies in the contact part, as shown in Fig. 8. In contrast, our optimization, grounded in finger perception, fine-tunes only the fingers involved in grasping, while the rest maintain a natural state. This approach enables us to maintain grasp quality while achieving greater diversity. Furthermore, as shown in Fig. 8, the optimization based on text-guided object part perception directs the hand towards the part described by the text, enabling us to achieve higher grasp part accuracy. In addition, we compare our optimization with the RefineNet used by GrabNet [32], which is trained to denoise on a dataset built by adding random noise to ground-truth hand-object interactions. As Fig. 8 illustrates, grasps optimized by RefineNet still do not fully contact the blade. In contrast, our method, optimized for each specific situation, performs better in detail, and the quantitative results in Tab. 2 demonstrate that our method achieves a better balance between penetration and simulation displacement. 5. Discussion The part-control ability of our method transfers easily among objects in the seen categories. However, due to the limited categories in the training dataset, when faced with objects of new categories never seen during training, a reasonable grasp can still be generated, but the contact parts cannot be identified because of the lack of understanding of the new categories. As shown in Fig. 9, our method can produce a reasonable grasp for the faucet but cannot accurately grasp the faucet's handle. Therefore, it would be necessary to train on a grasp dataset with more categories, but such a dataset is currently not available. In addition, with the help of Large Language Models [3], we can achieve task-level static grasp synthesis, such as grasping the handle of the knife rather than the blade when cutting fruit. However, correctly grasping the part of an object is only the first step towards completing a task; the ability to dynamically manipulate objects is also key. We will explore this further in future work. 6. Conclusion In this work, we introduce Text2Grasp, a grasp synthesis method guided by text prompts of object grasping parts. It begins with a text-guided diffusion model, termed TextGraspDiff, which is responsible for generating an initial, coarse grasp pose. This is subsequently refined through a hand-object contact optimization process. This method ensures that the generated grasps are not only physically plausible and diverse but also precisely aimed at the specific object parts described by text prompts. Furthermore, our method also supports grasp synthesis guided by personalized text and by task-level text descriptions from an LLM, without extra manual annotations.
Extensive experiments conducted on two public datasets demonstrate that our method achieves not only comparable performance in grasp quality but also precise part-level grasp control." + }, + { + "url": "http://arxiv.org/abs/2404.12361v1", + "title": "Learning the Domain Specific Inverse NUFFT for Accelerated Spiral MRI using Diffusion Models", + "abstract": "Deep learning methods for accelerated MRI achieve state-of-the-art results\nbut largely ignore additional speedups possible with noncartesian sampling\ntrajectories. To address this gap, we created a generative diffusion\nmodel-based reconstruction algorithm for multi-coil highly undersampled spiral\nMRI. This model uses conditioning during training as well as frequency-based\nguidance to ensure consistency between images and measurements. Evaluated on\nretrospective data, we show high quality (structural similarity > 0.87) in\nreconstructed images with ultrafast scan times (0.02 seconds for a 2D image).\nWe use this algorithm to identify a set of optimal variable-density spiral\ntrajectories and show large improvements in image quality compared to\nconventional reconstruction using the non-uniform fast Fourier transform. By\ncombining efficient spiral sampling trajectories, multicoil imaging, and deep\nlearning reconstruction, these methods could enable the extremely high\nacceleration factors needed for real-time 3D imaging.", + "authors": "Trevor J. Chan, Chamith S. Rajapakse", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "physics.med-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Despite the numerous advantages of MRI, inherent physical and hardware constraints cap acquisition speed and lead to long scan times. This creates myriad downstream obstacles, including low patient compliance, inefficient resource allocation, and image motion artifacts, among others. For this reason, methods to accelerate MRI acquisition have been and continue to be an active area of research. Acceleration can be achieved through a combination of using more efficient scanning sequences and reducing the number of measurements made in the frequency space of the image, k-space. Considering the former, successful techniques for faster scanning include radial and spiral imaging methods, which exploit the unequal distribution of information across k-space by sampling lower frequencies more densely [1, 2]. Considering the latter, acquiring fewer measurements in k-space rewards a decrease in scan time proportional to the ratio of undersampling, but comes at the cost of image quality. Sampling below the Nyquist limit introduces ambiguities during reconstruction which manifest as artifacts in the final image. Reconstruction algorithms must therefore leverage additional information, including multicoil data in the case of parallel imaging, and sparse priors, in the case of compressed sensing, in order to resolve these ambiguities [3, 4]. A third, more recent approach to undersampled MRI reconstruction lies in deep learning methods, which essentially learn a set of image priors and use these to regularize solutions to the ill-posed reconstruction problem [5]. Within this category, diffusion models stand out for producing state-of-the-art results on image reconstruction tasks for faster scanning, motion correction, noise reduction, and others [6, 7, 8, 9].
Despite this, the vast majority of deep learning approaches, and to our knowledge, all diffusion-based approaches to image reconstruction, focus on Cartesian-sampled MRI, missing out on potential acceleration gains attained by using more efficient non-Cartesian sampling trajectories.", + "main_content": "Contributions: \u2022 Creation of a novel multi-conditioning strategy for solving the inverse non-uniform fast Fourier transform (nufft) using a learned conditional score function and weak frequency guidance during sampling \u2022 Efficient hyperparameter search of the joint trajectory-reconstruction space and identification of optimal sampling trajectories \u2022 Retrospective acquisition and reconstruction of a 2D, 256x256 pixel, 22x22 cm² image with a readout duration of 0.02 seconds. 2. BACKGROUND Canonical score-based models consider the mapping between a known distribution of independently and identically distributed samples of Gaussian noise and an observed, but unknown, distribution of data p(x). These distributions bookend a Langevin diffusion process described by the following stochastic differential equation, representing the trajectory of a sample from our data distribution into a sample from our noise distribution: dx = f(x, t)dt + g(t)dw. (1) Here, the functions f(·, t) and g(·) are the drift and diffusion coefficients of x(t), respectively, and w is a standard Wiener process, or Brownian motion. In order to generate a novel sample from our data distribution, we can generate a random noise vector and attempt to solve the reverse-time SDE, but this is generally intractable. However, we can approximate this process by estimating the noise-conditioned score function, ∇_x log p_t(x), which computes the likelihood of a sample x existing between the noise and image distributions. With this, the reverse-time SDE becomes: dx = [f(x, t) − g(t)² ∇_x log p_t(x)] dt + g(t) dw̄. (2) The score function can be trained using a score matching with Langevin dynamics algorithm [10, 11].

Fig. 1. Example trajectories (A) and the corresponding readout gradients in kx and ky (B). All trajectories shown cover the frequency space of a 256x256 image and have a readout duration of 10.0 ms.

3. METHODS This research study was conducted retrospectively using human subject data made available in open access by [12]. Ethical approval was not required. 3.1. Data We use the NYU FastMRI dataset [12], consisting of 6970 fully sampled 2D brain scans acquired on hardware ranging from 4 to 24 coils. For training and testing, we consider axial T2-weighted turbo spin echo sequences characterized by the following sequence parameters: scan time = 140 s, TR = 6 s, TE = 113 ms, slices = 30, slice thickness = 5 mm, field of view = 22 cm, matrix size = 320x320. The effective scan time for a 2D slice at 256² resolution is 140 s/320 × 256/30 ≈ 3.7 s.

Fig. 2. Given measurements y_0, reconstruction follows a modified diffusion sampling process. At each timestep, a noisy latent x_t is concatenated with a prior p_0 and passed to the denoising model to obtain x̃_{t−1}. To enforce consistency with y_0, we compute a frequency gradient ∇y_{t−1} and solve for the image gradient using a modified iterative inverse nufft (section 3.3). A weighted sum of x_{t−1} and ∇x_{t−1} yields the corrected image x_{t−1}. This is repeated until t = 0.
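For concreteness, the reverse-time SDE in Eq. (2) can be integrated with Euler-Maruyama. The sketch below is an illustrative assumption rather than the authors' code: it assumes a variance-exploding SDE (f = 0, g(t)² = dσ²(t)/dt) with a linear schedule σ(t) = σ_max·t and a trained score network `score_fn`.

```python
# Minimal Euler-Maruyama sketch of the reverse-time SDE (Eq. (2)) under a
# VE-SDE assumption; the linear sigma schedule and all names are illustrative.
import torch

def reverse_sde_sample(score_fn, shape, sigma_max=1.0, n_steps=500, device="cpu"):
    dt = -1.0 / n_steps                                # reverse time runs 1 -> 0
    x = sigma_max * torch.randn(shape, device=device)  # x_T ~ N(0, sigma(T)^2 I)
    for i in range(n_steps):
        t = torch.full((shape[0],), 1.0 + i * dt, device=device)
        g2 = (2 * sigma_max**2 * t).view(-1, 1, 1, 1)  # g(t)^2 = d sigma^2 / dt
        drift = -g2 * score_fn(x, t)                   # f = 0 for a VE-SDE
        x = x + drift * dt + (g2 * abs(dt)).sqrt() * torch.randn_like(x)
    return x
```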
As this data was originally acquired using Cartesian sequences, we simulated spiral acquisition by retrospectively interpolating in k-space to attain complex-valued measurements along generated spiral trajectories. 3.2. Generating spiral trajectories Following Kim et al. [13], we consider spiral trajectories of the form k(τ) = ∫₀^τ (1/ρ(φ)) dφ · e^{jωτ} ≈ λ τ^α e^{jωτ}. (3) Here, ρ denotes sampling density, τ is a function of time, φ is angular position, ω = 2πn is frequency, with n the number of turns in k-space, λ a scaling factor equal to matrix size/(2 · FOV), and α a bias term for oversampling the center of k-space relative to the edges. Solving this parametric equation under the constraints of capped gradient slew rate and capped gradient amplitude yields gradients (g_x(t) and g_y(t)) as well as a spiral trajectory in the (k_x, k_y)-plane (figure 1). In doing so, we can tune sampling parameters to control for factors such as readout duration and dwell time, while varying the number of interleaves and the ratio of low-to-high frequency oversampling. 3.3. Image reconstruction is inverse problem solving Undersampled MRI acquisition amounts to measuring an unknown signal x through some imperfect sampling function A: y = Ax + ϵ. Here, y is the measured multicoil k-space data, and A is the non-uniform Fourier transform. ϵ is measurement noise and exists in the same domain as y; in MRI, noise is Gaussian-distributed across the real and imaginary components of y for each coil. Reconstruction is an ill-posed inverse problem of recovering an image signal x from a set of incomplete k-space measurements y. As x and y exist in different domains, x is hidden behind a sampling operator A. Solving this problem necessitates prior knowledge. In our case, we learn an underlying conditional distribution of images and seek to reconstruct samples from this distribution consistent with the measurements.

Fig. 3. Representative reconstruction results for a single 2D 16-coil image. Retrospective k-space data was sampled with an optimized 23-interleave sequence with a total readout duration of 0.02 s. Rows 1 and 2 show the RSS-reconstructed images and log-scaled k-space magnitudes for the ground truth, inverse nufft, and proposed model reconstructions. Below are the individual coil magnitude and phase images for the fully sampled image, the inverse nufft reconstructions, and the model predictions.

Information is supplied in two forms: first, we learn a conditional score function ∇_x log p_t(x_t | Ã⁻¹y_0), where y_0 is the measurement in frequency space and Ã⁻¹ is an approximate inverse of A, in our case the inverse nufft solved iteratively using conjugate gradients. We find that adding this supervision during training helps to constrain the model when faced with a large number of input image channels and the periodic ambiguity inherent in operating on complex numbers. Second, we use frequency-space gradients to weakly guide the sampling process. At each time step during sampling, we compute the forward nufft of an uncorrected x̃_{t−1} and take the difference between that and the measurements y_0. To minimize this difference, the approximate gradient in image space is calculated by solving a modified approximate inverse nufft Ã_t⁻¹, which corrects for low-frequency biases and applies timestep-dependent noising determined by the noise schedule σ(t), which is necessary for sampling with Langevin diffusion.
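Per step, this frequency-guided correction reduces to computing a k-space residual, mapping it back through the modified approximate inverse nufft (defined in Eq. (4) just below), and blending with an annealed weight. A minimal sketch, assuming user-supplied callables `nufft` and `inufft_t`, and a `progress` variable that runs from 0 at the start of sampling to 1 at the end so that it matches the stated anneal γ(t) = β(1 − t); `beta` and all names are illustrative, not the authors' code.

```python
# Sketch of one guided correction step; `nufft` is the forward operator A and
# `inufft_t` the modified approximate inverse of Eq. (4), both assumed given.
import torch

def guided_correction(x_tilde, y0, nufft, inufft_t, progress, beta=1.0):
    """x_tilde: uncorrected denoised image; y0: measured k-space data."""
    residual = y0 - nufft(x_tilde)           # mismatch in frequency space
    grad_x = inufft_t(residual, progress)    # approximate image-space gradient
    gamma = beta * (1.0 - progress)          # annealed guidance weight
    return x_tilde + gamma * grad_x          # weighted sum, akin to gradient descent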
Ã_t⁻¹ x_t = (Ã⁻¹ x_t) · c₁ e^{−c₂ r²} + N(0, σ(t)²). (4) Following [11], we choose a linear noise schedule and observe that the underlying ordinary differential equation describing transit from latent to image is locally linear, so summation of x̃_{t−1} and ∇x_{t−1} to obtain a frequency-corrected image x_{t−1} is akin to gradient descent (figure 2).

Fig. 4. We performed a grid hyperparameter search over a 2D trajectory space. We fixed the readout duration at 0.02 seconds and varied the number of interleaves from 1 to 125 and alpha from 1 to 4. Based on the structural similarity of the model-reconstructed images, we found multiple trajectories that yield improved image quality. In comparison, the naive Archimedean spiral, corresponding to 1 interleave and α = 1, performs very poorly.

In practice, due to the non-invertibility of the nufft, imperfections in the approximate inverse nufft bleed into the final image reconstruction, introducing artifacts and reducing quality. To avoid this, we anneal the guidance signal following an empirically chosen linear schedule γ(t) = β(1 − t), ensuring strong guidance at the outset of sampling and minimal artifacts at the end of sampling. A consequence of this choice is that we do not strongly enforce that Ax_0 = y_0 at time 0. 4. RESULTS Model reconstruction performance was evaluated on a held-out test dataset. Test trajectories have a fixed readout duration of 0.02 seconds, in which time the measurements needed to reconstruct a 256x256 pixel, 22x22 cm² 2D image are acquired. Reconstructed image quality was scored using structural similarity (SSIM) (figure 3). To investigate the effect that choosing different scanning trajectories has on the quality of reconstructed images, we also perform a grid hyperparameter search of spiral trajectories with a fixed readout duration of 0.02 seconds (figure 4), varying α and the number of interleaves. Surprisingly, the common 'naive' trajectory, a single-interleave Archimedean spiral, corresponding to α = 1, performs very poorly when sampled below the Nyquist limit. Trajectories which perform better tended to lie along two logarithmic curves roughly characterized by α = 1.33 log(0.39 interleaves) and α = 0.87 log(0.54 interleaves).

Fig. 5. (A) Effect of sampling trajectory optimization, model reconstruction without frequency guidance, and model reconstruction with frequency guidance. For the non-optimized trajectory, we used a single-interleave Archimedean spiral with a readout duration of 0.02 s. The optimized trajectory uses a 23-interleave, α = 1.23 sequence with an identical readout duration. (B) Snapshots of the image latent x_t and the gradient signal ∇x_t taken during a diffusion sampling process.

Finally, we conduct an ablation study to disentangle the effects of an optimal sampling trajectory without model reconstruction, model reconstruction without frequency guidance, and model reconstruction with frequency guidance (figure 5). We find that all three contribute to noticeable increases in image quality, both visually and quantitatively based on SSIM. The combination of choosing an optimal trajectory, performing model reconstruction with conditioning, and using annealed frequency guidance results in large improvements in image quality, up to and exceeding a 0.15 boost in SSIM. 5. DISCUSSION While initial results are promising, the main limitation of this project is the reliance on retrospective, Cartesian-sampled data.
Implementing the sequences outlined in this work will likely require customizing spiral sequences so as to match the contrast and signal of the original Cartesian sequences, which will constrain the space of realizable trajectories. Until a dataset of raw non-Cartesian MRI data becomes available, this will continue to be an obstacle. For a similar reason, it is difficult to make head-to-head comparisons between the original sequence and the proposed sequences without prospective validation. For this reason, the immediate task is to acquire prospective data with custom sequences and use it to validate image reconstruction. Currently, a concrete direct comparison is between the proposed sequences and their Nyquist-sampled counterparts, which run roughly 3x longer. Apart from the short-term task of matching sequence parameters between spiral and Cartesian sequences, the choice of spiral sequence leaves a considerable amount of flexibility even within the space of optimized interleave and density pairs identified above. Variation in number of interleaves, and by extension the duration of a single interleave, allows for tailoring of sequence contrast, signal, and speed to task-specific requirements. An area in which these sequences could provide additional benefit, even outside of sheer acceleration, would be in imaging tissues with a very short T2*, as acceleration within an interleave allows for proportionally more data acquisition to occur before signal has decayed. 6. CONCLUSION Here we introduce a new method and show preliminary results for reconstructing spiral MRI using a diffusion model. Combining multicoil imaging, spiral scanning, and undersampling enables dramatically faster imaging speeds. Applications of this work are widespread; in addition to the numerous typical benefits associated with faster scanning, including better patient compliance and fewer motion artifacts, these methods have the potential to reach the extremely high acceleration factors necessary to achieve high resolution real-time 3D imaging. 7. ACKNOWLEDGMENTS No funding was received for conducting this study. The authors have no relevant financial or non-financial interests to disclose. 8." + }, + { + "url": "http://arxiv.org/abs/2404.09732v1", + "title": "Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models", + "abstract": "Though diffusion models have been successfully applied to various image\nrestoration (IR) tasks, their performance is sensitive to the choice of\ntraining datasets. Typically, diffusion models trained in specific datasets\nfail to recover images that have out-of-distribution degradations. To address\nthis problem, this work leverages a capable vision-language model and a\nsynthetic degradation pipeline to learn image restoration in the wild (wild\nIR). More specifically, all low-quality images are simulated with a synthetic\ndegradation pipeline that contains multiple common degradations such as blur,\nresize, noise, and JPEG compression. Then we introduce robust training for a\ndegradation-aware CLIP model to extract enriched image content features to\nassist high-quality image restoration. Our base diffusion model is the image\nrestoration SDE (IR-SDE). Built upon it, we further present a posterior\nsampling strategy for fast noise-free image generation. We evaluate our model\non both synthetic and real-world degradation datasets. 
Moreover, experiments on\nthe unified image restoration task illustrate that the proposed posterior\nsampling improves image generation quality for various degradations.", + "authors": "Ziwei Luo, Fredrik K. Gustafsson, Zheng Zhao, Jens Sjölund, Thomas B. Schön", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion models have proven effective for high-quality (HQ) image generation in various image restoration (IR) tasks such as image denoising [7, 14, 30], deblurring [6, 43, 55], deraining [37, 59], dehazing [30, 31], inpainting [28, 44, 47], super-resolution [18, 48, 60], shadow removal [11, 31], etc. Compared to traditional deep learning-based approaches that directly learn IR models using an ℓ1 or ℓ2 loss [5, 23, 61, 62] or an adversarial loss [16, 52, 53], diffusion models are known for their ability to generate photo-realistic images with a stable training process. However, they are mostly trained on fixed datasets and therefore typically fail to recover high-quality outputs when applied to real-world scenarios with unknown, complex, out-of-distribution degradations [53]. Although this problem can be alleviated by leveraging large-scale pretrained Stable Diffusion [39, 44] weights [25, 51, 57] and synthetic low-quality (LQ) image generation pipelines [46, 53], it is still challenging to accurately restore real-world images in the wild (i.e., wild IR). On the one hand, Stable Diffusion uses an adversarially trained variational autoencoder (VAE) to compress the diffusion to latent space, which is efficient but loses image details in the reconstruction process. Moreover, in practice, the restoration in latent space is unstable and tends to generate color-shifted images [25]. On the other hand, most existing works use a fixed degradation pipeline (with different probabilities for each degradation) to generate low-quality images [53], which might be insufficient to represent complex real-world degradations. In this work, we aim to perform photo-realistic image restoration with enriched vision-language features that are extracted from a degradation-aware CLIP model (DACLIP [32]). For scenes encountered in the wild, we assume the image contains only mild, common degradations such as light noise and blur, which can be difficult to represent by text descriptions. We thus add a fidelity loss to reduce the distance between the LQ and HQ image embeddings. Then the enhanced LQ embedding is incorporated into the image restoration networks (such as the U-Net [45] in IR-SDE [30]) via cross-attention. Inspired by Real-ESRGAN [53], we also propose a new degradation pipeline with a random shuffle strategy to improve generalization. An optimal posterior sampling strategy is further proposed for IR-SDE to improve its performance. Fig. 1 shows the comparison of our method with other state-of-the-art wild IR approaches. In summary, our main contributions are as follows: \u2022 We present a new synthetic image generation pipeline that employs a random shuffle strategy to simulate complex real-world LQ images. \u2022 For degradations in the wild, we modify DACLIP to reduce the embedding distance of LQ-HQ pairs, which enhances LQ features with high-quality information.
\u2022 We propose a posterior sampling strategy for IR-SDE [30] and show that it is the optimal reverse-time path, yielding better image restoration performance. \u2022 Extensive experiments on wild IR and other specific IR tasks demonstrate the effectiveness of each component of our method.", + "main_content": "Blind Image Restoration Image restoration (IR) aims to reconstruct a high-quality (HQ) image from its corrupted counterpart, i.e. from a low-quality (LQ) image with task-specific degradations [9, 20, 22, 61-64, 69]. Most learning-based approaches directly train neural networks with an ℓ1/ℓ2 loss on HQ-LQ image pairs, which is effective but often overfits on specific degradations [53, 57, 65]. Thus the blind IR approach has been proposed and has gained growing attention in addressing complex real-world degradations. BSRGAN [65] is the pioneering work that designs a practical degradation model for blind super-resolution, and Real-ESRGAN [53] improves it by exploiting a 'high-order' degradation pipeline. Most subsequent blind IR methods [4, 25] follow their degradation settings but with some improvements in architectures and loss functions. Recently, some works [17, 32, 40] further propose to jointly learn different IR tasks using a single model to improve task generalization, so-called unified image restoration. Photo-Realistic Image Restoration Starting from ESRGAN [16], photo-realistic IR has become prevalent due to the increasing requirement for high-quality image generation. Early research explored a variety of methods that combine GANs [10, 34] and other perceptual losses [8, 13, 67] to train networks to predict images following the natural image distribution [16, 52, 53]. However, GAN-based approaches often suffer from unstable performance and can be challenging to train on small datasets. Recent works therefore introduce diffusion models in image restoration for realistic image generation [14, 18, 30, 31, 37, 48]. Moreover, leveraging pretrained Stable Diffusion (SD) models [39, 44] as priors is growing in popularity for real-world and blind IR tasks [25, 51, 56, 57]. In particular, StableSR [51] and DiffBIR [25] adapt the SD model to image restoration using an approach similar to ControlNet [66]. CoSeR [50], SeeSR [56], and SUPIR [57] further introduce textual semantic guidance in diffusion models for more accurate restoration performance. 3. Method Our work is a set of extensions and improvements on the degradation-aware CLIP [32] which, in turn, builds on a mean-reverting SDE [30]. Thus, before going into our contributions in the following sections, we first summarize the main constructions of the mean-reverting SDE and the degradation-aware CLIP. 3.1. Preliminaries Mean-Reverting SDE Given a random variable x_0 sampled from an unknown distribution, x_0 ∼ p_0(x), the mean-reverting SDE [30] is defined according to dx = θ_t (μ − x) dt + σ_t dw, (1) where θ_t and σ_t are predefined time-dependent coefficients and w is a standard Wiener process.

Figure 2. Overview of the proposed pipeline for synthetic image degradation. There are three degradation phases adopting the random shuffle strategy, each drawing from blur, resize, noise, and JPEG operations (blur with Gaussian, defocus, box/motion, and sinc filters; resize with nearest, bicubic, and bilinear interpolation; Gaussian and Poisson noise; JPEG quality 0.6-0.95), with Wiener deconvolution additionally applied. We use different types of filters in blur generation and add the Wiener deconvolution for simulating ringing artifacts, similar to the Sinc filter in Real-ESRGAN [53]. As a general ×1 image restoration pipeline, we use one resize operation to provide image resolution augmentation, and another resize operation to ensure that all the degraded images are resized back to their original size.
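A rough, runnable reading of the Fig. 2 pipeline is sketched below. The three-phase structure, the operation set, and the JPEG quality range follow the figure; concrete kernel sizes, noise levels, and applying every operation in each phase are simplifying assumptions (the real pipeline samples operations and parameters randomly per phase, and includes further filter types and Wiener deconvolution).

```python
# Illustrative sketch of the random-shuffle degradation pipeline (Fig. 2);
# stand-in implementations, not the authors' code.
import random
import numpy as np
import cv2

def degrade(hq: np.ndarray) -> np.ndarray:
    h, w = hq.shape[:2]
    def blur(img):
        k = random.choice([3, 5, 7])               # stand-in for Gaussian/defocus/box/motion
        return cv2.GaussianBlur(img, (k, k), 0)
    def resize(img):
        s = random.uniform(0.5, 1.0)
        small = cv2.resize(img, (int(w * s), int(h * s)), interpolation=random.choice(
            [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC]))
        return cv2.resize(small, (w, h))           # resize back to the original size
    def noise(img):
        return np.clip(img + np.random.normal(0, random.uniform(1, 10), img.shape), 0, 255)
    def jpeg(img):
        q = random.randint(60, 95)                 # quality 0.6-0.95 as in Fig. 2
        ok, buf = cv2.imencode(".jpg", np.clip(img, 0, 255).astype(np.uint8),
                               [cv2.IMWRITE_JPEG_QUALITY, q])
        return cv2.imdecode(buf, cv2.IMREAD_UNCHANGED).astype(np.float64)
    lq = hq.astype(np.float64)
    for _ in range(3):                             # three degradation phases
        ops = [blur, resize, noise, jpeg]
        random.shuffle(ops)                        # random shuffle within each phase
        for op in ops:
            lq = op(lq)
    return np.clip(lq, 0, 255).astype(np.uint8)
```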
By restricting the coefficients to satisfy σ_t²/θ_t = 2λ² for all timesteps t, we can solve the marginal distribution p_t(x) as follows: p_t(x) = N(x_t | m_t, v_t), with m_t = μ + (x_0 − μ) e^{−θ̄_t} and v_t = λ² (1 − e^{−2θ̄_t}), (2) where θ̄_t = ∫₀^t θ_z dz. To simulate the image degradation process, we set the HQ image as the initial state x_0 and the LQ image as the mean μ. Then the forward SDE iteratively transforms the HQ image into the LQ image with additional noise, where the noise level is fixed to λ. Moreover, Anderson [2] states that the forward process (Eq. (1)) has a reverse-time representation as dx = [θ_t (μ − x) − σ_t² ∇_x log p_t(x)] dt + σ_t dŵ, (3) where ∇_x log p_t(x) is the score function, which can be computed via Eq. (2) during training since we have access to the ground-truth LQ-HQ pairs in the training dataset. Following IR-SDE [30], we train the score prediction network with a maximum likelihood loss which specifies the optimal reverse path x*_{t−1} for all times: x*_{t−1} = [(1 − e^{−2θ̄_{t−1}}) / (1 − e^{−2θ̄_t})] e^{−θ'_t} (x_t − μ) + [(1 − e^{−2θ'_t}) / (1 − e^{−2θ̄_t})] e^{−θ̄_{t−1}} (x_0 − μ) + μ, (4) where θ'_i = ∫_{i−1}^{i} θ_t dt. The proof can be found in [30]. Once trained, we can simulate the backward SDE (Eq. (3)) to restore the HQ image, similar to what is done in other diffusion-based models [49]. Degradation-Aware CLIP The core component of the degradation-aware CLIP (DACLIP [32]) is a controller that explicitly classifies degradation types and, more importantly, adapts the fixed CLIP image encoder [42] to output high-quality content embeddings from corrupted inputs for accurate multi-task image restoration. DACLIP uses a contrastive loss to optimize the controller. Moreover, the training dataset is constructed with image-caption-degradation pairs where all captions are obtained using BLIP [19] on the clean HQ images of a multi-task dataset. The trained DACLIP model is then applied to downstream networks to facilitate image restoration. Specifically, the cross-attention [44] mechanism is introduced to incorporate image content embeddings to learn semantic guidance from the pre-trained DACLIP. For the unified image restoration task, the predicted degradation embedding is useful and can be combined with visual prompt learning [71] modules to further improve the performance.
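One step of the forward marginal in Eq. (2) is cheap to implement, since x_t can be drawn directly given x_0 and μ. A minimal sketch, assuming a scalar θ̄_t supplied by the caller; the noise level shown is an illustrative assumption.

```python
# Draw x_t ~ N(m_t, v_t I) per Eq. (2) of the mean-reverting SDE.
import math
import torch

def sample_x_t(x0, mu, theta_bar_t, lam=50.0 / 255.0):
    """x0: HQ image; mu: LQ image; theta_bar_t: \bar{theta}_t (scalar)."""
    m_t = mu + (x0 - mu) * math.exp(-theta_bar_t)          # mean drifts toward LQ
    v_t = lam**2 * (1.0 - math.exp(-2.0 * theta_bar_t))    # variance saturates at lambda^2
    return m_t + math.sqrt(v_t) * torch.randn_like(x0)
```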
3.2. Synthetic Image Degradation Pipeline To restore clean images from unknown and complex degradations, we use a synthetic image degradation pipeline for LQ image generation, as shown in Fig. 2. Common degradation models like blur, resize, noise, and JPEG compression are repeatedly involved to simulate complex scenarios. Following the high-order degradation in Real-ESRGAN [53], all degradation models in our pipeline have individual parameters that are randomly picked in each training step, which substantially improves generalization to out-of-distribution datasets [29, 53, 65]. In particular, in the blur model, we add some specific filter types (e.g., defocus, box, and motion filters) rather than only Gaussian filters for more general degradations, and Wiener deconvolution is included to simulate natural ringing artifacts (which usually occur in the preprocessing steps of some electronic cameras [15, 58]). Wiener deconvolution generates more distinct ringing artifacts on textures than the Sinc filter [53], as can be seen in the two examples of applying Wiener deconvolution to blurry images in Fig. 3. For ×1 image restoration (no resolution changes), we use two resize operations (with different interpolation modes) to provide random resolution augmentation while ensuring that all degraded images are then resized back to their original size. Note that our model focuses on image restoration in the wild (wild IR), and we therefore set all degradations to be light and diverse. Moreover, we randomly shuffle the degradation orders to further improve generalization (a code sketch of this shuffle is given after Fig. 2's caption above).

Figure 3. Examples of applying Wiener deconvolution to generate ringing artifacts (panels: HQ input, blur degradation, Wiener deconvolution, Sinc filter). Compared to the Sinc filter used in Real-ESRGAN [53], the proposed Wiener deconvolution generates more distinct ringing artifacts on textures.

3.3. Robust Degradation-Aware CLIP As introduced in Sec. 3.1, DACLIP leverages a large-scale pretrained vision-language model, namely CLIP, for multi-task image restoration. While it works well on some (relatively) large and distinct degradation types such as rain, snow, shadow, inpainting, etc., it fares worse on the wild IR task, since most degradations are mild, hard to describe in text, and multiple degradations occur in the same image. To address this problem, we update DACLIP to learn more robust embeddings in the following two aspects: 1) In dataset construction, instead of only using one degradation for each image, we use different combinations of degradation types, such as 'an image with blur, noise, ringing artifacts', as the degradation text. 2) We add an ℓ1 loss to minimize the embedding distance between LQ and HQ images, where the HQ image embedding is extracted from the frozen CLIP image encoder. An overview of the robust DACLIP is illustrated in Fig. 4. The multi-degradation texts enable DACLIP to handle images that contain multiple complex degradations in the wild. Moreover, the additional ℓ1 loss forces DACLIP to learn accurate clean embeddings from our synthetic corrupted inputs.

Figure 4. The proposed robust degradation-aware CLIP (DACLIP) model. e_c^T and e_d^T are caption and degradation text embeddings, respectively. The embeddings (e_c^LQ, e_d^LQ) are extracted from LQ images, and e_c^HQ represents the HQ image embedding extracted from the original CLIP image encoder.
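The training signals shown in Fig. 4 can be summarized in a short sketch: the controller's content and degradation embeddings are matched to caption and degradation text embeddings with a CLIP-style contrastive loss, plus the added ℓ1 term toward the frozen HQ embedding. All module names are assumed interfaces, and the symmetric InfoNCE helper mirrors standard CLIP training rather than the authors' exact code.

```python
# Sketch of the robust DACLIP objective; `controller`, `clip_image_enc`, and
# `clip_text_enc` are assumed modules, not the released implementation.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(len(a), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def daclip_loss(lq, hq, cap_tokens, deg_tokens, controller, clip_image_enc, clip_text_enc):
    e_c_lq, e_d_lq = controller(lq)             # LQ content + degradation embeddings
    with torch.no_grad():
        e_c_hq = clip_image_enc(hq)             # frozen HQ content embedding
        e_t_cap = clip_text_enc(cap_tokens)     # caption text embedding
        e_t_deg = clip_text_enc(deg_tokens)     # (multi-)degradation text embedding
    loss = info_nce(e_c_lq, e_t_cap) + info_nce(e_d_lq, e_t_deg)
    return loss + F.l1_loss(e_c_lq, e_c_hq)     # added l1 fidelity term
```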
As Luo et al. [32] illustrate, the quality of the image content embedding significantly affects the restoration results, encouraging us to extend the DACLIP base encoders to larger models for better performance. Specifically, we first generate clean captions using HQ images and then train the ViT-L-14 (rather than ViT-B-32 in DACLIP) on the synthetic image-caption-degradation pairs, where the LQ images are generated following the pipeline in Fig. 2. The dimensions of both image and text embeddings thereby increase from 512 to 768, which provides more detail for downstream IR models. We use IR-SDE [30] for realistic image generation and insert the image content embedding into the U-Net via cross-attention [44], analogously to what was done in [32]. Since the degradation level is difficult to describe using text (e.g., the blur level, noise level, and quality compression rate), we abandon the use of degradation embeddings for wild image restoration in both training and testing, similar to the single-task setting in DACLIP [32]. In addition, to enable large-size inputs, we simply modify the network with an additional downsampling layer and an upsampling layer before and after the U-Net for model efficiency. 3.4. Optimal Posterior Sampling for IR-SDE It is worth noting that the forward SDE in Eq. (1) requires many timesteps to converge to a relatively stable state, i.e. a noisy LQ image with noise level λ. The sampling process (HQ image generation) uses the same timesteps as the forward SDE and is also sensitive to the noise scheduler [36]. To improve the sampling efficiency, Zhang et al. [68] propose a posterior sampling approach that specifies the optimal mean and variance in the reverse process. However, their method sets the SDE mean μ to 0 and only uses it to generate actions as a typical diffusion policy in reinforcement learning applications. In this work, we extend their posterior sampling strategy into a more general form for IR-SDE. Let us use the same notation as in Sec. 3.1. Formally, given the initial state x_0 and any other diffusion state x_t at time t ∈ [1, T], we can prove that the posterior of the mean-reverting SDE is tractable when conditioned on x_0. More specifically, this posterior distribution is given by p(x_{t−1} | x_t, x_0) = N(x_{t−1} | μ̃_t(x_t, x_0), β̃_t I), (5) which is a Gaussian with mean and variance given by μ̃_t(x_t, x_0) = [(1 − e^{−2θ̄_{t−1}}) / (1 − e^{−2θ̄_t})] e^{−θ'_t} (x_t − μ) + [(1 − e^{−2θ'_t}) / (1 − e^{−2θ̄_t})] e^{−θ̄_{t−1}} (x_0 − μ) + μ, (6) and β̃_t = (1 − e^{−2θ̄_{t−1}})(1 − e^{−2θ'_t}) / (1 − e^{−2θ̄_t}). (7) Note that the posterior mean μ̃_t(x_t, x_0) has exactly the same form as the optimal reverse path x*_{t−1} in Eq. (4), meaning that sampling from this posterior distribution is also optimal for recovering the initial state, i.e. the HQ image.
In addition, combining the reparameterization trick (x_t = m_t + √v_t ε_t) with the noise prediction network ε̃_φ(x_t, μ, t) gives us a simple way to estimate x_0 at time t: x̂_0 = e^{θ̄_t} (x_t − μ − √v_t ε̃_φ(x_t, μ, t)) + μ, (8) where m_t and v_t are the forward mean and variance in Eq. (2), and φ denotes the learnable parameters. Then we iteratively sample reverse states based on this posterior distribution, starting from noisy LQ images, for efficient restoration (a minimal code sketch of one such step is given below). 4. Experiments We provide evaluations on different image restoration tasks to illustrate the effectiveness of the proposed method. Implementation Details For all experiments, we use the AdamW [27] optimizer with β_1 = 0.9 and β_2 = 0.99. The initial learning rate is set to 2 × 10⁻⁴ and decayed to 1 × 10⁻⁶ by a cosine scheduler over 500 000 iterations. The noise level is fixed to 50 and the number of diffusion denoising steps is set to 100 for all tasks. We set the batch size to 16 and the training patches to 256 × 256 pixels. All models are implemented in PyTorch [38] and trained on a single A100 GPU for about 3-4 days. 4.1. Evaluation of IR in the Wild Datasets and Metrics We train our model on the LSDIR dataset [21], which contains 84 991 high-quality images with rich textures and their downsampled versions. In training, we only utilize the collected HQ images and synthetically generate all HQ-LQ image pairs following the proposed degradation pipeline in Fig. 2. In testing, we evaluate our model on two external datasets: DIV2K [1] and RealSR ×2 [3]. Specifically, the DIV2K dataset contains 100 2K-resolution image pairs with all LQ images generated using our degradation pipeline, while the RealSR ×2 dataset contains 30 high-resolution real-world captured image pairs. In both datasets, we upscale all LQ images to the same size as the corresponding HQ images for ×1 image restoration. For wild IR, we pay more attention to the visual quality of restored images and thus prefer to compare perceptual metrics such as LPIPS [67], DISTS [8], FID [12], and NIQE [35]. Note that NIQE is a no-reference metric that only evaluates the quality of the output. In addition, we also report distortion metrics like PSNR and SSIM, since we also want the prediction to be consistent with the input. Comparison Approaches We compare our method, DACLIP-IR, with other state-of-the-art photo-realistic wild image restoration approaches: Real-ESRGAN [53], StableSR [51], SeeSR [56], and SUPIR [57]. All these comparison methods use the same degradation pipeline as Real-ESRGAN. Moreover, StableSR, SeeSR, and SUPIR employ pretrained Stable Diffusion models [39, 44] as diffusion priors for better generalization to out-of-distribution images. SeeSR and SUPIR further leverage powerful vision-language models (RAM [70] and LLaVA [26], respectively) to provide additional textual prompt guidance for image restoration in the wild. Results The quantitative results on the DIV2K and RealSR ×2 datasets are summarized in Table 1 and Table 2, respectively. It is observed that DACLIP-IR achieves the best performance among all approaches on the two datasets. The results are expected for the DIV2K dataset, since we use the same degradation pipeline in both training and testing.
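As referenced in Sec. 3.4 above, one posterior sampling step combines Eq. (8) (estimating x̂_0 from the predicted noise) with the posterior mean and variance of Eqs. (6)-(7). A minimal sketch, where `theta_bar` (the schedule θ̄) and `eps_net` (the trained noise predictor ε̃_φ) are assumed interfaces and scalar broadcasting details are simplified.

```python
# One posterior sampling step of Eqs. (5)-(8); assumed interfaces, not the
# released code.
import math
import torch

def posterior_step(x_t, mu, t, eps_net, theta_bar, lam=50.0 / 255.0):
    tb_t, tb_prev = theta_bar(t), theta_bar(t - 1)
    th_p = tb_t - tb_prev                              # theta'_t over [t-1, t]
    v_t = lam**2 * (1 - math.exp(-2 * tb_t))           # forward variance, Eq. (2)
    # Eq. (8): estimate x0 from the predicted noise.
    x0_hat = math.exp(tb_t) * (x_t - mu - math.sqrt(v_t) * eps_net(x_t, mu, t)) + mu
    denom = 1 - math.exp(-2 * tb_t)
    # Eq. (6): posterior mean conditioned on (x_t, x0_hat).
    mean = ((1 - math.exp(-2 * tb_prev)) / denom) * math.exp(-th_p) * (x_t - mu) \
         + ((1 - math.exp(-2 * th_p)) / denom) * math.exp(-tb_prev) * (x0_hat - mu) + mu
    # Eq. (7): posterior variance.
    var = (1 - math.exp(-2 * tb_prev)) * (1 - math.exp(-2 * th_p)) / denom
    return mean + math.sqrt(var) * torch.randn_like(x_t)
```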
For the RealSR ×2 images, the degradations are unseen for all approaches, and our DACLIP-IR still outperforms the other methods on most metrics. Moreover, one can observe that changing the degradation pipeline directly decreases the performance on both datasets. It is also worth noting that our SDE model is trained from scratch, while all the other diffusion-based approaches (StableSR, SeeSR, and SUPIR) leverage pretrained Stable Diffusion models as priors, demonstrating the effectiveness of the proposed method and our new degradation pipeline.

Table 1. Quantitative comparison between the proposed method and other real-world image restoration approaches on our synthetic DIV2K [1] test set. '†' means our model trained with the Real-ESRGAN [53] degradation pipeline.

Method | PSNR↑ | SSIM↑ | LPIPS↓ | DISTS↓ | FID↓ | NIQE↓
Real-ESRGAN [53] | 27.71 | 0.810 | 0.200 | 0.107 | 27.32 | 4.41
StableSR [51] | 26.04 | 0.759 | 0.241 | 0.123 | 34.74 | 4.11
SeeSR [56] | 26.29 | 0.721 | 0.223 | 0.114 | 27.94 | 3.56
SUPIR [57] | 26.81 | 0.741 | 0.194 | 0.099 | 21.73 | 3.52
DACLIP-IR† | 27.56 | 0.796 | 0.195 | 0.113 | 24.32 | 3.43
DACLIP-IR (Ours) | 29.93 | 0.837 | 0.153 | 0.085 | 15.94 | 3.24

Table 2. Quantitative comparison between the proposed method and other real-world IR approaches on the RealSR ×2 [3] test set. All inputs are pre-upsampled with scale factor 2. '†' means our model trained with the Real-ESRGAN [53] degradation pipeline.

Method | PSNR↑ | SSIM↑ | LPIPS↓ | DISTS↓ | FID↓ | NIQE↓
Real-ESRGAN [53] | 28.03 | 0.855 | 0.151 | 0.117 | 47.65 | 4.84
StableSR [51] | 27.55 | 0.838 | 0.169 | 0.112 | 54.87 | 5.45
SeeSR [56] | 28.38 | 0.815 | 0.212 | 0.139 | 40.85 | 4.20
SUPIR [57] | 29.32 | 0.826 | 0.175 | 0.122 | 31.75 | 4.61
DACLIP-IR† | 28.92 | 0.858 | 0.184 | 0.138 | 33.76 | 4.19
DACLIP-IR (Ours) | 30.65 | 0.878 | 0.148 | 0.113 | 30.09 | 4.31

A visual comparison of the proposed method with other state-of-the-art photo-realistic IR approaches on the two datasets is illustrated in Fig. 5 and Fig. 6.

Figure 5. Visual comparison of the proposed model with other state-of-the-art photo-realistic image restoration approaches on our synthetic DIV2K [1] dataset (columns: LQ image, StableSR, SeeSR, SUPIR, ours). Our method trains the diffusion model from scratch, while the other approaches leverage pretrained Stable Diffusion models. Note that all methods using Stable Diffusion are prone to generate unrecognizable text, such as on the white shirt in the second row.

Figure 6. Visual comparison of the proposed model with other state-of-the-art photo-realistic image restoration approaches on the RealSR ×2 [3] dataset (columns: LQ image, StableSR, SeeSR, SUPIR, ours). Our method trains the diffusion model from scratch, while the other approaches leverage pretrained Stable Diffusion models.

One can see that all these methods can restore visually high-quality images. Moreover, results produced by SeeSR and SUPIR seem to have more details than StableSR, indicating the importance of textual guidance in diffusion-based image restoration. But in terms of distortion metrics that measure the consistency w.r.t. the input, we found that pretrained Stable Diffusion models might introduce unclear priors and thus tend to generate unrecognizable text-stroke adhesion, for example on the back of the white shirt in the second row of Fig. 5. In some cases, SUPIR further produces fake textures and block artifacts, as shown in the third row of Fig. 5 (the yellow window frames) and the third row of Fig. 6 (the weird block around '5').
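For reference, the distortion numbers (PSNR, SSIM) reported in Tables 1 and 2 above can be computed with scikit-image as sketched below; the perceptual metrics (LPIPS, DISTS, FID, NIQE) require dedicated packages and are omitted. The data range and channel layout are illustrative assumptions.

```python
# Minimal distortion-metric sketch for uint8 RGB images (assumed layout).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def distortion_metrics(restored: np.ndarray, reference: np.ndarray):
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)
    return psnr, ssim
```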
Although our method trains the diffusion model from scratch, its results still look realistic and are consistent with the inputs. Results on the NTIRE Challenge We also evaluate our model on the NTIRE 2024 'Restore Any Image Model (RAIM) in the Wild' challenge [24], as shown in Table 3. To generalize to the challenge dataset, we first train our model on LSDIR [21] with synthetic image pairs, and then finetune it on a mixed dataset that contains both synthetic and real-world images from LSDIR [21] and RealSR [3]. Note that we use the same model for both phase two and phase three of the challenge, but employ the original reverse-time SDE sampling in phase three for better visual results (small noise makes the photo look more realistic).

Table 3. Final results of the NTIRE 2024 RAIM challenge.

Team | Phase 2 | Phase 3 | Final Score | Rank
MiAlgo | 79.13 | 57 | 91.65 | 1
Xhs-IAG | 81.96 | 47 | 82.07 | 2
So Elegant | 79.69 | 46 | 80.09 | 3
IIP IR | 80.03 | 14 | 45.94 | 4
DACLIP-IR | 78.65 | 9 | 40.03 | 5
TongJi-IPOE | 72.99 | 11 | 39.91 | 6
ImagePhoneix | 78.93 | 4 | 34.79 | 7
HIT-IIL | 69.80 | 1 | 27.92 | 8

Figure 7. Inpainting results on a web-downloaded face image (panels: web face image, DACLIP, DACLIP-robust, DACLIP-robust*).

4.2. Effectiveness of the Posterior Sampling This section adopts the same settings as DACLIP [32] and focuses on unified image restoration (UIR), which trains and evaluates a single model on multiple IR tasks. Robust DACLIP Model Notice that the original DACLIP is sensitive to input degradations, since it is trained on specific datasets without data augmentation. To address this issue, we follow the synthetic training idea from wild image restoration and propose a robust DACLIP model. Similar to the original DACLIP, this robust model is trained on 10 datasets for unified image restoration. However, we now also add mild degradations such as noise, resize, and JPEG compression (the first part of the degradation pipeline in Fig. 2) to the LQ images for data augmentation. The resulting model can then better handle real-world inputs that contain minor corruptions. Fig. 7 shows a face inpainting comparison for a web-downloaded image example. As one can see, the original DACLIP model completely fails to inpaint this image. On the other hand, the robust DACLIP model restores the face well, illustrating its robustness. Evaluation and Analysis To analyze the effectiveness of the proposed posterior sampling, we choose 3 (out of 10) tasks for evaluation: raindrop removal on the RainDrop [41] dataset, low-light enhancement on the LOL [54] dataset, and color image denoising on the CBSD68 [33] dataset. The comparison methods include recent all-in-one image restoration approaches: AirNet [17], PromptIR [40], IR-SDE [30], and the original DACLIP [32]. Finally, the posterior sampling is applied to our robust DACLIP model. The comparison results are reported in Table 4.

Table 4. Comparison of different methods on the unified image restoration task. 'robust' means we add mild synthetic degradations (e.g., resize, noise, and JPEG) to LQ images in training as a data augmentation strategy for out-of-distribution generalization. '*' means the method uses the proposed optimal posterior sampling approach for image generation. We report results on the RainDrop [41], LOL [54], and CBSD68 [33] datasets for raindrop removal, low-light enhancement, and denoising evaluation, respectively.
Method | RainDrop [41]: PSNR↑ / SSIM↑ / LPIPS↓ / FID↓ | LOL [54]: PSNR↑ / SSIM↑ / LPIPS↓ / FID↓ | CBSD68 [33]: PSNR↑ / SSIM↑ / LPIPS↓ / FID↓
AirNet [17] | 30.68 / 0.926 / 0.095 / 52.71 | 14.24 / 0.781 / 0.321 / 154.2 | 27.51 / 0.769 / 0.264 / 93.89
PromptIR [40] | 31.35 / 0.931 / 0.078 / 44.48 | 23.14 / 0.829 / 0.140 / 67.15 | 27.56 / 0.774 / 0.230 / 84.51
IR-SDE [30] | 28.49 / 0.822 / 0.113 / 50.22 | 16.07 / 0.719 / 0.185 / 66.42 | 24.82 / 0.640 / 0.232 / 79.38
DACLIP [32] | 30.81 / 0.882 / 0.068 / 38.91 | 22.09 / 0.796 / 0.114 / 52.23 | 24.36 / 0.579 / 0.272 / 64.71
DACLIP-robust | 30.82 / 0.869 / 0.078 / 27.96 | 22.05 / 0.782 / 0.136 / 51.01 | 23.90 / 0.543 / 0.310 / 74.83
DACLIP-robust* | 31.68 / 0.921 / 0.051 / 21.92 | 22.78 / 0.848 / 0.092 / 41.50 | 25.86 / 0.723 / 0.167 / 62.12

Figure 8. Visual comparison of the proposed posterior sampling for the DACLIP model on the unified IR task (columns: LQ image, PromptIR, DACLIP, GT, ours).

PromptIR performs better on distortion metrics (PSNR and SSIM), while diffusion-based approaches have better perceptual performance (LPIPS and FID). Although the robust DACLIP model involves more degradations in training, it still performs similarly to its original version. By using the proposed posterior sampling at inference, the performance of the robust DACLIP model is significantly improved across all metrics and tasks. Especially for the denoising task, posterior sampling leads to the best LPIPS and FID performance, proving its effectiveness. 5. Conclusion This paper addresses the problem of photo-realistic image restoration in the wild. Specifically, we present a new degradation pipeline to generate low-quality images for synthetic data training. This pipeline includes diverse degradations (e.g., different blur kernels) and a random shuffle strategy to increase generalization. Moreover, we improve the degradation-aware CLIP by adding multiple degradations to the same image and minimizing the embedding distance between LQ-HQ image pairs to enhance the LQ image embedding. Subsequently, we present a posterior sampling approach for IR-SDE, which significantly improves the performance of unified image restoration. Finally, we evaluate our model on various tasks and on the NTIRE RAIM challenge, and the results demonstrate that the proposed method is effective for image restoration in the wild. Acknowledgements This research was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, by the project Deep Probabilistic Regression - New Models and Learning Algorithms (contract number: 2021-04301) funded by the Swedish Research Council, and by the Kjell & Märta Beijer Foundation. The computations were enabled by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre." + }, + { + "url": "http://arxiv.org/abs/2404.15447v1", + "title": "GLoD: Composing Global Contexts and Local Details in Image Generation", + "abstract": "Diffusion models have demonstrated their capability to synthesize\nhigh-quality and diverse images from textual prompts. However, simultaneous\ncontrol over both global contexts (e.g., object layouts and interactions) and\nlocal details (e.g., colors and emotions) still remains a significant\nchallenge. The models often fail to understand complex descriptions involving\nmultiple objects and reflect specified visual attributes to wrong targets or\nignore them.
This paper presents Global-Local Diffusion (\textit{GLoD}), a\nnovel framework which allows simultaneous control over the global contexts and\nthe local details in text-to-image generation without requiring training or\nfine-tuning. It assigns multiple global and local prompts to corresponding\nlayers and composes their noises to guide a denoising process using pre-trained\ndiffusion models. Our framework enables complex global-local compositions,\nconditioning objects in the global prompt with the local prompts while\npreserving other unspecified identities. Our quantitative and qualitative\nevaluations demonstrate that GLoD effectively generates complex images that\nadhere to both user-provided object interactions and object details.", + "authors": "Moyuru Yamada", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Text-to-image generative models have emerged recently and demonstrated their amazing capabilities in synthesizing high-quality and diverse images from text prompts. Diffusion models [Dhariwal and Nichol, 2021; Ho et al., 2020; Nichol and Dhariwal, 2021] are currently among the state-of-the-art methods and are widely used for image generation. Despite their impressive advances in image generation, lack of control over the generated images is a crucial limitation in deploying them to real-world applications. To provide further controllability over diffusion models, researchers have put a lot of effort into controlling object layouts, object interactions, and the composition of objects. Training-free layout control [Chen et al., 2023] takes a text prompt along with an object layout as input and controls the object positions based on a loss between the input layout and attention maps. MultiDiffusion [Bar-Tal et al., 2023] places an object with specified details in a certain region using segmentation masks and a prompt for each segment. These methods work without requiring any additional training; however, they struggle to control both the global contexts (e.g., object interactions) and the local details (e.g., object colors and emotions) simultaneously. With a complex prompt containing multiple objects, the models often misinterpret specified local details, directing them to the wrong target or ignoring them, similar to the issues observed in Stable Diffusion [Rombach et al., 2022]. While splitting the complex prompt into multiple prompts allows the model to depict each object more accurately, handling the prompts independently poses limitations in addressing a global context that describes interactions and relationships between the multiple objects. Another line of work trains a model from scratch or fine-tunes a given diffusion model for better controllable generation with task-specific annotations [Li et al., 2023b; Avrahami et al., 2023b; Xue et al., 2023; Zhang et al., 2023]. Some researchers have leveraged a scene graph as an input and introduced models which generate images from the scene graph to control the object interactions in the generated images [Farshad et al., 2023] and [Yang et al., 2022]. These methods have shown superior results, but they often require a high computation cost and a long development period. This makes it difficult to leverage various pre-trained diffusion models.
This paper proposes Global-Local Diffusion (GLoD), a novel diffusion framework that controls both global contexts and local details simultaneously using a pre-trained diffusion model without requiring any additional training or fine-tuning. GLoD takes as input global prompts that describe the entire image including object interactions, and local prompts that specify object details along with their positions in the form of bounding boxes. The diffusion model predicts noises from the prompts separately, and the noises are then composed to guide the denoising process. For example, instead of giving a complex prompt such as \u2019a man with white beard is talking with a smiling woman wearing a necklace\u2019, we decompose it into multiple prompts: a global context \u2019a man is talking with a woman\u2019 and two local details \u2019a man with white beard\u2019 and \u2019a woman is wearing a necklace and smiling\u2019. The details of the man and the woman in the global prompt are guided with each local prompt in the image generation process. GLoD also controls the object layout with the given bounding boxes. The generated examples are as shown in Fig. 1. [Figure 1: Global-Local Diffusion (GLoD) takes multiple prompts as an input (e.g., a global prompt: \u2019a man is talking with a woman\u2019 and two local prompts: \u2019a man with white beard\u2019 and \u2019a woman is wearing a necklace and smiling\u2019) along with their layout and assigns noises obtained from them into corresponding layers with a pre-trained diffusion model. Then, the noises are effectively composed to generate an image. Details of objects in the global prompt are guided with the corresponding local prompts.] Our framework enables both global-global compositions and global-local compositions. The global-global composition composes foreground and background similar to the existing method [Liu et al., 2022], while the global-local composition composes the global context and the object details. Since a local prompt can be a global prompt for other local prompts, GLoD allows us to compose more than two layers. In addition, unlike the existing methods that may change object identities even by just adding a single attribute, GLoD only changes the object details specified by the corresponding local prompts while preserving other identities. This feature enables users to control the generated image interactively. In order to assess the effectiveness of our proposed method quantitatively, we build a new test set to evaluate the controllability over the global contexts and local details in the image generation rather than the image quality. Our key contributions are summarized as follows:^1 \u2022 We propose Global-Local Diffusion (GLoD), a simple and yet effective framework for diffusion-based image synthesis and editing which enables controlling both global contexts and local details simultaneously. \u2022 Through quantitative and qualitative evaluations, we demonstrate that our proposed method can effectively generate complex images by composing multiple prompts describing object interactions and details.
^1 The code will be made publicly available upon acceptance.", + "main_content": "2.1 Diffusion Models Diffusion models [Dhariwal and Nichol, 2021; Ho et al., 2020; Nichol and Dhariwal, 2021] have attracted a lot of attention as a promising class of generative models that formulates the data generation process as an iterative denoising procedure. The models take a Gaussian noise input $x_T \sim \mathcal{N}(0, I)$ and transform it into a sample $x_0$ through a series of $T$ gradual denoising steps. The sample should be distributed according to a data distribution $q$. Many research works focus on improving the diffusion process to speed up sampling while maintaining high sample quality [Nichol and Dhariwal, 2021; Karras et al., 2022]. The latent diffusion model [Rombach et al., 2022] has also been developed to address this issue, applying the diffusion process in latent space instead of pixel space to enable efficient sampling. While diffusion models originally showed great performance in image generation, enabling effective image editing and image inpainting [Meng et al., 2022; Avrahami et al., 2023a], these models have been successfully used in various domains, including video [Ho et al., 2022], audio [Chen et al., 2021], 3D scenes [M\u00fcller et al., 2023], and motion sequences [Tevet et al., 2023]. Although this paper focuses on image generation, our proposed framework may further be applied in such domains. [Figure 2: GLoD enables controlling global contexts (interaction between a dog and a man, their layouts) and local details (the dog is black, the man is wearing a blue shirt) independently. Local details can be specified (black dog \u2192 Husky dog) while preserving the global contexts. Note that this is not image editing: we generate images from the text prompts and the layout. Panels: (a) Stable Diffusion, (b) Layout Control, (c) MultiDiffusion, (d) GLoD (Ours).] 2.2 Controllable Image Generation with Diffusion Diffusion models were first applied to text-to-image generative models, which generate an image conditioned on a free-form text description as an input prompt. Classifier-free guidance [Ho and Salimans, 2021] plays an important role in conditioning the generated images on the input prompt. Recent text-to-image diffusion models such as DALL-E 2 [Ramesh et al., 2022], Imagen [Saharia et al., 2022], and Stable Diffusion [Rombach et al., 2022] have shown remarkable capabilities in image generation. On the other hand, recent studies [Chen et al., 2023; Bar-Tal et al., 2023; Zheng et al., 2023; Patashnik et al., 2023] have stressed the inherent difficulty in controlling generated images with a text description, especially in the control over (i) object layout and (ii) visual attributes of objects. To gain more control over the object layout, some works have leveraged bounding boxes or segmentation masks as an additional input along with text prompts. Training-free layout control [Chen et al., 2023] takes a single prompt along with a layout of the objects appearing in the prompt, as shown in Fig. 2 (b). The object layout is given in the form of a bounding box.
Layout control extracts attention maps from a pre-trained diffusion model and updates the latent embeddings of the image based on an error between the input bounding boxes and the attention maps. Since this method simply uses the pre-trained diffusion model to generate images, it inherits the difficulties in control over the visual attributes of objects, i.e., the local details. Also, the generated image may change substantially even if a single word is added or replaced in the prompt, because the input prompt describes both the global contexts and the local details. MultiDiffusion [Bar-Tal et al., 2023] and SceneComposer [Zeng et al., 2023] take multiple prompts along with their corresponding segmentation masks as regions, as shown in Fig. 2 (c). They effectively control the object layout and the visual attributes of each object. However, they cannot handle a prompt describing interactions between those objects, i.e., the global contexts. Basically, they just place a specific object described by the prompt in a certain region. Thus, if the input prompt is replaced (black dog \u2192 Husky dog), the new object (Husky dog) does not inherit the contexts (e.g., posture) from the replaced object (black dog). Unlike these methods, our method aims to control both the global contexts and the local details simultaneously. Since we treat the global contexts and the local details separately, the global contexts are preserved even if the local details are changed, as shown in Fig. 2 (d). Some studies [Farshad et al., 2023; Yang et al., 2022] focused on image synthesis from scene graphs for better control over the complex relations between multiple objects in the generated images. However, these works require costly extensive training on curated datasets. They regard a complex scene graph as an input prompt, while our approach decomposes complex prompts into multiple simple prompts and does not require any training or fine-tuning. 2.3 Layered Image Generation and Editing Some recent works [Zhang et al., 2023; Li et al., 2023a; Liao et al., 2023] have proposed layered image generation and editing. They consider two layers, foreground and background, and enable controlling them individually with a segmentation mask of the foreground object. [Figure 3: GLoD composes multiple layers. Unconditional noise and noises conditioned on global contexts (e.g., interactions) or local details (e.g., color) are assigned to separate layers ($l_0$, $l_1$, $l_2$). Those layers are then composed with global guidance $g_g$ and local guidance $g_l$.] Their models need to be trained with the proposed losses. Unlike them, our goal is to control the global contexts and the local details simultaneously without requiring training or accurate segmentation masks, as shown in Fig. 1. Our framework enables control of local details while keeping the global contexts by composing the multiple layers, where the global layer may represent the foreground or background and the local layer may represent the details of the objects in the global layer. The global layer and the local layer are not independent but have a whole-part relationship. Our framework can also handle more than two layers as shown in Fig. 7.
2.4 Compositional Generation Compositional generation is an approach to generate complex images by composing a set of diffusion models, with each of them modeling a certain component of the image. This approach has been an essential direction for text-to-image models because it is difficult for current models to handle complex prompts into which multiple concepts are squeezed. Recently, [Liu et al., 2022] demonstrated successful composition of independent concepts (e.g., \u201ca bear\u201d and \u201cin a forest\u201d) by adding the estimated score for each concept. [Feng et al., 2023] also proposed another approach which can be directly merged into the cross-attention layers. Inspired by the first approach, we propose a novel method to compose whole-part concepts (e.g., \u201ca bear is eating an apple\u201d and \u201cthe apple is green\u201d). 3 Method Our goal is to generate images in which given global contexts and local details are reflected. In this section, we introduce Global-Local Diffusion (GLoD) to compose the global context and the local detail with pre-trained diffusion models. 3.1 Compositions of Diffusion Models We consider a pre-trained diffusion model which takes a text prompt $y \in Y$ as a condition and generates an intermediate image $x_t \in I = \mathbb{R}^{H \times W \times C}$: $x_{t-1} = \Phi(x_t | y)$. (1) Diffusion models are also regarded as Denoising Diffusion Probabilistic Models (DDPMs), where generation is modeled as a denoising process. The objective of this model is to remove noise gradually by predicting the noise at a timestep $t$ given a noisy image $x_t$. To generate a less noisy image, we sample $x_{t-1}$ over multiple iterations until it becomes realistic: $x_{t-1} = x_t - \epsilon_\theta(x_t, t) + \mathcal{N}(0, \sigma_t^2 I)$, (2) where $\epsilon_\theta(x_t, t)$ is the denoising network. [Liu et al., 2022] revealed that the denoising network or score function can be expressed as a composition of multiple score functions, each corresponding to an individual condition $c_i$: $\hat{\epsilon}(x_t, t) = \epsilon_\theta(x_t, t) + \sum_{i=0}^{n} w_i (\epsilon_\theta(x_t, t | c_i) - \epsilon_\theta(x_t, t))$, (3) where $\epsilon_\theta(x_t, t | c_i)$ predicts a noise conditioned on $c_i$ and $\epsilon_\theta(x_t, t)$ outputs an unconditional noise. This equation only covers composing individual conditions over the entire image, e.g., a foreground condition like \u2019a boat at the sea\u2019 and a background condition like \u2019a pink sky\u2019. We regard $\epsilon_\theta(x_t, t | c_i) - \epsilon_\theta(x_t, t)$ as a guidance $g_i$, which guides the unconditional noise toward the noise conditioned on a given condition $c_i$. Then, the composed denoising network is viewed as a composition of the guidance terms: $\hat{\epsilon}(x_t, t) = \epsilon_\theta(x_t, t) + \sum_{i=0}^{n} w_i g_i$. (4) 3.2 Layer Composition We propose GLoD to extend the above concept to a composition of global conditions and local conditions, i.e., interactions between objects and object details. We consider a set of global conditions $c_g = (c_{g1}, ..., c_{gk})$ and a set of local conditions $c_l = (c_{l1}, ..., c_{lm})$. We also introduce diffusion layers $l = (l_0, ..., l_t)$, where each layer contains one or more noises derived from the corresponding prompt, as shown in Fig. 3. For example, given a global prompt ($c_{g1}$) and two local prompts ($c_{l1}$ and $c_{l2}$), an unconditional noise can be assigned to a layer $l_0$, a layer $l_1$ contains a noise derived from the global prompt, and a layer $l_2$ contains noises obtained from the local prompts.
Algorithm 1 GLoD sampling.
Require: diffusion model $\epsilon_\theta(x_t, t)$, global scales $w_i$, local scales $w_j$, global conditions $c_{gi}$, local conditions $c_{lj}$, object region masks $M_j$
1: Initialize sample $x_t \sim \mathcal{N}(0, I)$
2: for $t = T, \ldots, 1$ do
3:   $x_t \leftarrow f(x_t, c_{gi}, M)$ \u25b7 apply layout control $f$
4:   $\epsilon_i \leftarrow \epsilon_\theta(x_t, t | c_{gi})$ \u25b7 scores for global condition $c_{gi}$
5:   $\epsilon_j \leftarrow \epsilon_\theta(x_t, t | c_{lj})$ \u25b7 scores for local condition $c_{lj}$
6:   $\epsilon \leftarrow \epsilon_\theta(x_t, t)$ \u25b7 unconditional score
7:   $\epsilon_b \leftarrow \epsilon_i, \epsilon_j$ \u25b7 assign $\epsilon_i$ and $\epsilon_j$ to the base noise $\epsilon_b$
8:   $g_g \leftarrow \sum_{i=0}^{k} w_i (\epsilon_i - \epsilon)$ \u25b7 global guidance (Eq. 5)
9:   $g_l \leftarrow \sum_{j=0}^{m} w_j M_j (\epsilon_j - \epsilon_b)$ \u25b7 local guidance (Eq. 6)
10:  $x_{t-1} \sim \mathcal{N}(x_t - (\epsilon + g_g + g_l), \sigma_t^2 I)$ \u25b7 sampling
11: end for
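To make the composition concrete, below is a minimal, self-contained Python sketch of the sampling loop in Algorithm 1. It is only an illustration under stated assumptions: the denoiser eps_theta is a random stub standing in for a pre-trained text-conditioned diffusion model, the layout-control step (line 3) is omitted, and the masks, scales, and 50-step schedule are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

H = W = C = 8  # tiny illustrative resolution; a real model would use e.g. 64x64x4 latents

def eps_theta(x, t, cond=None):
    """Stub for a pre-trained denoiser epsilon_theta(x_t, t | c).
    A real implementation would embed `cond` with a text encoder and run a U-Net."""
    rng = np.random.default_rng(abs(hash((t, cond))) % 2**32)
    return 0.1 * rng.standard_normal(x.shape)

def glod_step(x, t, sigma_t, globals_, locals_, w_g=1.0, w_l=1.0):
    """One GLoD denoising step composing global and masked local guidance (Eqs. 4-6).
    globals_: list of global prompts; locals_: list of (prompt, base_prompt, mask)."""
    eps_uncond = eps_theta(x, t)  # unconditional score
    # global guidance: g_g = sum_i w_i * (eps(x,t|c_gi) - eps(x,t))
    g_g = sum(w_g * (eps_theta(x, t, c) - eps_uncond) for c in globals_)
    # local guidance: g_l = sum_j w_j * M_j * (eps(x,t|c_lj) - eps(x,t|c_b)),
    # where the base noise comes from the prompt one layer below (here: the global prompt)
    g_l = sum(w_l * m * (eps_theta(x, t, c) - eps_theta(x, t, c_base))
              for c, c_base, m in locals_)
    mean = x - (eps_uncond + g_g + g_l)
    return mean + sigma_t * np.random.standard_normal(x.shape)

# usage: one global prompt and two masked local prompts (boxes assumed, not from the paper)
x = np.random.standard_normal((H, W, C))
mask_left = np.zeros((H, W, 1)); mask_left[:, : W // 2] = 1.0
mask_right = 1.0 - mask_left
g = "a man is talking with a woman"
locals_ = [("a man with white beard", g, mask_left),
           ("a woman wearing a necklace", g, mask_right)]
for t in range(50, 0, -1):
    x = glod_step(x, t, sigma_t=0.01, globals_=[g], locals_=locals_)
```

Note how the local guidance is computed against the base-layer noise (the global prompt) rather than the unconditional noise; that difference is exactly what separates Eq. (6) from the classifier-free guidance of Eq. (5) defined next.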
We compose the assigned noises with two kinds of guidance: (i) global guidance, which guides the image with global conditions by the following equation: $g_g = \epsilon_\theta(x_t, t | c_g) - \epsilon_\theta(x_t, t)$, (5) where the unconditional noise is always the base noise $\epsilon_b$. This is also well known as classifier-free guidance [Ho and Salimans, 2021]. With two global conditions, their global guidance terms are summed as a global-global composition. The classifier-free guidance works well on the global-global compositions, while it does not work effectively when we compose a global condition and a local condition, since the conditions have some overlap. Thus, we newly propose (ii) local guidance, which guides an object on the base layer $b$, conditioned on a condition $c_b$, with a local condition $c_j$: $g_l = M_j (\epsilon_\theta(x_t, t | c_j) - \epsilon_\theta(x_t, t | c_b))$, (6) where $M_j \in \{0, 1\}^{H \times W}$ is the region mask of the $j$-th region corresponding to the condition $c_j$. In Fig. 3, two local guidance terms are added to the global guidance as a global-local composition. The intuition behind the local guidance is that an image region guided with a word \u2019dog\u2019 in a global prompt can be regarded as an unconditional \u2019dog\u2019 and guided with a local prompt by emphasizing the difference between the global and local conditions. In the end, the decomposed global prompts and local prompts are effectively composed by our proposed guidance. 3.3 Layout Control Without any layout control, objects described in a given prompt appear somewhere in a generated image. To effectively compose the global noise and the local noise, we use Training-free layout control [Chen et al., 2023]. More specifically, we use the backward guidance to control the layout of the objects in their layer before computing the global noise. Algorithm 1 provides the pseudo-code for composing diffusion noises with GLoD. Our method composes noises obtained with pre-trained diffusion models during inference without any additional training or fine-tuning. 4 Results 4.1 Evaluation Metrics We build a new test set to evaluate the controllability over the global contexts and local details in the image generation rather than the image quality. The test set contains 2500 samples where each sample contains a full text (e.g., \u2019a beard man is talking to a woman with earrings.\u2019), a global text (i.e., \u2019a man is talking to a woman.\u2019), local texts for a subject and an object (\u2019a beard man.\u2019 and \u2019a woman with earrings.\u2019), and a layout of the subject and the object. This test set design allows us to compute alignment scores using CLIP similarity [Radford et al., 2021].
We compute a global alignment score $S_g$ from the entire image and the global text, and similarly local alignment scores for the subject, $S_{ls}$, and the object, $S_{lo}$, from their regions in the image and the corresponding local texts. We also introduce an infection score $S_i$ that indicates how strongly the subject and the object are affected by prompts not intended for them. This can be an important metric since a prompt for an object A may also unintentionally affect another object B. We compute the similarity between the region of A and the prompt for B, and similarly between the region of B and the prompt for A, and average them. See more details in Appendix A.
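The following is a minimal sketch of how such CLIP-based scores can be computed per sample. It is an illustration only: the clip_score helper is a stub (a real implementation would encode the image crop and the text with a pre-trained CLIP model and return their cosine similarity), the box convention is assumed, and reading $S_{gl}$ as the plain mean of the three alignment scores is one plausible interpretation of Table 1's caption, not the paper's released evaluation code.

```python
import numpy as np

def clip_score(image_region: np.ndarray, text: str) -> float:
    """Stub for CLIP similarity [Radford et al., 2021] between an image region
    (H, W, 3) and a text prompt; a real version would use a pre-trained CLIP."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return 20.0 + 5.0 * rng.random()

def crop(image: np.ndarray, box) -> np.ndarray:
    x0, y0, x1, y1 = box  # assumed (left, top, right, bottom) pixel box
    return image[y0:y1, x0:x1]

def evaluate_sample(image, global_text, subj_text, obj_text, subj_box, obj_box):
    s_g = clip_score(image, global_text)                 # global alignment S_g
    s_ls = clip_score(crop(image, subj_box), subj_text)  # subject alignment S_ls
    s_lo = clip_score(crop(image, obj_box), obj_text)    # object alignment S_lo
    # infection S_i: each region scored against the *other* object's prompt,
    # then averaged; lower is better
    s_i = 0.5 * (clip_score(crop(image, subj_box), obj_text)
                 + clip_score(crop(image, obj_box), subj_text))
    # S_gl read here as the mean of the global and local alignment scores
    return {"Sg": s_g, "Sls": s_ls, "Slo": s_lo,
            "Sgl": (s_g + s_ls + s_lo) / 3.0, "Si": s_i}

img = np.zeros((512, 512, 3))
print(evaluate_sample(img, "a man is talking to a woman.", "a beard man.",
                      "a woman with earrings.", (0, 100, 250, 500), (260, 100, 510, 500)))
```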
4.2 Implementation Details We evaluate our method under the following conditions. In all experiments, we used Stable Diffusion [Rombach et al., 2022] as our diffusion model, where the diffusion process is defined over a latent space $I = \mathbb{R}^{64 \times 64 \times 4}$, and a decoder is trained to reconstruct natural images at a higher resolution $[0, 1]^{512 \times 512 \times 3}$. We use the public implementation of Stable Diffusion by HuggingFace, specifically Stable Diffusion v2.1 trained on the LAION-5B dataset [Schuhmann et al., 2022], as the pre-trained image generation model. We also set the Euler Discrete Scheduler [Karras et al., 2022] as the noise scheduler. As our layout control (see 3.3) we use the backward guidance [Chen et al., 2023]. All the experiments run on one A30 GPU. We compare our GLoD with other state-of-the-art training-free methods, including Training-free layout control [Chen et al., 2023] and MultiDiffusion [Bar-Tal et al., 2023], and strong baselines, including OpenAI DALL-E 2 [Ramesh et al., 2022] and Adobe Firefly [Adobe, 2023]. We use the publicly available official codes and websites, and follow their instructions. 4.3 Image Generation with GLoD GLoD for a single object. We first demonstrate global-local compositions for a single object using GLoD for a better understanding, while our main targets are more complex scenes including multiple objects as shown in Fig. 1. In Fig. 4, our method generates diverse samples which comply with compositions of a global context (e.g., a cat is walking) and a local detail (e.g., a walking cat is pink). [Figure 4: GLoD for a single object, composing global contexts \u2019a cat is {walking, sitting, jumping, lying}\u2019 with local details \u2019a {yellow, pink, long-haired} cat\u2019. The images in the first column and \u2019Local\u2019 columns are sampled only from the global context (global images) and the local detail (local images) as an input prompt, respectively. The images in \u2019Composed\u2019 columns are sampled using our method, which effectively applies the local detail (e.g., long-haired) to the object in the image while preserving the global contexts (i.e., object layouts and object postures).] [Figure 5: GLoD for multiple objects, for the prompt \u2019a black sheep standing and another white sheep sitting.\u2019 with a layout. Panels: (a) DALL-E 2, (b) Firefly, (c) Stable Diffusion, (d) Layout Control, (e) Ours. Our method (e) can control the attributes of each sheep, while the other methods fail to reflect the specified attributes to the correct targets.] The images in the first column and \u2019Local\u2019 columns are sampled only from the global context (global images) and the local detail (local images) as an input prompt, respectively. The images in \u2019Composed\u2019 columns are sampled using our method, where the goal is to apply the local detail (i.e., the specified visual attribute) to the object in the image while preserving the global contexts. Although we generated all the images with the same seed, the posture of the cat is largely different in the corresponding global image and local image (e.g., the image of \u2019a cat is sitting\u2019 vs the image of \u2019a sitting cat is pink\u2019). Our method can effectively generate the images from the global context and the local detail along with the layout (see \u2019Composed\u2019). Note that this is image generation, not editing. The generated image retains most of the global contexts, including postures and head directions. In a few cases, the visual attribute of the object changes only partially (e.g., composition of \u2019a cat is lying\u2019 and \u2019a lying cat is yellow\u2019). GLoD for multiple objects. We then compare GLoD with the other state-of-the-art methods in generating more complex scenes including multiple objects. In Fig. 5, we try to generate images of a complex scene where there are multiple objects of the same category (sheep in this case) and each of them has different attributes (one sheep is black and standing, another sheep is white and sitting). We show the first four images generated by the official web applications of (a) DALL-E 2 and (b) Firefly in the first and second columns, respectively. They can generate high-quality images, but they often fail to reflect the specified attributes to the correct targets. We then find four seeds which generate failure samples of Training-free layout control and generate images using Stable Diffusion and our method with those seeds. Both the layout control and our method use the same object layout as an additional input. We set \u2019a black sheep and another white sheep\u2019 as a global context, \u2019a sheep is black and standing\u2019 as a local detail of one sheep, and \u2019a sheep is white and sitting\u2019 as a local detail of the other sheep. We make the global context similar to the original prompt to compare the generated images easily. Figure 5 shows that our method effectively controls the attributes of each object in the image. Quantitative evaluation. Table 1 shows the quantitative evaluation of controllability over the global context and the local details.
Methods | $S_g$\u2191 | $S_{ls}$\u2191 | $S_{lo}$\u2191 | $S_{gl}$\u2191 | $S_i$\u2193
Stable Diffusion | 24.2 | \u2212 | \u2212 | \u2212 | \u2212
Layout control | 24.4 | 21.2 | 20.5 | 22.6 | 15.6
GLoD (ours) | 24.6 | 23.3 | 20.9 | 23.3 | 14.5
MultiDiffusion | 21.2 | 24.5 | 24.2 | 22.7 | 14.9
(Table 1: Evaluation of controllability over the global context ($S_g$) and the local details for a subject and an object ($S_{ls}$ and $S_{lo}$). $S_{gl}$ represents an average of the global alignment score $S_g$ and the local alignment scores $S_{l\{s,o\}}$. $S_i$ denotes an infection score that indicates how much undesirable effect the subject and the object receive.) Our method improves the local alignment scores $S_{ls}$ and $S_{lo}$ while keeping the global alignment score $S_g$ almost the same. Compared to MultiDiffusion, the proposed method shows a superior overall alignment score $S_{gl}$. Note that MultiDiffusion cannot handle the global prompt describing the object interaction, and thus shows a significantly lower global alignment score. The local alignment score for an object, $S_{lo}$, lags considerably behind that for a subject, $S_{ls}$.
We found that the subjects (e.g., the woman) often turn their backs to the camera as a result of complying with the given global prompt (e.g., a man is talking with a woman). Therefore, the alignment score $S_{lo}$ becomes low because some of the specified attributes are not visible in such cases. GLoD also improves the infection score $S_i$ against the baselines. This result indicates that our method reduces the misalignment between the prompt and the generated image. GLoD for complex scenes. Figure 1 shows other samples depicting more complex scenes, where we give an interaction between the objects as a global context and also specify the local details. Instead of giving a complex prompt such as \u2019a Husky wearing glasses is playing with a black short-hair cat\u2019, we decompose and handle the prompts separately (i.e., \u2019a Husky is playing with a cat\u2019, \u2019a Husky wearing glasses\u2019, and \u2019a cat is short-hair and black\u2019) to effectively synthesize complex visual scenes. 4.4 Global-Local Composition We compare our global-local composition with a conventional composition [Liu et al., 2022]. Since the conventional composition aims to compose independent concepts (e.g., foreground and background) by adding the estimated score for each concept, it often fails to compose overlapped concepts (e.g., \u2019running cat\u2019 and \u2019white cat\u2019) as shown in Fig. 6 (top). Our layer composition can effectively compose such overlapped concepts as shown in Fig. 6 (bottom). [Figure 6: Comparison between our global-local composition (bottom) and the conventional composition (top), composing \u2019a cat is running towards a ball\u2019 and \u2019a cat is white\u2019 across guidance weights. Our method can change the detail of the object while preserving the global context by composing the two prompts, whereas the conventional method often fails because it regards the prompts as two independent concepts.] [Figure 7: GLoD also enables layered image editing, where new objects can be added on a certain region using additional prompts. The final image (right end) can be generated in one inference by composing six prompts: \u2019a girl, bust up photo\u2019, \u2019a girl\u2019s face with earring\u2019, \u2019an earring with a big blue diamond\u2019, \u2019a girl has eyeglasses\u2019, \u2019a girl wearing a cap\u2019, and \u2019a logo on a cap\u2019.] 4.5 Image Editing with GLoD Figure 7 shows edited image samples with GLoD. GLoD enables layered image editing, where new objects can be added on a certain region using additional prompts. Details of the objects on a base layer (e.g., an earring) can be guided with the prompts on the upper layer (e.g., adding a diamond) with our layer composition. The final image (right end) can be generated in one inference by composing multiple prompts (six in this case). 5 Conclusion Image generation with simultaneous control over global contexts and local details is still an open challenge. We proposed GLoD, a simple and yet effective framework which composes a global prompt describing an entire image and local prompts specifying object details with a pre-trained diffusion model. Our framework can handle both global-global compositions and global-local compositions without requiring any additional training or fine-tuning. Through the qualitative and quantitative evaluations, we demonstrated that GLoD effectively generates images that include interactions between objects with detailed visual control, improving the alignment scores and reducing the undesirable effects.
A limitation we found is that the object appearance may change only partially when the latent of the object is significantly different between the global and the local layers." + }, + { + "url": "http://arxiv.org/abs/2404.15027v1", + "title": "Three dimensional end-to-end simulation for kilonova emission from a black-hole neutron-star merger", + "abstract": "We study long-term evolution of the matter ejected in a black-hole\nneutron-star (BH-NS) merger employing the results of a long-term\nnumerical-relativity simulation and nucleosynthesis calculation, in which both\ndynamical and post-merger ejecta formation are consistently followed. In\nparticular, we employ the results for the merger of a $1.35\,M_\odot$ NS and a\n$5.4\,M_\odot$ BH with the dimensionless spin of 0.75. We confirm the finding\nin the previous studies that thermal pressure induced by radioactive heating in\nthe ejecta significantly modifies the morphology of the ejecta. We then compute\nthe kilonova (KN) light curves employing the ejecta profile obtained by the\nlong-term evolution. We find that our present BH-NS model results in a KN light\ncurve that is fainter yet more enduring than that observed in AT2017gfo. This\nis due to the fact that the emission is primarily powered by the\nlanthanide-rich dynamical ejecta, in which a long photon diffusion time scale\nis realized by the large mass and high opacity. While the peak brightness of\nthe KN emission in both the optical and near-infrared bands is fainter than or\ncomparable to those of binary NS models, the time-scale maintaining the peak\nbrightness is much longer in the near-infrared band for the BH-NS KN model. Our\nresult indicates that a BH-NS merger with massive ejecta can observationally be\nidentified by the bright and long lasting ($>$two weeks) near-infrared\nemission.", + "authors": "Kyohei Kawaguchi, Nanae Domoto, Sho Fujibayashi, Kota Hayashi, Hamid Hamidani, Masaru Shibata, Masaomi Tanaka, Shinya Wanajo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "gr-qc" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Neutron star (NS) mergers are known to be among the most promising targets of the ground-based gravitational-wave (GW) detectors (LIGO: Aasi et al. 2015, Virgo: Acernese et al. 2015, KAGRA: Kuroda 2010) as well as one of the most important sources of high-energy astrophysical transients, such as gamma-ray bursts (GRB, Paczynski 1991; Nakar 2007; Berger 2014; Abbott et al. 2017c), kilonovae (KN, Li & Paczynski 1998; Kulkarni 2005; Metzger et al. 2010; Kasen et al. 2013; Tanaka & Hotokezaka 2013), jet-heated cocoons (Nakar & Piran 2017; Hamidani & Ioka 2023a,b), and synchrotron flares (Nakar & Piran 2011; Hotokezaka & Piran 2015; Hotokezaka et al. 2018; Margalit & Piran 2020). NS mergers are also considered to be important production sites of elements heavier than iron in the universe (Lattimer & Schramm 1974; Eichler et al. 1989; Freiburghaus et al. 1999; Cowan et al. 2021). The first detection of GWs from a binary neutron star (BNS) merger (GW170817; Abbott et al. 2017a) and its multi-wavelength electromagnetic (EM) counterparts (Abbott et al. 2017b) demonstrated that such simultaneous observations will provide a valuable opportunity to extend our knowledge of fundamental physics in extreme (strongly self-gravitating, high-density, and high-temperature) environments.
Among NS mergers, the mergers of black-hole neutron-star (BH-NS) binaries can provide us with interesting insights that are different from those of BNS mergers. While the mass ratios of the compact stars in BNS binaries are expected to be close to unity, BH-NS binaries can be more asymmetric in the mass ratio, and hence will provide a valuable opportunity to study higher-order GW multipole moments (Abbott et al. 2021). Also, if the NS is tidally disrupted before reaching the innermost circular orbit of the BH, an appreciable amount of NS matter can remain outside the remnant BH and be ejected from the system. Such ejecta formed during the NS tidal disruption, as well as the matter subsequently ejected during the evolution of the remnant BH-torus system, will be the source of various EM counterparts to the GW event. In addition, since BH-NS mergers can potentially produce a large amount of very low ($\lesssim 0.1$) electron fraction ($Y_e$) ejecta, the nucleosynthetic abundances can be different from those in the case of BNS mergers. In fact, it has been pointed out that BH-NS mergers can provide an explanation for the observed elemental abundances of a subclass of $r$-process-enhanced stars, the so-called \"actinide-boosted\" stars (Wanajo et al. 2022). To extract the physical information from the observation of EM counterparts, accurate modeling of the light curves and spectra consistent with the source properties is crucial. Since the detection of GW170817, light curve modeling of EM counterparts, particularly for KNe, has been significantly developed in this decade. In particular, studies employing numerical-simulation-based/motivated ejecta profiles and performing radiative transfer (RT) simulations with realistic heating rates and/or detailed opacity tables enable us to directly connect the properties of the progenitor binary to the observables (e.g., Kasen et al. 2013, 2015; Barnes et al. 2016; Wollaeger et al. 2018; Tanaka et al. 2018; Wu et al. 2019; Kawaguchi et al. 2018; Hotokezaka & Nakar 2020; Kawaguchi et al. 2020; Korobkin et al. 2021; Bulla et al. 2021; Zhu et al. 2021; Barnes et al. 2021; Nativi et al. 2020; Kawaguchi et al. 2021; Wu et al. 2022; Just et al. 2022; Just et al. 2023). Previous studies showed that the complex ejecta profile in the presence of multiple ejecta components from different mass ejection processes induces significant spatial dependences in radioactive heating as well as strong geometrical effects in radiative transfer, which have great impacts on the resulting light curves (Kasen et al. 2015; Kawaguchi et al. 2018; Kawaguchi et al. 2020; Bulla 2019; Zhu et al. 2020; Darbha & Kasen 2020; Korobkin et al. 2021; Almualla et al. 2021; Kedia et al. 2023). Hence, the employment of a realistic ejecta profile consistently taking the multiple ejecta components into account is essential for the accurate prediction of KN light curves. One of the important missing links for the accurate prediction of KNe is the long-term hydrodynamics evolution of the ejecta after its formation. While the ejecta formation takes place on a time scale of $\lesssim 1$\u201310 s after the onset of a merger (Hayashi et al. 2022, 2023), the KN emission peaks on a much longer time scale of 0.1\u201310 d (Li & Paczynski 1998; Kulkarni 2005; Metzger et al. 2010; Kasen et al. 2013; Tanaka & Hotokezaka 2013), by which time the homologous expansion of the ejecta has been achieved.
Since the ejected matter can be accelerated by the ejecta pressure gradient and interact with different ejecta components during these epochs, the ejecta profile at the time of the KN emission cannot be trivially inferred from the ejecta properties at the time of formation. In fact, Rosswog et al. (2014) and Grossman et al. (2014) performed pseudo-Newtonian hydrodynamics simulations for BNS mergers, and studied the long-term evolution of the dynamical ejecta component until it reached the homologously expanding phase. They found that the thermal pressure induced by radioactive heating in the ejecta significantly changes the ejecta morphology (see also Foucart et al. (2021)). Fern\u00e1ndez et al. (2015) and Fern\u00e1ndez et al. (2017) performed long-term simulations for BH-NS mergers to investigate the effect of the interplay between the dynamical and post-merger components, and found that the interaction of the multiple ejecta components can modify the ejecta profile. Thus, to accurately predict KN light curves, it is also important to follow the hydrodynamics evolution of the multiple ejecta components until the homologously expanding phase. Recently, the development of numerical simulation techniques and the significant increase in computational resources have enabled us to consistently follow NS mergers from the onset of the merger to the time at which the ejecta formation saturates (Kiuchi et al. 2022; Fujibayashi et al. 2020b, 2023; Shibata et al. 2021; Hayashi et al. 2022, 2023; Kiuchi et al. 2023, 2024; Just et al. 2023; Gottlieb et al. 2023). In this paper, we study the KN emission associated with a BH-NS merger employing the results obtained by a numerical-relativity (NR) simulation and nucleosynthesis calculation consistently following the entire ejecta formation from the merger (Hayashi et al. 2022, 2023; Wanajo et al. 2022). In particular, we focus on the KN emission from $\approx 1$ d after the onset of the merger for a model with a large amount of dynamical ejecta, $\approx 0.04\,M_\odot$, in this paper.^1 This paper is organized as follows: In Section 2, we describe the method employed in this study. In Section 3, we describe the BH-NS model we study in this work. In Section 4, we present the properties of the ejecta obtained by the long-term hydrodynamics evolution. In Section 5, we present the KN light curves obtained by the RT simulations. Finally, we discuss the implications of this paper in Section 6. Throughout this paper, $c$ denotes the speed of light.", + "main_content": "2.1 Hydrodynamics simulation In a BH-NS merger, matter ejected by various mechanisms is expected to experience hydrodynamics interactions between different ejecta components before eventually reaching a homologous expansion phase at $\sim 0.1$ d (Kawaguchi et al. 2021). In order to obtain the spatial profiles of the rest-mass density, elemental abundances, and radioactive heating rate after 0.1 d, which are necessary for an accurate prediction of the KN, we perform hydrodynamics simulations using the outflow data obtained by the NR simulations as boundary conditions, as in our previous studies. To distinguish it from the NR simulation, the present hydrodynamics simulation is referred to as the HD simulation in this paper. The simulation code for the HD simulation is a 3D extension of the code developed in our previous studies (Kawaguchi et al. 2021, 2022, 2023).
This code solves the relativistic Euler equations in a spherical coordinate system. In order to incorporate the effect of gravity, a fixed background metric for a non-rotating black hole expressed in isotropic coordinates is used. See Appendix A for the formulation of the basic equations. The effect of radioactive heating is incorporated in the same way as in the previous studies (Kawaguchi et al. 2021, 2022, 2023). See Appendix B for the method of particle tracing used to employ the nucleosynthesis results in the HD simulation. We note that equatorial symmetry is imposed for the HD simulation following the setup of the NR simulation. For the equation of state (EOS), we consider contributions from both gas and radiation: the total pressure $P$ is given by $P = P_{\rm gas} + P_{\rm rad}$ with $P_{\rm gas} = n_{\rm B} k_{\rm B} T$ and $P_{\rm rad} = a_{\rm rad} T^4/3$, where $n_{\rm B}$, $T$, $k_{\rm B}$, and $a_{\rm rad}$ are the baryon number density, temperature, Boltzmann constant, and radiation density constant, respectively. Here, we simplified the gas pressure assuming that atoms are fully ionized with an electron fraction of unity, and that the gas pressure is dominated by the contribution from electrons (since the average atomic mass number is expected to be much larger than unity). We note that, although this simplification may overestimate the gas pressure component, the contribution of the gas pressure is nevertheless found to be subdominant. In fact, we confirm that the resulting ejecta profiles as well as the KN light curves are essentially unchanged even if we employ the ideal-gas EOS with an adiabatic index of $\Gamma = 4/3$, which corresponds to the case in which the radiation pressure dominates. ^1 During the submission process of this paper, LIGO and Virgo have detected GWs plausibly from a BH-NS merger (The LIGO Scientific Collaboration et al. 2024b). The alert shows that the system is likely to contain a NS, and the mass of the other object is likely to be between $3\,M_\odot$ and $5\,M_\odot$ with a 50% probability. The probability for matter to be present outside the remnant object after the merger is also high ($>99$%). Hence, the system may by chance be similar to the BH-NS model studied in this paper (a binary of a $1.35\,M_\odot$ NS with a radius of $\approx 13.2$ km and a $5.4\,M_\odot$ BH with the dimensionless spin of 0.75). We note that the amount of the dynamical ejecta is broadly the same ($\approx 0.04\,M_\odot$) also for a BH-NS merger with the same NS mass, NS radius, and dimensionless BH spin but with a larger BH mass ($8.1\,M_\odot$) (Hayashi et al. 2022). The result of this paper indicates that, if this event is a BH-NS merger with a significant amount of dynamical ejecta, it may be associated with a kilonova whose near-infrared emission is bright and long-lasting.
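As a concrete reading of this simplified gas-plus-radiation EOS, the following minimal Python sketch evaluates $P = n_{\rm B} k_{\rm B} T + a_{\rm rad} T^4/3$; the CGS constants are standard values and the sample inputs are ours, not numbers from the paper.

```python
# CGS constants (standard values, not taken from the paper)
K_B = 1.380649e-16        # Boltzmann constant [erg/K]
A_RAD = 7.5657e-15        # radiation density constant [erg cm^-3 K^-4]
M_U = 1.6605390666e-24    # atomic mass unit [g]

def total_pressure(rho: float, temperature: float) -> float:
    """P = P_gas + P_rad with P_gas = n_B * k_B * T and P_rad = a_rad * T^4 / 3.

    Following the simplification in the text: fully ionized matter with an
    electron fraction of unity, so the electron number density entering the
    gas pressure is approximated by the baryon number density n_B = rho / m_u,
    and the pressure of the heavy nuclei themselves is neglected.
    """
    n_b = rho / M_U
    p_gas = n_b * K_B * temperature
    p_rad = A_RAD * temperature**4 / 3.0
    return p_gas + p_rad

# illustrative ejecta-like conditions (values are assumptions for the example)
print(total_pressure(rho=1e-10, temperature=1e4))  # pressure in erg/cm^3
```

Since the gas term is subdominant in the regime of interest, this is consistent with the statement above that a pure $\Gamma = 4/3$ ideal-gas EOS (radiation-pressure-dominated limit) gives essentially the same ejecta profiles and light curves.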
Note that magnetic field effects are not taken into account in our present HD simulations. As a consequence, and due to the coarse grid resolution in the polar region, the relativistic jet outflow launched in the NR simulation is not well resolved in the present HD simulations. Previous studies suggest that the presence of the jet may affect the ejecta profile, and hence the KN light curves, near the jet axis (Nativi et al. 2020; Klion et al. 2021). Since resolving the propagation of the relativistic jet in long-term three-dimensional simulations requires high computational costs, we leave the investigation of the effect of the jet for future work. We employ the same time origin for the HD simulations as in the NR simulations. Uniform grids with $N_\theta$ and $N_\varphi$ grid points are prepared for the polar angle $\theta$ and the longitudinal angle $\varphi$, respectively. For the radial direction, the following non-uniform grid structure is employed for the $j$-th radial grid point: $\ln r_j = \frac{j-1}{N_r}\,\ln\!\left(\frac{r_{\rm out}}{r_{\rm in}}\right) + \ln r_{\rm in}, \quad j = 1, \ldots, N_r + 1$, (1) where $r_{\rm in}$ and $r_{\rm out}$ denote the inner and outer radii of the computational domain, respectively, and $N_r$ denotes the total number of radial grid points. In the present work, we employ $(N_r, N_\theta, N_\varphi) = (1024, 64, 128)$, and $r_{\rm in}$ and $r_{\rm out}$ are initially set to be 3,000 km and $10^3\,r_{\rm in}$, respectively. We confirm that this grid resolution is sufficiently high for the purpose of this study by checking that the ejecta profile and the KN light curves are semi-quantitatively unchanged for an HD simulation with $(N_r, N_\theta, N_\varphi) = (512, 32, 64)$ (less than 10% and 3% difference in the total bolometric luminosity at 1 d and 2 d, respectively). The hydrodynamics properties of the outflow are extracted at $r = r_{\rm ext}$ in the NR simulations of Hayashi et al. (2022, 2023), and the time-sequential data are employed as the inner boundary condition of the present HD simulations. The outflow data obtained from the NR simulation run out at $t > 1$ s, after which the HD simulation is continued by setting a very small floor value for the rest-mass density at the inner boundary. To follow the evolution of the ejecta even after the high-velocity edge of the outflow reaches the outer boundary of our HD simulation, radial grid points are added outside the original outer boundary, while at the same time the innermost radial grid points are removed so as to keep the total number of radial grid points fixed. By this prescription, the value of $r_{\rm in}$ is increased in the late phase of the HD simulations. The outermost radial grids are added so that the location of the outer radial boundary, $r_{\rm out}$, is always $10^3\,r_{\rm in}$. Note that the region of $r \gtrsim 10^{-3}\,ct$ is always covered by the computational domain up to $t = 0.1$ d in the HD simulations. The so-called Courant\u2013Friedrichs\u2013Lewy (CFL) condition restricts the time steps in the HD simulation to ensure numerical stability. For our setup, the time interval should be approximately less than the smallest value among $\Delta r_{\rm min}/c$, $r_{\rm in}\Delta\theta_{\rm min}/c$, and $r_{\rm in}\sin\theta_{\rm min}\Delta\varphi_{\rm min}/c$, with $\theta_{\rm min}$ being the minimum cell-center value of the $\theta$ coordinate and $\Delta r_{\rm min}$, $\Delta\theta_{\rm min}$, and $\Delta\varphi_{\rm min}$ the minimum cell sizes in the $r$, $\theta$, and $\varphi$ directions, respectively.
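A short sketch may help here: Eq. (1) defines a logarithmically uniform radial grid, and the time-step bound follows from the three CFL conditions just listed. This is an illustration only; the function names, the assumed $\theta \in [0, \pi/2]$ angular extent (motivated by the equatorial symmetry mentioned above), and the printed diagnostics are ours, not the authors' code.

```python
import numpy as np

C_LIGHT = 2.99792458e10  # speed of light [cm/s]

def radial_grid(r_in: float, r_out: float, n_r: int) -> np.ndarray:
    """Eq. (1): ln r_j = (j-1)/N_r * ln(r_out/r_in) + ln r_in, j = 1..N_r+1,
    i.e. N_r + 1 points with constant logarithmic spacing (constant dr/r)."""
    j = np.arange(1, n_r + 2)
    return np.exp((j - 1) / n_r * np.log(r_out / r_in) + np.log(r_in))

def cfl_timestep(r: np.ndarray, n_theta: int, n_phi: int) -> float:
    """CFL-like bound: dt < min(dr_min, r_in*dtheta_min, r_in*sin(theta_min)*dphi_min)/c,
    assuming uniform angular grids over theta in [0, pi/2] (equatorial symmetry)
    and phi in [0, 2*pi); theta_min is taken as the first cell-center value."""
    d_theta = (np.pi / 2) / n_theta
    d_phi = 2 * np.pi / n_phi
    theta_min = 0.5 * d_theta
    r_in = r[0]
    return min(np.diff(r).min(), r_in * d_theta,
               r_in * np.sin(theta_min) * d_phi) / C_LIGHT

# setup quoted in the text: N_r = 1024, r_in = 3,000 km, r_out = 10^3 r_in
r = radial_grid(3.0e8, 3.0e11, 1024)
print(r[0], r[-1])                            # 3.0e8 ... 3.0e11 cm
print(cfl_timestep(r, n_theta=64, n_phi=128))  # the sin(theta_min) term is the smallest
```

Evaluating these numbers shows that the azimuthal condition near the pole is by far the most restrictive, which motivates the polar averaging prescription described next.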
For the present grid setup, the most stringent constraint comes from the last condition, $r_{\rm in}\sin\theta_{\rm min}\Delta\varphi_{\rm min}/c$, and this restricts the time interval to be so small that the computational cost becomes practically quite high. To relax this condition, we average the conservative variables of the hydrodynamics over the $\varphi$ direction for all the cells located at $\theta \le \theta_{\rm c}$ in each sub-step of the evolution. By this prescription, the HD simulation remains numerically stable as long as the time interval is within $r_{\rm in}\sin\theta_{\rm c}\Delta\varphi_{\rm min}/c$. For the present study, we choose $\theta_{\rm c}$ to be $\pi/24$, and we confirm that the resulting light curves are essentially unchanged even if we employ $\theta_{\rm c} = \pi/12$. 2.2 Radiative-transfer simulation The light curves of the KNe are calculated using a wavelength-dependent RT simulation code (Tanaka & Hotokezaka 2013; Tanaka et al. 2017, 2018; Kawaguchi et al. 2020; Kawaguchi et al. 2021). In this code, the photon transfer is simulated by a Monte Carlo method for given ejecta profiles composed of the density, velocity, and elemental abundances under the assumption of homologous expansion. The time-dependent thermalization efficiency is taken into account following the analytic formulae derived by Barnes et al. (2016). The ionization and excitation states are determined under the assumption of local thermodynamic equilibrium (LTE) by using the Saha ionization and Boltzmann excitation equations. For the photon-matter interaction, bound-bound, bound-free, and free-free transitions and electron scattering are taken into account for the transfer of optical and infrared photons (Tanaka & Hotokezaka 2013; Tanaka et al. 2017, 2018). The formalism of the expansion opacity (Friend & Castor 1983; Eastman & Pinto 1993; Kasen et al. 2006) and the new line list derived in Domoto et al. (2022) are employed for the bound-bound transitions. In this line list, the atomic data of VALD (Piskunov et al. 1995; Kupka et al. 1999; Ryabchikova et al. 2015) or Kurucz\u2019s database (Kurucz & Bell 1995) are used for $Z = 20$\u201329, while the results of atomic calculations from Tanaka et al. (2020) are used for $Z = 30$\u201388. For Sr II, Y I, Y II, Zr I, Zr II, Ba II, La III, and Ce III, which are the ions producing strong lines, the line data are replaced with those calibrated with the atomic data of the VALD and NIST databases (Kramida et al. 2021). Note that, since our atomic data include only up to the triple ionization for all the ions, the early phase of the light curves ($t \le 0.5$ d) may not be very reliable due to the high ejecta temperature (see Banerjee et al. 2020 for work taking the opacity contribution from higher ionization states into account). The RT simulations are performed from $t = 0.1$ d to 30 d employing the density and internal energy profiles of the HD simulations at $t = 0.1$ d and assuming homologous expansion for $t > 0.1$ d. The spatial distributions of the heating rate and elemental abundances are determined from the table obtained by the nucleosynthesis calculations, referring to the injection time and angle of the fluid elements.
Note that, as an approximation, the elemental abundances at $t = 1$ d are used during the entire time evolution in the RT simulations to reduce the computational cost; this simplified prescription gives only a minor systematic error in the resulting light curves, as illustrated in Kawaguchi et al. (2021). A three-dimensional cylindrical grid is applied for storing the local elemental abundances and radioactive heating rate as well as for solving the local temperature and opacity. The cylindrical radial, vertical, and longitudinal directions are covered by 50, 50, and 32 cells, respectively, spanning the coordinate ranges $(0, 0.6\,ct)$, $(0, 0.6\,ct)$, and $(0, 2\pi)$. We confirm that the resulting light curves are unchanged when the cell numbers are reduced from 50, 50, and 32 to 40, 40, and 28, or when the maximum cylindrical radius and vertical coordinate are extended from $0.6\,ct$ to $0.75\,ct$. 3 THE BH-NS MODEL In this work, we employ the NR outflow profiles and nucleosynthetic data obtained in Hayashi et al. (2022, 2023) and Wanajo et al. (2022) as the input for the HD simulations. In particular, we employ the outflow data of model Q4B5H in Hayashi et al. (2022). For this model, a BH-NS binary in which the NS mass, BH mass, and dimensionless spin are initially $1.35\,M_\odot$, $5.4\,M_\odot$ (thus 4 times larger than the NS mass), and 0.75, respectively, is considered with the DD2 EOS (Banik et al. 2014). A poloidal magnetic field with a maximum strength of $5 \times 10^{16}$ G is initially set in the NS, while the resulting ejecta profile is not sensitive to the initial magnetic-field strength and configuration (Hayashi et al. 2023). We set $6.6\,M_\odot$ as the BH mass of the metric employed in the HD simulations, which approximately agrees with the sum of the remnant BH mass and the matter outside the BH measured at $t = 0.1$ s. For model Q4B5H, the NS experiences significant tidal disruption before it reaches the innermost stable circular orbit of the binary ($t \approx 10$ ms). This leads to the formation of massive ejecta and a torus around the remnant BH. The ejecta formed at the time of the NS tidal disruption, often referred to as the dynamical ejecta, are concentrated in the vicinity of the equatorial plane and exhibit a significantly non-axisymmetric geometry. The dynamical ejecta typically have a low electron fraction (0.03\u20130.07) because they are driven primarily by gravitational torque and do not go through significant weak processes in the merger. Subsequently, the magnetic field is amplified in the remnant torus, and an effective viscosity is induced by the magnetohydrodynamical turbulence driven by the magnetorotational instability (Balbus & Hawley 1998). Initially, viscous heating in the torus is balanced by neutrino cooling. As the disk rest-mass density and temperature drop due to the expansion driven by angular momentum transport, neutrino cooling becomes inefficient and viscosity-driven mass ejection sets in ($t \approx 0.2$\u20130.3 s). In parallel, the magneto-centrifugal force in the central region might play a role in enhancing the mass ejection. Mass ejection in this stage, referred to as the post-merger mass ejection, lasts for $\sim 1$\u201310 s.
In contrast to the dynamical ejecta, since thermal and weak processes play important roles during the post-merger stages, the electron fraction of the post-merger ejecta has a broad distribution in the range of 0.1\u20130.4 with its peak at 0.24. For model Q4B5H, the masses of the dynamical and post-merger ejecta are found to be $0.045\,M_\odot$ and $0.028\,M_\odot$, respectively, at the end of the NR simulation. It is worth remarking that the combination of the dynamical and post-merger ejecta approximately reproduces a solar-like $r$-process pattern (Wanajo et al. 2022). In this paper, we study the ejecta and KN properties for one case of a BH-NS merger among the available NR results, as a first step toward the end-to-end kilonova simulation. However, we should note that the disk and ejecta masses formed in BH-NS mergers can have a large variety depending on the binary parameters, such as the BH and NS masses, the BH spin, and the NS radius (Rosswog 2005; Shibata & Taniguchi 2008; Etienne et al. 2009; Lovelace et al. 2013; Kyutoku et al. 2015; Foucart et al. 2018), as well as on the adopted EOS (Hayashi et al. 2023). For example, a smaller amount of disk and ejecta would be formed in the case that the NS radius is smaller due to a softer EOS, such as the SFHo EOS (Steiner et al. 2013; Hayashi et al. 2023). Hence, the resulting properties of the KN light curves can also have a large diversity. Therefore, we emphasize that the ejecta and KN properties found for model Q4B5H with the DD2 EOS may not be universal for every case of BH-NS mergers, and we leave the investigation of the binary-parameter and EOS dependences for future work. [Figure 1. Time evolution of the total rest mass $M_{\rm eje}$ [$M_\odot$] in the computational domain of the HD simulation, as a function of $t$ [s] (the purple curve labeled \u201ctotal\u201d). The green curve denotes the same as the purple curve but only for the matter which satisfies the geodesic criterion ($u_t < -1$, where $u_t$ is the lower time component of the four velocity). The blue dashed curve with the label \u201cinput\u201d denotes the rest mass obtained by integrating the mass flux of the NR outflow data, which is employed as the inner boundary condition of the HD simulation. The black dashed line denotes the time at which the NR outflow data run out.] 4 RESULTS: HYDRODYNAMICS SIMULATION 4.1 Ejecta mass evolution Figure 1 shows the total rest mass in the computational domain as a function of time. We can consider that the ejecta have reached the homologously expanding phase at $t = 0.1$ d, because the total internal energy of the ejecta is smaller by four orders of magnitude than the total kinetic energy. In general, two distinct ejecta components are seen in Figure 1. One, found at $t_{\rm in} \sim 0.1$ s, corresponds to the dynamical ejecta, and the other, found at $t_{\rm in} \gtrsim 0.5$ s, corresponds to the post-merger ejecta. After the NR outflow data run out at $t \approx 1$ s, we impose a floor value on the rest-mass density at the inner boundary. It is clearly seen in Figure 1 that the total mass in the computational domain decreases after that time, indicating that matter is artificially falling back and escaping through the inner boundary. This happens because the pressure support from the inner boundary vanishes after the outflow data run out.
However, the total mass of the matter on gravitationally unbound orbits keeps increasing even after the time when the NR outflow data run out, as a consequence of the acceleration of the matter in the presence of the thermal pressure gradient. After $t \approx 100$ s, approximately all the ejecta matter remaining in the computational domain becomes gravitationally unbound, and the value of the total mass in the computational domain converges to $0.063\,M_\odot$. This value is smaller than the ejecta mass estimated in Hayashi et al. (2023) by $\approx 0.01\,M_\odot$. We interpret this discrepancy as a consequence of the mismatch in the employed EOS between the NR and HD simulations and the inconsistency of the matter flux at the inner boundary. In fact, Fern\u00e1ndez et al. (2015) show similar results: they performed BH-disk simulations to follow the formation of the post-merger wind ejecta and used the extracted ejecta properties as the inner boundary condition of a subsequent simulation for the long-term ejecta evolution, in the same manner as our present work. They found that the mass of the post-merger wind ejecta which becomes gravitationally unbound in the subsequent simulation decreases by a factor of $\approx 2$ from the values estimated in the BH-disk simulations. They interpreted this difference as a consequence of the discrepancy between the stresses at the inner boundary and those that would be obtained in a self-consistent simulation. Nevertheless, by performing the HD simulation with artificially modified inner boundary conditions, we confirmed that our main results are essentially the same and the modification to the resulting KN light curves is only minor: we performed an HD simulation in which, after the NR outflow data run out at 1 s, the ejecta injection is sustained with the final value of the mass flux. By this prescription, the total ejecta mass in the HD simulation at the homologously expanding phase increases by $0.01\,M_\odot$, but the bolometric luminosity increases by at most $\approx 10$%, since the unbound matter added by this prescription has velocities of only $\lesssim 0.05\,c$ and hence a long diffusion time scale, which gives a minor contribution to the brightness of the emission. 4.2 Ejecta profiles at the homologously expanding phase Figures 2, 3, and 4 show the rest-mass density and electron fraction ($Y_e$) profiles of the ejecta in various two-dimensional cross sections at $t = 0.1$ d obtained by the HD simulation. Here, the value employed as the initial condition of the nucleosynthesis calculation is shown in the $Y_e$ profile (see Appendix B and Wanajo et al. (2022) for the details). The center of mass for the matter with $Y_e < 0.1$ is located in the direction of $\varphi \approx 141^\circ$, with $\varphi$ being the longitudinal angle measured from the $+x$ axis. The longitudinal angles of the meridional planes shown in Figures 3 and 4 are selected to show the profiles in which the dynamical ejecta are approximately mostly (panel b: $\varphi \approx 156^\circ$), moderately (panels a and c: $\varphi \approx 66^\circ$ and $\varphi \approx 246^\circ$), and least (panel d: $\varphi \approx 336^\circ$) present. As we mentioned above, the entire ejecta have reached the homologously expanding phase at this epoch.
Broadly speaking, the dynamical and post-merger ejecta are present in the regions where the cylindrical radius is larger and smaller than ≈ 0.05–0.1 ct, respectively. These two components are clearly distinguishable by the value of Ye. The value of Ye for the dynamical ejecta is typically below 0.1, which primarily reflects the original Ye values of the disrupted NS. On the other hand, the post-merger ejecta have a wider range of Ye values, from 0.1 to 0.4. The rest-mass density profile of the dynamical ejecta exhibits a clearly non-axisymmetric geometry, with the mass mostly distributed in a fan-like shape within 70° ≲ φ ≲ 250°. The dynamical ejecta extend up to ≈ 0.5 ct in the cylindrical-radius direction, while their vertical extent is ≈ 0.2 ct; the aspect ratio of the cylindrical and vertical extents is thus close to unity. This is in contrast to the fact that the dynamical ejecta are launched initially confined around the equatorial plane within a latitudinal opening angle of ∼ 10° (Kyutoku et al. 2013; Foucart et al. 2014). As we show below, this ejecta expansion is due to thermal pressure enhanced by radioactive heating. On the other hand, the post-merger ejecta exhibit an approximately axisymmetric shape. They consist of two distinct components, one with an approximately spherical shape and the other with a cone-like shape. The former is concentrated in the region within ≈ 0.05 ct, while the latter is more extended in the vertical direction, with a polar opening angle of ≈ 10° and a vertical extent reaching ≈ 0.25 ct. As we show below, this complex geometry of the post-merger ejecta results from the interaction with the dynamical ejecta, which expand significantly due to thermal pressure enhanced by radioactive heating.

Figure 5 shows the rest-mass density and electron fraction profiles of the ejecta on the equatorial and meridional planes at t = 0.1 d obtained by the HD simulation with radioactive heating switched off. In the presence of radioactive heating, the dynamical ejecta expand significantly due to the increase in thermal pressure, and the inhomogeneities in the rest-mass density are also smoothed out, as clearly seen in Figures 2 and 3. These results are consistent with the findings of Rosswog et al. (2014) and Grossman et al. (2014) in the context of BNSs, and of Fernández et al. (2015) and Darbha et al. (2021) in the context of BH-NSs. In fact, the resulting aspect ratio of the dynamical ejecta is found to be close to unity, as for model H4 in Darbha et al. (2021). The radioactive heating rate of the dynamical ejecta in our model also agrees with that in Darbha et al. (2021) (see Figure 6). The comparison between Figure 4 and Figure 5 shows that the profile of the post-merger ejecta is affected by the modification of the dynamical-ejecta profile. Figure 5 shows that, in the absence of radioactive heating, the post-merger ejecta exhibit a prolate shape with extensions of 0.1 ct and 0.25 ct in the equatorial and vertical directions, respectively.
On the other hand, radioactive heating significantly expands the dynamical ejecta, which compresses the post-merger ejecta in 0.05 ct ≤ z ≤ 0.15 ct and confines them to the region of ≲ 0.05 ct, as found in Figure 4. This happens because of the higher typical electron fraction of the post-merger ejecta: the higher electron fraction leads to a relatively small radioactive heating rate and hence a small enhancement of the pressure compared to the dynamical component. The significant expansion of the dynamical ejecta and the resulting confinement of the post-merger ejecta in the presence of radioactive heating are not found in the BNS models of our previous studies (Kawaguchi et al. 2021, 2022, 2023). This is because the dynamical ejecta of the present BH-NS model and of the BNS models studied in Rosswog et al. (2014) and Grossman et al. (2014) are massive compared to the post-merger ejecta and are also much more confined around the equatorial plane than in the BNS models of our previous studies. As a result, a higher internal energy density and a higher thermal pressure are realized. Hence, the importance of radioactive heating will depend on the density and isotopic-abundance profiles of the ejecta, which can vary even among BH-NS mergers depending on the binary parameters or the adopted EOS.

5 RESULTS: KN LIGHT CURVES

5.1 Bolometric light curves

Figure 7 shows the bolometric luminosity calculated by the RT simulation employing the ejecta rest-mass density, elemental abundance, and radioactive heating-rate profiles obtained by combining the results of the HD simulation and the nucleosynthesis calculation (Wanajo et al. 2022). The total energy deposition rate, taking the thermalization efficiency into account, is also plotted in Figure 7. As mentioned in Section 2, our atomic data include only up to triple ionization for all ions, and the opacity of the ejecta in the early phase (t ≤ 0.5 d) may be underestimated due to the high temperature (≳ 20,000 K). Hence, hereafter we focus only on the light curves after 1 d.

Figure 2. Rest-mass density and electron fraction (Ye) profiles on the equatorial plane at t = 0.1 d. The yellow dotted lines denote the angles for which the meridional ejecta profiles are shown in Figures 3 and 4. The white dotted curves denote the longitudinal angle ranges in which the KN light curves shown in Figures 8 and 9 are obtained. The value employed as the initial condition of the nucleosynthesis calculation is shown in the Ye profile (see Appendix B and Wanajo et al. (2022) for details).

For 1–10 d, the bolometric luminosity is approximately constant at 1–2 × 10^41 erg/s; it decreases only slowly, changing by no more than a factor of 2 during this epoch. After 10 d, however, the bolometric luminosity starts to decrease more rapidly, falling by a factor of 5 during 10–30 d. This faint and long-lasting emission reflects the fact that it is primarily powered by the lanthanide-rich dynamical ejecta, in which a long photon diffusion time scale is realized by the large mass and high opacity. This behaviour of the bolometric luminosity is qualitatively the same as that found in the models with massive dynamical ejecta studied previously (MS1Q3a75 and H4Q3a75 in Tanaka et al. (2014)).
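The long photon diffusion time invoked here can be checked with the standard order-of-magnitude peak-time estimate for homologously expanding ejecta, t_peak ∼ √(κM/(4πcv)). This is our own back-of-the-envelope sketch, not a result from the paper; the opacity κ ≈ 10 cm²/g is an assumed typical value for lanthanide-rich material:

```python
import numpy as np

MSUN = 1.989e33  # g
C = 2.998e10     # cm/s

def peak_time_days(kappa, m_msun, v_over_c):
    """Arnett-like diffusion (peak) time: sqrt(kappa * M / (4*pi*c*v))."""
    t_sec = np.sqrt(kappa * m_msun * MSUN / (4.0 * np.pi * C * v_over_c * C))
    return t_sec / 86400.0

# kappa ~ 10 cm^2/g, M ~ 0.045 Msun, v ~ 0.1c  ->  ~10 d,
# consistent with the steepening of the light curve after 10 d.
print(f"{peak_time_days(10.0, 0.045, 0.1):.0f} d")
```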
The bolometric luminosity converges to the total deposition rate after 20 d, which suggests that thermal photons created in the ejecta diffuse out immediately after this epoch. As we show below, however, the viewing-angle dependence of the emission due to the aspherical profile of the ejecta opacity still plays a role up to 30 d.

Our present BH-NS KN model shows light curves significantly distinct from the observations of AT2017gfo. Specifically, our BH-NS model is fainter than AT2017gfo by a factor of 2 around ∼ 1 d. However, due to its slower decline in bolometric luminosity, our BH-NS KN model becomes comparably bright at 4 d, and brighter by a factor of 5 at 10 d. This result clearly shows that a BH-NS binary of the kind studied in this work is unlikely to be the progenitor of AT2017gfo.

Figure 8 shows the isotropically equivalent bolometric light curves observed from various viewing angles for the present model. Focusing on an observer in the polar direction with θ ≤ 28°, the KN emission is brightest in b): 135° ≤ φ < 180°. This direction approximately matches the longitudinal direction in which the dynamical ejecta have most of their mass (see Figures 2 and 3). On the other hand, the faintest emission is observed from the direction in which the dynamical ejecta are least present ('d)': 315° ≤ φ < 360°). Nevertheless, the variation in the bolometric luminosity is not large, always remaining within 40%. This is reasonable because observers with different longitudinal angles are in similar directions for the polar view θ ≤ 28°.

The longitudinal variation is larger for the equatorial view (82° ≤ θ ≤ 90°). In this case, the KN emission is again brightest in b): 135° ≤ φ < 180°, the direction in which the dynamical ejecta are most present. By contrast, the bolometric luminosity is faintest in a): 45° ≤ φ < 90°. This is because, in this direction, the relatively thin part of the dynamical ejecta present at R ≳ 0.3 ct around the equatorial plane (see Figure 3) suppresses radiation from the ejecta center (such suppression is not significant from the direction of c): 225° ≤ φ < 270°, due to the absence of the low-density ejecta beyond R ≈ 0.3 ct). Meanwhile, the emission from the post-merger ejecta enhances the luminosity for this view (82° ≤ θ ≤ 90°) in the direction in which the dynamical ejecta are least present ('d)': 315° ≤ φ < 360°). The variation in the bolometric luminosity is larger for the equatorial view than for the polar view, exceeding a factor of 2 for 1–7 d.

The latitudinal viewing-angle dependence of the bolometric luminosity is not significant, and the variation is always less than a factor of 2 in our present model, except for the equatorial view in a): 45° ≤ φ < 90°. The dependence of the KN brightness on the latitudinal direction is particularly weak from the viewing angle of b): 135° ≤ φ < 180°, the direction in which the KN emission is brightest. As we show below, the latitudinal viewing-angle dependence is much weaker than that for BNS KNe. This is due to the fact that, for the present BH-NS KN model, the emission is dominated by the dynamical ejecta, whose aspect ratio is close to unity. In fact, compared with a previous study (Tanaka et al. 2014), our present BH-NS KN model shows a less significant viewing-angle dependence in the latitudinal direction, because the ejecta in this study have a larger aspect ratio than those in the models of that study. This comparison indicates that the modification of the ejecta morphology by radioactive heating has a great impact on the viewing-angle dependence of the KN emission; hence, this work demonstrates the importance of modeling KN light curves taking the long-term evolution of the ejecta into account.
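For context, an "isotropically equivalent" luminosity per viewing-angle bin is commonly constructed in Monte Carlo radiative transfer by collecting the energy of escaping photon packets by direction and renormalizing by the bin solid angle. The sketch below shows this generic construction under assumed array names; the exact binning of the simulation used here may differ, so treat it as illustrative only:

```python
import numpy as np

def l_iso(packet_energy, packet_theta, packet_phi, dt,
          theta_edges, phi_edges):
    """L_iso[i_theta, i_phi] in erg/s from packets escaping within dt [s].

    packet_theta/phi : escape directions [rad]; theta measured from the pole.
    """
    E, _, _ = np.histogram2d(packet_theta, packet_phi,
                             bins=[theta_edges, phi_edges],
                             weights=packet_energy)
    # solid angle of each (theta, phi) bin
    omega = np.outer(np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:]),
                     np.diff(phi_edges))
    return (4.0 * np.pi / omega) * E / dt
```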
Interestingly, the latitudinal viewing-angle dependence is still present in d): 315° ≤ φ < 360° even after 20 d, at which point the total bolometric luminosity has converged to the total deposition rate (see Figure 7). This can be understood from the fact that the post-merger ejecta lie between the high-density part of the dynamical ejecta and the observer in the direction of d): 315° ≤ φ < 360° (see Figures 2 and 3). While the post-merger ejecta make a minor contribution to the luminosity at late epochs due to their relatively low heating rate, they still act as an opacity source (because of their relatively high density), preventing photons emitted in the high-density part of the dynamical ejecta from diffusing out toward d): 315° ≤ φ < 360°. Although not quantitatively significant, this long-lasting viewing-angle dependence due to the non-axisymmetric geometry of the ejecta might affect estimates of the total deposition rate in the ejecta from late-time observations.

Figure 3. Rest-mass density profiles on the meridional planes at t = 0.1 d. The top left, top right, bottom left, and bottom right panels denote the profiles on the φ ≈ 66°, 156°, 246°, and 336° planes, respectively (see also the left panel of Figure 2 for the location of each plane). R denotes the cylindrical radius.

5.2 Broad-band magnitudes

Figure 9 shows the optical (g- and z-band) and near-infrared (K-band) light curves observed from various viewing angles. As is the case for the bolometric luminosity, for the polar view (θ ≤ 28°) the gzK-band emission is brightest in b): 135° ≤ φ < 180° and faintest in d): 315° ≤ φ ≤ 360°, the directions in which the dynamical ejecta are most and least present, respectively. The viewing-angle dependence of the emission is weak for the polar view, with the variation always within 0.5 mag around the peak magnitudes.

For the equatorial view (θ ≥ 28°), the gzK-band emission is brightest in b): 135° ≤ φ < 180°, while the emission in a): 45° ≤ φ ≤ 90° is faintest. The longitudinal viewing-angle dependence of the emission is significant in the g and z bands, always exceeding 1 mag among different longitudinal directions. The variation in the K-band magnitude among different longitudinal directions is relatively small compared to that in the g and z bands, always remaining within approximately 1 mag among all viewing angles.
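The apparent magnitudes quoted at 40 Mpc follow from the standard distance modulus, m = M + 5 log10(d / 10 pc), which also rescales the light curves to other distances. A short sketch (the K-band absolute magnitude of −15.5 is an illustrative number, not a value from the paper):

```python
import numpy as np

def apparent_mag(abs_mag, d_mpc):
    """AB apparent magnitude at distance d_mpc via the distance modulus."""
    return abs_mag + 5.0 * np.log10(d_mpc * 1.0e6 / 10.0)

print(apparent_mag(-15.5, 40.0))   # ~17.5 mag at 40 Mpc
print(apparent_mag(-15.5, 150.0))  # ~20.4 mag at 150 Mpc: dims by 5*log10(150/40) ~ 2.9 mag
```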
The g-band emission is fainter and declines faster than AT2017gfo even when observed from the brightest direction ('b)': 135° ≤ φ < 180°). The peak brightness in the z band is comparable to the observation of AT2017gfo, but it declines much faster. In contrast, the K-band emission is comparable to that of AT2017gfo for the first few days and then becomes brighter after 4 d. The K-band magnitude finally reaches its peak at ≈ 10 d after the onset of the merger, with emission brighter than AT2017gfo by more than 1 mag.

Interestingly, the gz-band emission observed from b): 135° ≤ φ < 180° becomes slightly brighter in the equatorial direction than in the polar direction after 1 d for the present BH-NS model. This latitudinal brightness dependence is opposite to that of BNS KN models, for which the emission becomes brighter in the polar direction.² The same latitudinal-angle dependence is also found for the emission observed in the direction of the ejecta bulk motion for the model in Darbha et al. (2021) whose radioactive heating rate agrees with that of our model (model H4). The brighter emission in the equatorial plane is explained by the enhancement of the radiation energy flux due to the Doppler effect induced by the bulk motion of the ejecta. An aspect ratio of the dynamical ejecta close to unity is also important for realizing this latitudinal-angle dependence of the brightness: otherwise, the Doppler effect can be obscured by the suppression of the emission due to the decrease in the projected area toward the observer in the case that the dynamical ejecta have a more oblate shape (see Darbha et al. (2021) for the discussion).

² However, it should be noted that here we do not consider the impact that the short GRB jet might have on the polar ejecta and on the KN emission (see Hamidani et al. (2024)).

Figure 5. Rest-mass density and electron fraction (Ye) profiles on the equatorial plane at t = 0.1 d for the HD simulation in which radioactive heating is turned off.

5.3 Radiative-transfer effect of the non-axisymmetric ejecta geometry

To clarify the RT effect of the non-axisymmetric ejecta geometry, we perform an RT simulation for an axisymmetrized ejecta profile. The axisymmetrized ejecta profile is generated by averaging the rest-mass density, specific internal energy, elemental abundances, and radioactive heating-rate profiles obtained by the HD simulation at t = 0.1 d over the longitudinal direction. Note that the volume and mass in each grid cell are used as the averaging weights for the rest-mass density and for the latter three quantities, respectively.
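A minimal sketch of this longitudinal average, assuming the 3D profiles are stored as (R, z, φ) arrays (a hypothetical layout, not the authors' data structure):

```python
import numpy as np

def axisymmetrize(rho, q, vol):
    """Average over the longitudinal (last) axis.

    rho, q, vol : arrays of shape (n_R, n_z, n_phi); q stands for any of the
    specific internal energy, elemental abundances, or heating rate.
    Returns the volume-weighted <rho> and mass-weighted <q> on the (R, z) grid.
    """
    mass = rho * vol
    rho_2d = np.sum(rho * vol, axis=-1) / np.sum(vol, axis=-1)  # volume weight
    q_2d = np.sum(q * mass, axis=-1) / np.sum(mass, axis=-1)    # mass weight
    return rho_2d, q_2d
```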
Figure 10 compares the isotropically equivalent bolometric luminosities and gzK-band light curves observed from the polar (0° ≤ θ ≤ 28°) and equatorial (82° ≤ θ ≤ 90°) directions between the axisymmetrized and fiducial models (the latter being the same as in Figures 8 and 9). For the fiducial model, the light curves observed from b): 135° ≤ φ < 180°, d): 315° ≤ φ < 360° (for the polar view), and a): 45° ≤ φ < 90° (for the equatorial view) are shown.

The upper panels of Figure 10 show that the KN emission observed from the polar direction becomes slightly brighter for the axisymmetrized model than for the fiducial model, except for the g-band emission. This reflects the fact that the area of the dynamical ejecta projected toward the observer increases for the axisymmetrized model due to the longitudinal average. The bolometric light curve declines slightly earlier than for the fiducial model because the optical depth decreases with the reduced rest-mass density of the dynamical ejecta in the axisymmetrized model. Nevertheless, the effect of the longitudinal average is minor for the polar view, particularly in the gzK-band magnitudes, for which the differences between the axisymmetrized and fiducial models are always smaller than 0.5 mag for t ≥ 0.5 d.

The difference in brightness between the axisymmetrized and fiducial models is more pronounced for the emission observed from the equatorial direction. For the equatorial view, the bolometric luminosity observed from the longitudinal direction of the brightest region ('b)': 135° ≤ φ < 180°) is brighter by a factor of approximately 1.5–2 for the fiducial model than for the axisymmetrized model. The gzK-band magnitudes observed from the same direction are also brighter than those of the axisymmetrized model by ∼ 1 mag. On the other hand, the KN brightness observed from the longitudinal direction of the faintest region ('a)': 45° ≤ φ < 90°) is comparable to or slightly fainter for the fiducial model than for the axisymmetrized model. This discrepancy in the equatorial brightness between the fiducial and axisymmetrized models is due to the difference in the ejecta aspect ratio: as a consequence of the longitudinal average, the polar projected area of the ejecta is larger than the equatorial one for the axisymmetrized model, which makes photons preferentially diffuse in the polar direction and thus renders the equatorial brightness fainter.

Figure 6. Mass-weighted average of the total specific radioactive heating rate [erg/s/g] in our HD simulation as a function of t [s]. The specific heating rate of model H4 in Darbha et al. (2021) is also shown.

Figure 7. Total bolometric luminosity and total energy deposition rate for model Q4B5H. The isotropically equivalent bolometric luminosity observed in AT2017gfo at a distance of 40 Mpc is shown by the filled circles, adopting the data in Waxman et al. (2018), which assume a black-body fit to the photometric observations. Note that the bolometric light curve before 1 d is hidden, since it is not reliable due to the lack of opacity data in the high-temperature regime (≳ 20,000 K).

5.4 Comparison with various BNS KN models

Figure 11 compares the gzK-band light curves of the present BH-NS KN model with those of various BNS KN models obtained in our previous studies (Kawaguchi et al. 2021, 2022, 2023).
For the BNS KN models, three representative cases are shown: a case in which the remnant massive NS (MNS) survives for a short time (dashed curves; SFHo-125145, Kiuchi et al. (2022); Fujibayashi et al. (2023); Kawaguchi et al. (2023)), a case in which the remnant MNS survives for a long time (dash-dotted curves; DD2-135135, Fujibayashi et al. (2020b); Kawaguchi et al. (2022)), and a case in which a large-scale magnetic field plays a significant role in the long-surviving remnant MNS (dotted curves; MNS75a, Shibata et al. (2021); Kawaguchi et al. (2022)). We note that the BNS KN models are obtained by imposing axisymmetry in all the post-merger NR simulations, subsequent HD simulations, and RT simulations. For the BH-NS model, we show the light curves observed from b): 135° ≤ φ < 180° and d): 315° ≤ φ < 360° (for the polar view), and a): 45° ≤ φ < 90° (for the equatorial view), which represent the longitudinal directions of the brightest and faintest emission, respectively.

The gz-band emission observed from the polar direction (0° ≤ θ ≤ 20°) for the present BH-NS KN model is 0.5–1 mag brighter than that of the BNS model in which the remnant MNS survives only for a short time (< 10 ms, SFHo-125145), but is ≈ 1 mag fainter than that of the BNS models in which the remnant MNSs survive for a long time (> 1 s, DD2-135135 and MNS75a). On the other hand, the gz-band emission observed from the equatorial direction (86° ≤ θ ≤ 90°) is comparably bright or brighter than that of the BNS models in which the remnant MNSs survive for a long time, except for the z-band emission of model MNS75a. This is because the BNS KN models show a stronger latitudinal viewing-angle dependence than the BH-NS KN model and become significantly fainter in the equatorial view. The difference in the latitudinal viewing-angle dependence reflects the fact that the dynamical ejecta are the primary source of the emission at optical wavelengths for the BH-NS model, while for the BNS models the post-merger ejecta are the main emission source and the dynamical ejecta act mostly as an opacity source rather than an emission source (the lanthanide-curtain effect; Kasen et al. 2015; Kawaguchi et al. 2018; Kawaguchi et al. 2020; Bulla 2019; Zhu et al. 2020; Darbha & Kasen 2020; Korobkin et al. 2021). We note that, in the BNS cases, the enhancement of the brightness due to the Doppler effect is obscured by the latitudinal-angle dependence of the emission induced by the angle-dependent opacity of the dynamical ejecta.

The K-band emission of the present BH-NS KN model has a peak brightness comparable to that of the BNS models without a significant large-scale magnetic-field effect in the remnant NS (SFHo-125145 and DD2-135135). However, only the BH-NS model maintains the K-band brightness within 1 mag of its peak for a two-week period. The BNS models in which the remnant MNSs survive for short and long periods of time become fainter than the BH-NS model after 1–2 d and 5–7 d, respectively. The BNS model in which a large-scale magnetic field plays a significant role in the remnant NS shows bright K-band emission for a week, but the brightness declines much faster than for the BH-NS model.
Hence, the observation of a KN with long-lasting near-infrared emission that remains bright for more than two weeks would indicate that the progenitor is a BH-NS merger with massive ejecta (in particular, dynamical ejecta) formation.

Figure 8. Isotropically equivalent bolometric luminosities observed from various viewing angles for model Q4B5H. The top panels compare the results for different longitudinal directions, while the middle and bottom panels compare the results for different latitudinal directions. The isotropically equivalent bolometric luminosity observed in AT2017gfo at a distance of 40 Mpc is also shown by the filled circles, adopting the data in Waxman et al. (2018), which assume a black-body fit to the photometric observations. Note that the light curves before 1 d are hidden, since they are not reliable due to the lack of opacity data in the high-temperature regime (≳ 20,000 K).
Figure 9. gzK-band light curves for model Q4B5H observed from various viewing angles at a distance of 40 Mpc. The top panels compare the results for different longitudinal directions, while the middle and bottom panels compare the results for different latitudinal directions. The data points denote the AB magnitudes of AT2017gfo taken from Villar et al. (2017).

Figure 10. Comparison of the isotropically equivalent bolometric luminosities (left) and gzK-band light curves (right) between the fiducial model (the same as in Figures 8 and 9) and the axisymmetrized model (labeled "2D model"). The top and bottom panels denote the light curves observed from 0° ≤ θ ≤ 28° and 82° ≤ θ ≤ 90°, respectively. The solid and dashed curves denote the light curves of the axisymmetrized model and those of the fiducial model observed from b): 135° ≤ φ < 180°, respectively. The dotted curves in the upper and bottom panels denote the light curves of the fiducial model observed from d): 315° ≤ φ < 360° and a): 45° ≤ φ < 90°, respectively.

6 SUMMARY AND DISCUSSIONS

In this paper, we studied the long-term evolution of the matter ejected in a BH-NS merger by employing the results of an NR simulation and a nucleosynthesis calculation in which both the dynamical and post-merger ejecta are followed consistently. In particular, we employed the results for the merger of a 1.35 M⊙ NS and a 5.4 M⊙ BH with a dimensionless spin of 0.75. We confirmed the finding of previous studies that thermal pressure induced by radioactive heating in the ejecta can significantly modify the ejecta morphology. In the BH-NS case studied here, the dynamical ejecta expand significantly in the presence of radioactive heating, their aspect ratio becomes close to unity, and their fine structure is smeared out. On the other hand, the post-merger ejecta are compressed and confined to the region with radial velocity ≲ 0.05 c due to the significant expansion of the dynamical component.

We then computed the KN light curves employing the ejecta profile obtained by the HD simulation of the ejecta matter. We found that our present BH-NS model results in KN light curves that are fainter but longer lasting than those observed in AT2017gfo, reflecting the fact that the emission is primarily powered by the lanthanide-rich massive dynamical ejecta. The optical-band emission is comparable to or fainter than that of the various BNS models obtained in our previous studies. While the peak brightness of the near-infrared emission is also comparable to that of the BNS models, the time scale over which the brightness is maintained is much longer: emission within 1 mag of the peak brightness is sustained for more than two weeks for the BH-NS model.
Wide-field infrared observations with ground-based telescopes, such as VISTA (Ackley et al. 2020), WINTER (Frostig et al. 2022), and PRIME (Kondo et al. 2023), can detect such bright infrared KN emission up to ≈ 14 d if the distance to the event is within 150 Mpc, since the K-band emission will be apparently brighter than 21 mag for all viewing angles. However, the fields of view of infrared telescopes are typically not as large as those of optical telescopes of a given sensitivity (Nissanke et al. 2013). Therefore, a tight constraint on the localization area from the GW data analysis, or a follow-up observation within ≈ 1 d in the optical bands, is crucial for detecting the KN emission unless the event occurs as close as AT2017gfo. Once a KN with long-lasting near-infrared emission is found, follow-up observations in the radio band may also be useful to support the presence of massive dynamical ejecta, by finding the synchrotron radio flares with a relatively delayed peak time of ∼ 10 yr (Kyutoku et al. 2013).

Figure 11. Comparison of the bolometric and gzK-band light curves between the present BH-NS KN model and various BNS KN models (absolute and apparent AB magnitudes at 40 Mpc as functions of time). For the BNS KN models, three cases are shown: a case in which the remnant MNS survives for a short time (dashed curves; SFHo-125145, Kiuchi et al. (2022); Fujibayashi et al. (2023); Kawaguchi et al. (2023)), a case in which the remnant MNS survives for a long time (dash-dotted curves; DD2-135135, Fujibayashi et al. (2020b); Kawaguchi et al. (2022)), and a case in which a large-scale magnetic field plays a significant role in the long-surviving remnant MNS (dotted curves; MNS75a, Shibata et al. (2021); Kawaguchi et al. (2022)). Note that the light curve for model SFHo-125145 in the top-right panel is below the plot range.

We found that the non-axisymmetric geometry of the ejecta induces various interesting radiative-transfer effects in the viewing-angle dependence of the KN emission. In particular, we found that the Doppler effect of the bulk ejecta velocity on the emission, pointed out by Fernández et al. (2017) and Darbha et al. (2021), is in fact present. Due to this effect, the optical light curves observed from the direction of the bulk ejecta motion show a latitudinal-angle dependence slightly inverted with respect to that found in the BNS models: the optical-band emission observed from b): 135° ≤ φ < 180° becomes slightly brighter in the equatorial direction than in the polar direction for the present BH-NS model. Since the KN emission of BNS mergers becomes fainter in the equatorial direction than in the polar direction, our results suggest that, for the edge-on view, the KN emission of BH-NS mergers can be brighter in the optical bands than that of BNS mergers.
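The size of this Doppler-induced brightening can be gauged with the special-relativistic Doppler factor δ = 1/[Γ(1 − β cos θ_obs)]; for a blackbody-like emitting patch the observed bolometric flux scales roughly as δ⁴. This is a generic textbook estimate with illustrative numbers, not a quantity extracted from the RT simulation:

```python
import numpy as np

def doppler_factor(beta, theta_obs_deg):
    """delta = 1 / (Gamma * (1 - beta*cos(theta))) for bulk speed beta*c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.radians(theta_obs_deg))))

# Bulk motion at 0.2c viewed along the motion vs. perpendicular to it:
for th in (0.0, 90.0):
    d = doppler_factor(0.2, th)
    print(f"theta_obs={th:5.1f} deg: delta={d:.3f}, flux boost ~ delta^4 = {d**4:.2f}")
```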
Our results indicate that long-lasting near-infrared emission is the key to distinguishing the type of progenitor from KN observations. If K-band emission with brightness comparable to its peak is maintained for more than two weeks, it may indicate that the progenitor is a BH-NS merger with massive ejecta formation. This is consistent with our finding in the previous study (Kawaguchi et al. 2020). On the other hand, from the optical emission alone, BH-NS KN light curves can be similar to those associated with BNS mergers, and hence it may be difficult to infer the nature of the progenitor.

We should note that the ejecta mass, and hence the KN brightness, of BH-NS mergers can vary widely depending on the binary parameters, such as the BH and NS masses, BH spin, and NS radius (Rosswog 2005; Shibata & Taniguchi 2008; Etienne et al. 2009; Lovelace et al. 2013; Kyutoku et al. 2015; Foucart et al. 2018), as well as on the adopted EOS (Hayashi et al. 2023). We also note that the assumption of LTE employed in our RT simulation will not be valid in regions where the rest-mass density has dropped significantly. Hotokezaka et al. (2021) and Pognan et al. (2022) suggest that the matter temperature can be higher than that estimated under the assumption of LTE if non-LTE effects take place. In such a case, the infrared emission can be dimmer, in combination with the suppression of neutral and weakly ionized atoms (Kawaguchi et al. 2021, 2022, 2023). Hence, the absence of long-lasting bright near-infrared emission does not necessarily rule out the possibility that the progenitor of an observed KN is a BH-NS merger.

So far, four candidate BH-NS GW events have been reported: GW190814 (Abbott et al. 2020), GW200105 and GW200115 (Abbott et al. 2021), and GW230529 (The LIGO Scientific Collaboration et al. 2024a). Among them, according to the inferred masses and spins of the binary, the latest event, GW230529, was the most likely to be accompanied by EM counterparts. Unfortunately, no EM counterpart was found for GW230529 due to the poorly constrained sky localization, although the luminosity distance to the event was relatively close (201 (+102/−96) Mpc, with the error bars denoting the 90% credible intervals). Nevertheless, the discovery of this system increases the expected rate of future GW detections of BH-NS mergers with EM counterparts. For this event (under the assumption that it was a BH-NS merger), the ratio of the BH mass to the NS mass was less than ≈ 4 and the dimensionless BH spin was larger than ≈ 0.
For such a BH-NS merger, the ratio of the post-merger ejecta mass to the dynamical ejecta mass can be larger than for the BH-NS model studied in this paper (Hayashi et al. 2021). This indicates that the resulting KN may be bluer than the present result, although this is not trivial, since the long-term hydrodynamical evolution of the ejecta may also differ. Hence, a systematic study of the KNe from various configurations of BH-NS binaries will be crucial to quantitatively interpret EM observational data in the future.

A number of KN candidates have been reported in association with the observation of GRBs: GRB050709 (Jin et al. 2016), GRB060614 (Jin et al. 2015; Yang et al. 2015), GRB130603B (Berger et al. 2013; Tanvir et al. 2013), GRB160821B (Lamb et al. 2019; Troja et al. 2019), GRB211211A (Rastinejad et al. 2022; Troja et al. 2022; Gompertz et al. 2023), and GRB230307A (Levan et al. 2024). In Figure 12, we compare our present BH-NS KN model with these observational data. The optical and near-infrared brightness of the KN candidates found in GRB211211A and GRB230307A are comparable to that of AT2017gfo. We find that our present BH-NS KN model is too faint to explain the optical brightness of these KN candidates at a few days, while the K-band emission of our model after 4 d is too bright to be consistent with the later-time upper limits. Our present BH-NS KN model is also too faint in the optical bands to explain the KN candidates found in GRB050709, GRB060614, and GRB160821B after ≈ 2 d. The K-band brightness of GRB160821B at 4.3 d is comparable to that of our present BH-NS KN model, although the BNS model in which the remnant MNS survives for a long time (> 1 s, DD2-135135 in Figure 11) also has comparable K-band brightness at that epoch. Interestingly, despite the bright and long-lasting K-band emission, our present BH-NS model has fainter H-band emission at ∼ 10 d than that observed in GRB130603B. This is because the KN of our present BH-NS model is very red and the peak of the red-shifted spectrum is located at wavelengths longer than the H band at that epoch. In summary, we currently do not find a KN candidate that can only be explained by our present BH-NS model with significant dynamical ejecta formation. However, as mentioned above, we cannot rule out the possibility that some of these KN candidates are KNe associated with BH-NS mergers, since BH-NS KNe can have a large diversity reflecting the variety of binary parameters as well as the adopted EOS.

We found more than a factor of 2 variation in the KN brightness depending on the viewing angle in our present BH-NS model. Such a variation in the brightness can induce a systematic error of the same order when estimating the ejecta properties, such as the mass, velocity, and effective ejecta opacity, from observational data. We should also note that we focus on only a single case of a BH-NS merger with the DD2 EOS, and it is not clear whether the KN properties are always the same for other BH-NS merger setups. For example, if the longitudinal opening angle of the ejecta is close to 2π, the BH-NS KN can have a viewing-angle dependence in brightness comparably strong to that of BNS mergers, as indeed seen in the results of the axisymmetrized model (see Fig. 10).
We should further note that unquantified systematic errors in the opacity and heating rate can induce large systematic errors in the ejecta-parameter inference. In particular, the latter can be severe for KNe from BH-NS mergers, since the uncertainty is more significant for ejecta with low values of Ye (< 0.24; see Barnes et al. (2021); Zhu et al. (2021)). Hence, it is essential to bear in mind that these systematic errors can significantly alter the results of the ejecta-parameter inference, and the estimated values should be used with great caution.

Figure 12. Comparison between the present BH-NS KN model and GRB KN candidates. The solid and dashed curves denote the polar light curves in the observer frame (0° ≤ θ ≤ 28°) for the present BH-NS KN model observed from b): 135° ≤ φ < 180° and d): 315° ≤ φ < 360°, respectively. The square and triangle symbols denote, respectively, the observed magnitudes and upper limits of the optical and near-infrared counterparts of GRBs, taken from Jin et al. (2016, 2015); Yang et al. (2015); Berger et al. (2013); Tanvir et al. (2013); Lamb et al. (2019); Troja et al. (2019); Rastinejad et al. (2022); Troja et al. (2022); Gompertz et al. (2023); Levan et al. (2024). Afterglow models that broadly reproduce those in the literature are also plotted as dotted curves.

ACKNOWLEDGEMENTS

Numerical computation was performed on Yukawa21 at the Yukawa Institute for Theoretical Physics, Kyoto University, on the Yamazaki, Sakura, Cobra, and Raven clusters at the Max Planck Computing and Data Facility, and on the Cray XC50 at CfCA of the National Astronomical Observatory of Japan. ND acknowledges support from the Graduate Program on Physics for the Universe (GP-PU) at Tohoku University. This work was supported by Grants-in-Aid for Scientific Research of JSPS/MEXT (20H00158, 21H04997, 22KJ0317, 23H00127, 23H04894, 23H04900, 23H05432, and 23H01172) and by the JST FOREST Program (JPMJFR212Y).

DATA AVAILABILITY

Data and results underlying this article will be shared on reasonable request to the corresponding author.