| { |
| "url": "http://arxiv.org/abs/2404.16668v1", |
| "title": "The First Estimation of the Ambipolar Diffusivity Coefficient from Multi-Scale Observations of the Class 0/I Protostar, HOPS-370", |
| "abstract": "Protostars are born in magnetized environments. As a consequence, the\nformation of protostellar disks can be suppressed by the magnetic field\nefficiently removing angular momentum of the infalling material. Non-ideal MHD\neffects are proposed to as one way to allow protostellar disks to form. Thus,\nit is important to understand their contributions in observations of\nprotostellar systems. We derive an analytical equation to estimate the\nambipolar diffusivity coefficient at the edge of the protostellar disk in the\nClass 0/I protostar, HOPS-370, for the first time, under the assumption that\nthe disk radius is set by ambipolar diffusion. Using previous results of the\nprotostellar mass, disk mass, disk radius, density and temperature profiles and\nmagnetic field strength, we estimate the ambipolar diffusivity coefficient to\nbe $1.7^{+1.5}_{-1.4}\\times10^{19}\\,\\mathrm{cm^{2}\\,s^{-1}}$. We quantify the\ncontribution of ambipolar diffusion by estimating its dimensionless\nEls\\\"{a}sser number to be $\\sim1.7^{+1.0}_{-1.0}$, indicating its dynamical\nimportance in this region. We compare to chemical calculations of the ambipolar\ndiffusivity coefficient using the Non-Ideal magnetohydrodynamics Coefficients\nand Ionisation Library (NICIL), which is consistent with our results. In\naddition, we compare our derived ambipolar diffusivity coefficient to the\ndiffusivity coefficients for Ohmic dissipation and the Hall effect, and find\nambipolar diffusion is dominant in our density regime. These results\ndemonstrate a new methodology to understand non-ideal MHD effects in\nobservations of protostellar disks. More detailed modeling of the magnetic\nfield, envelope and microphysics, along with a larger sample of protostellar\nsystems is needed to further understand the contributions of non-ideal MHD.", |
| "authors": "Travis J. Thieme, Shih-Ping Lai, Yueh-Ning Lee, Sheng-Jun Lin, Hsi-Wei Yen", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "astro-ph.SR", |
| "cats": [ |
| "astro-ph.SR", |
| "astro-ph.GA" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "The First Estimation of the Ambipolar Diffusivity Coefficient from Multi-Scale Observations of the Class 0/I Protostar, HOPS-370", |
| "main_content": "INTRODUCTION It has long been known that magnetic fields (B-fields) play a critical role in regulating the formation of protostellar disks around low-mass protostars (e.g., Mestel & Spitzer 1956). Molecular cloud cores are observed to be strongly magnetized, with normalized mass-to-flux ratios of \u00b5 \u223c2 \u221210 (e.g., Crutcher 1999; Troland & Corresponding author: Travis J. Thieme tjthieme@asiaa.sinica.edu.tw Crutcher 2008; Crutcher 2012). Early ideal magnetohydrodynamic (MHD) simulations show that rotationally supported disks (RSDs) could not form due to magnetic braking efficiently transferring angular momentum away from the collapsing central region in magnetized (\u00b5 \u226410) dense cores (e.g., Allen et al. 2003; Matsumoto & Tomisaka 2004; Banerjee & Pudritz 2006; Price & Bate 2007; Hennebelle & Fromang 2008; Mellon & Li 2008; Joos et al. 2012). However, observational studies revealed the presence of rotationally-supported Keplerian disks around several young, highly-embedded arXiv:2404.16668v1 [astro-ph.SR] 25 Apr 2024 \f2 Thieme et al. protostars (e.g., Tobin et al. 2012; Murillo et al. 2013; Lee et al. 2014; Yen et al. 2017; Ohashi et al. 2023). This contradiction between observations and simulations was coined the so-called \u201cMagnetic Braking Catastrophe\u201d and raised the fundamental question of how could these protostellar disks form in such magnetized environments? Non-ideal MHD effects, namely ambipolar diffusion (AD), Ohmic dissipation (OD) and the Hall effect (HE), have been suggested as one possible route to overcome magnetic braking and form a rotationally-supported protostellar disk (e.g., Inutsuka et al. 2010; Li et al. 2011; Braiding & Wardle 2012; Tomida et al. 2015; Wurster et al. 2016, 2019; Wurster & Lewis 2020; Wurster et al. 2021). These non-ideal MHD terms describe the various regimes of coupling of the ions, electrons and charged grains to the magnetic field, as well as their interactions with the neutral particles (e.g., Wardle & Ng 1999; Nakano et al. 2002, see the recent reviews by Wurster & Li 2018; Zhao et al. 2020b; Tsukamoto et al. 2023). In terms of relative importance, Ohmic dissipation is efficient at high densities, such as the midplane of a protostellar disk, while the Hall effect and ambipolar diffusion are more efficient at intermediate and low densities, respectively, such as the upper disk layers and in the protostellar envelope (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). However, the Hall effect seems to be transient does not last for long after the formation of a protostellar disk (Zhao et al. 2020b; Lee et al. 2021b). While simulations clearly show the importance of non-ideal MHD effects in the formation and evolution of protostellar disks, they have yet to be quantified observationally. Yen et al. (2018) attempted to observe the velocity drift between ions and neutral particles (ambipolar diffusion) in the infalling envelope of a young Class 0 protostar, B335. However, no velocity drift was detected and thus, it is important to look into other possibilities on how non-ideal MHD effects can be quantified observationally. In this paper, we aim to understand the role of ambipolar diffusion in protostellar disk formation by using a methodology first developed by Hennebelle et al. (2016), and later revisited by Lee et al. (2021b, 2024). 
This methodology leads to an analytical equation describing the expected protostellar properties, in particular the protostellar disk radius, due to ambipolar diffusion (Hennebelle et al. 2016). The disk radius estimated with this analytical equation (RAD) was found to be in good agreement with the disk radius estimated from MHD simulations (Rsim), with Rsim/RAD \u223c1 (Hennebelle et al. 2016, 2020; Commer\u00b8 con et al. 2022). Thus, by backwards engineering the equation, we can estimate the ambipolar diffusivity coefficient, \u03b7AD, from observable quantities under certain assumptions. Using multiscale observations of the young protostar, HOPS-370, we present a methodology to estimate the ambipolar diffusivity coefficient for the first time, in order to understand the role of ambipolar diffusion in the formation and evolution of protostellar disks. HOPS-370 is a Class 0/I protostar in the Orion A molecular cloud (D= 392.8 pc; Tobin et al. 2020a). Observations from the Herschel Orion Protostar Survey (HOPS) constrain the bolometric luminosity (Lbol) and temperature (Tbol) to be 314 L\u2299and 71.5 K, respectively (Furlan et al. 2016). The protostellar mass and disk properties were extensively studied by Tobin et al. (2020b) as part of the VLA/ALMA Nascent Disk and Multiplicity (VANDAM) Survey of Orion Protostars. By using MCMC radiative transfer modeling to fit the dust continuum and several molecular lines, they found an average disk radius of 94 au, an average protostellar mass of 2.5 M\u2299, and a disk mass of 0.035 M\u2299. More recently, Kao & Yen et al. (in prep.) have derived the core-scale plane-of-sky magnetic field strength to be Bpos = 0.51 mG. The combination of these derived properties make HOPS-370 an ideal candidate for an initial study on the role ambipolar diffusion plays in this source using this new methodology. This paper is organized as follows. In Section 2, we describe our methodology and assumptions to estimate \u03b7AD at the edge of the HOPS-370 protostellar disk. Our resulting value of \u03b7AD and a comparison with a more theoretical non-ideal MHD estimate is given in Section 3. Several implications and uncertainties are discussed in Section 4. Section 5 summarizes our main results and discussions. 2. METHODS 2.1. The Relation Between Protostellar Disk Properties and the Ambipolar Diffusivity Coefficient Here, we present an analytical equation that relates properties of the protostellar disk at the disk-envelope interface to the ambipolar diffusion coefficient. Hennebelle et al. (2016) were the first to derive such an equation, however they make a number of simplifications to remove terms related to the density and temperature, which differs from the modeling of HOPS-370. A derivation is provided in Appendix A, while a summary and overview is presented here. The main assumptions in this derivation are that 1. ambipolar diffusion is the main diffusion process, \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 3 2. the angular momentum is counteracted by magnetic braking resulting in the advection and braking timescales to be of the same order, 3. the toroidal field generated by differential rotation is offset by the ambipolar diffusion in the vertical direction resulting in the Faraday induction and vertical diffusion timescales to be of the same order, 4. infalling and rotational velocities of the gas near the disk edge both scale with the Keplerian velocity, and 5. the gas is in vertical hydrostatic equilibrium. 
These assumptions are likely valid in HOPS-370, as discussed in Section 4.1. Under these assumptions, we derive a relationship between the ambipolar diffusivity coefficient and observable quantities of
$$\eta_{\rm AD} \simeq \frac{\delta_r \delta_\phi^{2}\, G^{1/2} C_s^{2} R_d^{1/2} (M_\star + M_d)^{1/2}\, \rho}{B_\phi^{2}}, \qquad (1)$$
where $\delta_r$ and $\delta_\phi$ are scaling factors for the infall and rotational velocities, $G$ is the gravitational constant, $C_s$ is the isothermal sound speed, $R_d$ is the disk radius, $M_\star + M_d$ is the mass of the star+disk system, $\rho$ is the density at the disk-envelope interface and $B_\phi$ is the toroidal (azimuthal) component of the magnetic field strength at the edge of the disk. As shown in Appendix B, the global magnetic field inclination with respect to the disk rotation axis has little effect on the predicted ambipolar diffusivity coefficient. Thus, this prescription should be considered generally valid regardless of the global magnetic field orientation. To simplify the use of our equation, we select several arbitrary normalization constants to give
$$\eta_{\rm AD} \simeq 2.5 \times 10^{17}\,\mathrm{cm^{2}\,s^{-1}}\, \left(\delta_r \delta_\phi^{2}\right) \left(\frac{C_s}{200\,\mathrm{m\,s^{-1}}}\right)^{2} \left(\frac{R_d}{100\,\mathrm{au}}\right)^{1/2} \left(\frac{M_\star + M_d}{0.1\,M_\odot}\right)^{1/2} \left(\frac{\rho_d}{1.8\times10^{-15}\,\mathrm{g\,cm^{-3}}}\right) \left(\frac{B_\phi}{20\,\mathrm{mG}}\right)^{-2}. \qquad (2)$$
In addition, a common normalization used in numerical simulations is to multiply by $4\pi/c^{2}$, which was used by Hennebelle et al. (2016) in their derivation to give $\eta_{\rm AD}$ in units of seconds. This normalization produces a relation of
$$\eta_{\rm AD} \simeq 0.0035\,\mathrm{s}\, \left(\delta_r \delta_\phi^{2}\right) \left(\frac{C_s}{200\,\mathrm{m\,s^{-1}}}\right)^{2} \left(\frac{R_d}{100\,\mathrm{au}}\right)^{1/2} \left(\frac{M_\star + M_d}{0.1\,M_\odot}\right)^{1/2} \left(\frac{\rho_d}{1.8\times10^{-15}\,\mathrm{g\,cm^{-3}}}\right) \left(\frac{B_\phi}{20\,\mathrm{mG}}\right)^{-2}, \qquad (3)$$
which will also be used in later comparisons. As shown by Hennebelle et al. (2016), Hennebelle et al. (2020) and Commerçon et al. (2022), the ratio of the disk radius measured in their numerical simulations ($R_{\rm sim}$) to the theoretical disk radius predicted by their ambipolar diffusivity equation ($R_{\rm AD}$) was $R_{\rm sim}/R_{\rm AD} \sim 1$ (within a factor of $\simeq 2-3$) and did not vary considerably over the evolution of the protostellar disks in their simulations. Since our main assumptions are essentially the same, this should still hold true even for our new relation. This will be explored in more detail in a future paper. In the next sections, we describe each of the variables used for our estimate of the ambipolar diffusivity coefficient at the edge of the HOPS-370 protostellar disk for the first time. This estimation is only possible due to the extensive modeling of HOPS-370 and its surrounding environment from several different observational studies.

2.2. Previously Estimated Protostar+Disk Properties

Tobin et al. (2020b) derived several important properties of the protostar and disk in HOPS-370. In this section, we describe their extensive molecular line modeling in the context of the relevant values needed for our ambipolar diffusivity coefficient estimation.

2.2.1. Protostellar Mass, Disk Mass and Disk Radius

The protostellar masses and disk radii are derived from 12 independent molecular line fits (with a fixed temperature power-law index) using MCMC radiative transfer fitting. They found the best-fitting protostellar mass to range between 1.8 $M_\odot$ and 3.6 $M_\odot$, with an average protostellar mass of $2.5\pm0.2\,M_\odot$.
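To make the normalized relation in Equation (2) concrete, the following is a minimal Python sketch of how it can be evaluated (a sketch for illustration, not the authors' pipeline; the function name and argument conventions are ours):

```python
def eta_AD_cm2_s(cs_ms, Rd_au, Mtot_Msun, rho_gcm3, Bphi_mG,
                 delta_r=1.0, delta_phi=1.0):
    """Normalized ambipolar diffusivity of Equation (2), in cm^2 s^-1.

    cs_ms     : isothermal sound speed at the disk edge [m/s]
    Rd_au     : disk radius [au]
    Mtot_Msun : star + disk mass [Msun]
    rho_gcm3  : density at the disk-envelope interface [g/cm^3]
    Bphi_mG   : toroidal field strength at the disk edge [mG]
    """
    return (2.5e17 * delta_r * delta_phi**2
            * (cs_ms / 200.0) ** 2
            * (Rd_au / 100.0) ** 0.5
            * (Mtot_Msun / 0.1) ** 0.5
            * (rho_gcm3 / 1.8e-15)
            * (Bphi_mG / 20.0) ** -2)

# With the HOPS-370 central values derived later in the paper:
# eta_AD_cm2_s(833.0, 94.4, 2.5 + 0.035, 3.7e-15, 28.3, delta_r=0.8)  -> ~1.7e19
```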
This protostellar mass is the dynamical mass obtained from the Keplerian profile in the line fits. For the disk radius, the best fits ranged between 70 au and 121 au, with an average radius of 94.4\u00b112.6 au. The uncertainties of these average values were determined by using the median-absolute deviation (MAD) of their 12 molecular line fits and scaling them to correspond to one standard deviation of the normal distribution. We adopt Rd = 94.4\u00b112.6 au and M\u22c6= 2.5\u00b10.2 M\u2299as the protostellar disk radius and protostellar mass, respectively. It is important to note, the Rd used for comparison to RAD \f4 Thieme et al. in the numerical simulations by Hennebelle et al. (2016) is defined by several conditions using an azimuthallyaveraged simulation snapshot: (1) the disk is Keplerian meaning the azimuthal velocity is much greater than the radial velocity, (2) the disk is near hydrostatic equilibrium meaning the azimuthal velocity is much greater than the vertical velocity, (3) the disk is rotationallysupported meaning the rotational energy is larger than the support from thermal pressure by some factor, (4) the disk should be near the equitorial plane, and (5) a density threshold of n > 109 cm\u22123 (Joos et al. 2012). We have assumed the best-fit gas disk radius is equal to this radius. This is further explored in Section 3.3.3. The radius of the dust disk also modeled by Tobin et al. (2020b), however the dust is potentially more prone to radial drift and/or optical depth effects (e.g., Facchini et al. 2017), thus potentially underestimating the actual extent of centrifugal support. Additionally, several methods were used by Tobin et al. (2020b) to derive the disk mass in HOPS-370. First, they used the continuum emission as 1.3 mm, 0.87 mm and 9 mm to derive a value for the disk mass under the assumptions of isothermal and optically thin dust emission. The disk mass at each wavelength was found to be 0.048 M\u2299at 0.87 mm, 0.084 M\u2299at 1.3 mm, and 0.098 M\u2299at 9 mm. They also derive a disk mass from their MCMC radiative transfer fitting of the 0.87 mm dust continuum emission. This method resulted in a disk mass of 0.035+0.005 \u22120.003 M\u2299, which is slightly lower than the earlier estimations using the optically thin assumption. The lower value is likely due to the maximum dust grain size fit of the 0.87 mm emission being 432 \u00b5m, meaning that the dust in the model will radiate more efficiently than under the assumptions made for the optically thin calculation. Thus, to be consistent with the dust grain properties later used in our analysis (Section 3.3), we take the disk mass to be Md = 0.035+0.005 \u22120.003 M\u2299for our estimation. It is important to mention that the uncertainty of the measured disk mass reported by Tobin et al. (2020b) are the 1\u03c3 statistical uncertainties from their MCMC radiative transfer fitting. Thus, these uncertainties likely do not reflect the entire uncertainty of the measured disk mass. Tobin et al. (2020b) also fit for the disk mass in their 12 molecular line fits. However these derived disk masses are highly sensitive to the chosen molecular abundances in the fit, and may not be as reliable. This further motivates our choice to use the best-fit disk mass estimated from the dust emission fitting. 2.2.2. 
The Temperature Distribution of the Disk The gas temperature distribution of the HOPS-370 protostellar disk is modeled using a parameterized equation given by Td(r) = T0 \u0010 r 1 au \u0011\u2212q , (4) where T0 is the gas temperature at 1 au and q is a power-law index, which is fixed to be 0.35 in the 12 molecular line fits by Tobin et al. (2020b). The bestfit average value of T0 was found to be 980.0 \u00b1 0.6 K, where the errors are also found using the median absolute deviation scaled to one standard deviation of the normal distribution. Using the protostellar disk radius of Rd = 94.4 \u00b1 12.6 au, we find the temperature at the edge of the disk to be Td = 199.0 \u00b1 9.3 K.1 With this gas temperature, the isothermal sound speed at the disk edge can be estimated by Cs = \u0012 kBTd \u00b5mmH \u00130.5 , (5) where kB is the Boltzmann constant, \u00b5m = 2.37 is the mean molecular weight for a molecular gas with solar metallicity, and mH is the mass of a hydrogen atom. The isothermal sound speed is estimated to be Cs = 833.0 \u00b1 19.5 m s\u22121, which is higher than the typically assumed value of 200 m s\u22121 (e.g., Lee et al. 2021b, 2024). 2.2.3. The Density at the Disk-Envelope Interface The density at the disk-envelope interface can be estimated via two different approaches. The first is by using the best fit values of the disk density profile, while the second is by using the best fit values of the envelope density profile, both modeled by Tobin et al. (2020b). We initially choose the former approach, since the focus of the study by Tobin et al. (2020b) was on the disk, and the observations taken likely resolve out most of the envelope emission. However, as a comparison, we do explore the latter in Appendix C. The disk density, which is related to the disk scale height and disk surface density, was modeled using the molecular line emission by Tobin et al. (2020b). The disk scale height (hd) is given by hd(r) = \u0012 kBr3Td(r) GM\u22c6\u00b5mmH \u00130.5 , (6) where M\u22c6is the protostellar mass. The disk surface density (\u03a3disk) is given by \u03a3d(r) = \u03a30 \u0012 r rc \u0013\u2212\u03b3 exp \" \u2212 \u0012 r rc \u0013(2\u2212\u03b3)# , (7) 1 Uncertainties were propagated using the publicly hosted python package: asymmetric uncertainty (Gobat 2022; https://github. com/cgobat/asymmetric uncertainty). This package uses an empirical/analytical function to model the error distributions. \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 5 where rc is the critical radius of the disk (rc = Rd was assumed in the molecular line fitting) and \u03b3 is the surface density power-law index. The normalization constant (\u03a30) is described by \u03a30 = (2 \u2212\u03b3)Md 2\u03c0r2 c , (8) where Md is the disk mass. The radiative transfer modeling of the 12 molecular line fits give an average value of the surface density power-law index to be \u03b3 = 0.9 \u00b1 0.2. Finally, the disk volume density (\u03c1d) is expressed as \u03c1d(r) = \u03a3d(r) \u221a 2\u03c0 hd(r) exp \u22121 2 \u0014 z hd(r) \u00152! , (9) where z is the height above the disk midplane and the other parameters are as described before. For simplicity, we approximate the density at the midplane (z = 0), which allows the exponential to go to 1 as the inner terms go to 0. We are left with a simplified equation of \u03c1d(r) = \u03a3d(r) \u221a 2\u03c0 hd(r), (10) where we can then plug in our known values to calculate the approximate density at the disk edge. 
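As an illustration of this plug-in (Equations 4-10), a minimal Python sketch using the central best-fit values is given below; the paper itself propagates asymmetric uncertainties with the asymmetric_uncertainty package, which is omitted here for brevity.

```python
import numpy as np

# cgs constants
kB, mH, G = 1.380649e-16, 1.6726e-24, 6.674e-8
au, Msun  = 1.496e13, 1.989e33
mu_m = 2.37                        # mean molecular weight

# Central best-fit values from Tobin et al. (2020b), as quoted in the text
T0, q   = 980.0, 0.35              # temperature at 1 au [K] and power-law index
Mstar   = 2.5 * Msun               # protostellar mass
Mdisk   = 0.035 * Msun             # disk mass
Rd = rc = 94.4 * au                # disk radius = critical radius
gamma   = 0.9                      # surface-density power-law index

r = Rd                             # evaluate everything at the disk edge

Td     = T0 * (r / au) ** (-q)                                               # Eq. (4)
cs     = np.sqrt(kB * Td / (mu_m * mH))                                      # Eq. (5)
hd     = np.sqrt(kB * r**3 * Td / (G * Mstar * mu_m * mH))                   # Eq. (6)
Sigma0 = (2.0 - gamma) * Mdisk / (2.0 * np.pi * rc**2)                       # Eq. (8)
Sigmad = Sigma0 * (r / rc) ** (-gamma) * np.exp(-(r / rc) ** (2.0 - gamma))  # Eq. (7)
rhod   = Sigmad / (np.sqrt(2.0 * np.pi) * hd)                                # Eq. (10), z = 0

print(f"Td      = {Td:6.1f} K")              # ~199 K
print(f"Cs      = {cs / 1e2:6.1f} m/s")      # ~833 m/s
print(f"hd      = {hd / au:6.1f} au")        # ~16 au
print(f"Sigma_d = {Sigmad:5.2f} g cm^-2")    # ~2.2
print(f"rho_d   = {rhod:9.2e} g cm^-3")      # ~3.7e-15
```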
By plugging in r = Rd = 94.4 \u00b1 12.6 au and the other parameters previously mentioned, we find a disk scale height of hd = 16.2 \u00b1 3.3 au, a disk surface density of \u03a3d = 2.2 \u00b1 0.9 g cm\u22122, and a disk volume density of \u03c1d = 3.7 \u00b1 1.7 \u00d7 10\u221215 g cm\u22123 at the edge of the disk. 2.3. Estimating the Magnetic Field Strength at the Edge of the Disk Yen et al. (2021a) originally estimated the corescale plane-of-sky magnetic field strength using 850 \u00b5m dust polarization legacy observations from the Submillimetre Common-User Bolometer Array (SCUBA) Polarimiter (SCUPOL) on the James Clerk Maxwell Telescope (JCMT). A magnetic field strength of Bpos = 0.54 \u00b1 0.25 mG is derived for HOPS-370 using the Davis-Chandrasekhar-Fermi (DCF) method (Davis 1951; Chandrasekhar & Fermi 1953). Updated observations have since been taken using the new SCUBA-2 detector and POL-2 polarimiter (Kao & Yen et al., in prep.), providing a new and more precise magnetic field strength estimate of Bpos = 0.50 \u00b1 0.13 mG for HOPS370. In addition to the magnetic field strength, the average core mass and core density were estimated to be Mc = 37.0 \u00b1 2.6 M\u2299and \u03c1c = 1.9 \u00b1 0.2 \u00d7 10\u221218 g cm\u22123, respectively, within a core radius of \u223c0.07 pc. This is the same radius in which the magnetic field strength was also estimated. In order to scale this magnetic field strength from the core-scale to the edge of the disk and obtain a value for B\u03d5, several assumptions need to be made. 2.3.1. The Magnetic Field Density Relation The general form of the most commonly cited magnetic field-density (B-n) relation is written as B = B0 \u0012 n n0 \u0013\u03ba , (11) where B0 is the initial magnetic field strength to be scaled, n and n0 are scaled and initial number densities, respectively, and \u03ba is the power-law index (Crutcher et al. 2010; Crutcher & Kemball 2019; Pattle et al. 2023). For clouds undergoing spherical collapse with flux-freezing, \u03ba is \u223c2/3 (Mestel 1966), while collapse models with ambipolar diffusion predict \u03ba evolves from 0 at the initial collapse to 0.5 in the later stages (Mouschovias & Ciolek 1999). Since our primary assumption is that ambipolar diffusion is the main diffusion process, and HOPS-370 is an evolved Class 0 protostar, we take \u03ba = 0.5. Thus, the total magnetic field strength can be scaled by Btot,d = Btot,c \u0012\u03c1d \u03c1c \u00130.5 , (12) where \u03c1c and \u03c1c are the volume densities at the core and disk scales, while Btot,c and Btot,d are the total magnetic field strengths at the core and disk scales, respectively (hereafter, referred to as the C04 method). 2.3.2. Magnetic Field Strength Scaling, Correction and Estimation In order to estimate the magnetic field strength at the edge of the disk (B\u03d5) as fairly as possible, we first convert the plane-of-sky magnetic field strength (Bpos,c) to the total magnetic field strength (Btot,c) using two different statistical relations for the sake of completeness. We first use the relation derived from a sample of observations (Crutcher et al. 2004), given as Btot = \u0012 4 \u03c0 \u0013 Bpos, (13) which gives a statistical average of the total magnetic field strength. Using this relation, we derive a total magnetic field strength of Btot,c = 0.64 \u00b1 0.16 mG. Additionally, Liu et al. 
(2021) derive the relation Btot = r 3 2Bpos, (14) using 3D MHD simulations and radiative transfer calculations to produce synthetic polarization images to find \f6 Thieme et al. Table 1. Overview of parameters used for the estimation of \u03b7AD Parameter Description Parameter Value Protostar + Disk Properties Protostellar mass M\u22c6(M\u2299) 2.5+0.2 \u22120.2 Disk mass Md (M\u2299) 0.035+0.005 \u22120.003 Disk radius Rd (au) 94.4+12.6 \u221212.6 Critical radius rc (au) = Rd Temperature at 1 au T0 (K) 980.0+0.6 \u22120.6 Temperature power-law index q 0.35 Surface density power-law index \u03b3 0.9+0.2 \u22120.2 Disk temperature at Rd Td (K) 199.0+9.3 \u22129.3 Disk sound speed at Rd Cs (m s\u22121) 833.0+19.5 \u221219.5 Disk scale height at Rd hd (au) 16.2+3.3 \u22123.3 Disk surface density at Rd \u03a3d (g cm\u22122) 2.2+0.9 \u22120.9 Disk volume density at Rd \u03c1d (10\u221215g cm\u22123) 3.7+1.7 \u22121.7 Protostellar Core Properties Core mass Mc (M\u2299) 37.0+2.6 \u22122.6 Core volume density \u03c1c (10\u221218g cm\u22123) 1.9+0.2 \u22120.2 Plane-of-sky B-field strength Bpos,c (mG) 0.5+0.1 \u22120.1 Core to Disk Scale B-Field Properties B-n relation power-law index \u03ba 0.5 Total Core B-field strength Btot,c (mG) 0.6+0.2 \u22120.2 Total Disk B-field strength Btot,d (mG) 28.3+9.9 \u22129.8 References\u2014Tobin et al. (2020b), Kao & Yen et al. (in prep.), this work. a statistical average of the total magnetic field strength. Using this relation, we derive a total magnetic field strength of Btot,c = 0.61 \u00b1 0.16 mG. These two values are within error, and thus, indistinguishable for our purpose. We therefore simply adopt the value using the statistical relation from Crutcher et al. (2004) for the remainder of this paper. We now scale our total core-scale magnetic field strength of Btot,c = 0.64 \u00b1 0.16 mG down to the disk scales using Equation 12. We find Btot,d = 28.3\u00b19.9 mG using the C04 method. Since B\u03d5 should be the dominant magnetic field component at the edge of the protostellar disk, we assume B\u03d5 \u223cBtot,d in our estimations. This is discussed later in Section 4.1.2. 2.4. Scaling Factors for the Infalling and Rotational Velocities Here, we discuss the scaling factors of \u03b4r and \u03b4\u03d5, which describe the deviations of the infalling and rotational velocities, respectively, from the Keplerian velocity (Equation A.9). As briefly mentioned in Appendix A, recent MHD simulations of protostellar disk formation including ambipolar diffusion find that u\u03d5 is very close to Keplerian at the disk edge (\u03b4\u03d5 \u22730.9), while ur is significantly less (\u03b4r \u22720.5) than the Keplerian velocity, possibly by even a factor of a few, less than one order of magnitude (Lee et al. 2021a). For the deviation of the rotational velocity from Keplerian, we will initially assume \u03b4\u03d5 = 1 as a conservative estimate, and since the modeling of the HOPS-370 protostellar disk already assumes the rotational velocity structure of the disk is Keplerian. For the deviation of the infall velocity from Keplerian, it is less straight-forward but we can still make some estimates. Recent observations of the young Class I protostar, L1489 IRS, revealed a so-called \u201cslow\u201d infall, where the velocity structure of the infalling envelope was modeled to be 2.5 times slower than freefall (Sai et al. 2022). If we use the quantities for HOPS-370 and this assumption of vinf = 0.4vff, we find \u03b4r \u223c0.6. 
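For reference, a short sketch (central values only) of the plane-of-sky to total field corrections (Equations 13-14), the core-to-disk scaling (Equation 12), and the $\delta_r \approx 0.4\sqrt{2}$ arithmetic above:

```python
import numpy as np

# Core-scale values (Kao & Yen et al., in prep.) and disk-edge density (Section 2.2.3)
Bpos_c = 0.50            # plane-of-sky field strength [mG]
rho_c  = 1.9e-18         # core volume density [g/cm^3]
rho_d  = 3.7e-15         # density at the disk edge [g/cm^3]

# Statistical plane-of-sky -> total field corrections
Btot_c_C04 = (4.0 / np.pi) * Bpos_c       # Crutcher et al. (2004), Eq. (13) -> ~0.64 mG
Btot_c_L21 = np.sqrt(3.0 / 2.0) * Bpos_c  # Liu et al. (2021),      Eq. (14) -> ~0.61 mG

# Scale to the disk edge with the ambipolar-diffusion B-n relation, kappa = 0.5 (Eq. 12)
kappa  = 0.5
Btot_d = Btot_c_C04 * (rho_d / rho_c) ** kappa    # ~28 mG, taken as B_phi

# Infall scaling factor if v_inf = 0.4 v_ff (slow infall, Sai et al. 2022):
# v_ff = sqrt(2) v_kep, so delta_r = 0.4 * sqrt(2) ~ 0.57
delta_r = 0.4 * np.sqrt(2.0)

print(f"Btot,c = {Btot_c_C04:.2f} mG, Btot,d = {Btot_d:.1f} mG, delta_r ~ {delta_r:.2f}")
```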
For a conservative measure, we initially assume \u03b4r = 0.8. Modeling \u03b4r in HOPS-370 would provide further constraints on our ambipolar diffusivity coefficient, however, this is currently beyond the scope of this paper and will be left for a future study. How these two values effect the ambipolar diffusivity coefficient estimation is further explored in Section 3.3.3. 3. RESULTS & ANALYSIS 3.1. The First Estimation of the Ambipolar Diffusivity Coefficient from Observations In the previous sections, we obtained all of the necessary values needed to estimate \u03b7AD for the first time. An overview of all the parameters obtained in the previous sections is shown in Table 1. We make an estimation using the B\u03d5 derived from the C04 method. We plug in the values of \u03b4r = 0.8, \u03b4\u03d5 = 1.0, Cs = 833.0+19.5 \u221219.5 m s\u22121, Rd = 94.4+12.6 \u221212.6 au, M\u22c6= 2.5+0.2 \u22120.2 M\u2299, Md = 0.035+0.005 \u22120.003 M\u2299 \u03c1d = 3.7+1.7 \u22121.7 \u00d7 10\u221215 g cm\u22123 B\u03d5 = 28.3+9.9 \u22129.8 mG, into the normalized ambipolar diffusivity coefficient equation (Equations 2 and 3) to obtain \u03b7AD = 1.7+1.5 \u22121.4 \u00d7 1019 cm2 s\u22121 = 2.4+2.1 \u22122.0 \u00d7 10\u22121 s. As this is the first ever estimation of the ambipolar diffusivity coefficient from observations, there are no other \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 7 observational values to compare with. In the context of comparing to the value of this coefficient using a chemical network, this will be explored in Section 3.3. 3.2. The Dimensionless Els\u00a8 asser Number for Ambipolar Diffusion The strength of non-ideal MHD effects are quantified through the dimensionless Els\u00a8 asser numbers, which for ambipolar diffusion is given by AM = v2 A \u03b7AD\u2126K , (15) where vA is the Alfv\u00b4 en speed and \u2126K is the Keplerian rotation frequency (e.g., Wurster 2021; Cui & Bai 2021). The Alfv\u00b4 en speed is defined as vA = s B2 4\u03c0\u03c1, (16) which describes the speed of an MHD wave permeating through a dense medium. Likewise, the Keplerian rotation frequency is defined as \u2126K = r GM\u22c6 r3 . (17) Typically, AM \u226b1 represents strong coupling between the magnetic field and the neutral gas, while AM \u22721 indicates strong magnetic diffusion (e.g., Wurster 2021; Commer\u00b8 con et al. 2022). We estimate the dimensionless Els\u00a8 asser number for ambipolar diffusion to be AM = 1.7 \u00b1 1.0. This shows we are likely in the regime of stronger magnetic diffusion and indicates the importance of ambipolar diffusion in the evolution of the HOPS-370 protostellar disk. 3.3. Comparing with the Non-Ideal MHD Coefficient and Ionisation Library (NICIL) The Non-Ideal MHD Coefficient and Ionisation Library (NICIL)2 is a code to calculate the diffusion coefficients for ambipolar diffusion (\u03b7AD), Ohmic dissipation (\u03b7OD) and Hall effect (\u03b7HE) for MHD simulations using a chemical network (Wurster 2016, 2021). We aim to investigate whether our ambipolar diffusivity coefficient is consistent with one calculated by NICIL. NICIL allows for estimating these coefficients for different input parameters, such as density, temperature and magnetic field strength. Additionally, parameters for the dust grain size distribution and cosmic-ray ionization rate can be modified. First, we describe the initial parameters used for several NICIL runs (Section 3.3.1). 
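Before turning to the NICIL setup, a minimal sketch of the Elsässer-number estimate (Equations 15-17) is given below. It uses central values only; the quoted $A_M = 1.7^{+1.0}_{-1.0}$ follows from the full asymmetric error propagation, so a naive central-value plug-in lands somewhat higher but in the same order-unity regime.

```python
import numpy as np

G, au, Msun = 6.674e-8, 1.496e13, 1.989e33   # cgs units

def alfven_speed(B, rho):
    """Alfven speed (Equation 16), with B in Gauss and rho in g cm^-3."""
    return B / np.sqrt(4.0 * np.pi * rho)

def keplerian_freq(Mstar, r):
    """Keplerian angular frequency (Equation 17)."""
    return np.sqrt(G * Mstar / r**3)

def elsasser_AD(B, rho, Mstar, r, eta_AD):
    """Dimensionless Elsasser number for ambipolar diffusion (Equation 15)."""
    return alfven_speed(B, rho) ** 2 / (eta_AD * keplerian_freq(Mstar, r))

# Central values quoted in the text
Am = elsasser_AD(B=28.3e-3, rho=3.7e-15, Mstar=2.5 * Msun, r=94.4 * au, eta_AD=1.7e19)
print(f"Am ~ {Am:.1f}")   # of order unity
```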
We attempt to emulate the conditions at the edge of the disk 2 https://bitbucket.org/jameswurster/nicil/src/master/ as closely as possible by using the derived disk parameters and several different assumptions for the magnetic field strength and the cosmic-ray ionization rate (Section 3.3.2). We then explore several of the assumptions made during our estimation of the ambipolar diffusivity coefficient to see how they affect the value and its consistency with NICIL (Section 3.3.3). There are two files that we modify for these different runs in NICIL: nicil.F90 and nicil ex eta.F90. We assume a barotropic equation of state for all runs, since this is the same assumption used in the numerical simulations by Hennebelle et al. (2016). 3.3.1. Parameter Setup We first describe several modifications made to the nicil ex eta.F90 test script. This file contains the input parameters for the temperature, density and magnetic field strength. We compute the barotropic equation of state over the default temperature range of 10 K to 2 \u00d7 105 K and density range of 10\u221222 g cm\u22123 to 100.5 g cm\u22123. For the magnetic field, we employ a constant (use input B = .true.) magnetic field strength using a value of 28.3 mG (C04 method), which we estimated at the edge of the disk. We also run using the upper and lower errors on the magnetic field as the constant values to estimate an approximate error range on the NICIL ambipolar diffusivity coefficient. Additionally, NICIL has the option to vary the magnetic field using the function B = 1.34\u00d710\u22127\u221ann G (use input B = .false.). However, this magnetic field strength comes from different underlying assumptions than what we use and severely underestimates the magnetic field strengths compared to what we find, so we do not compare with this case. The dust grain and cosmic-ray ionization properties are then adjusted in the nicil.F90 main script. We use the default gas-to-dust ratio of 100 and set the number of grain size bins to 32. Tobin et al. (2020b) derive powerlaw slope of the grain distribution to be p = \u22122.63 and the maximum grain size to be amax = 432 \u00b5m, while assuming the same minimum dust grain size of amin = 0.005 \u00b5m used in their fitting. Thus, we set these parameters in NICIL accordingly. The cosmic-ray ionization rate in HOPS-370 is unknown, however the typical ISM value is usually quoted to be \u03b6CR = 10\u221217 s\u22121 (e.g., Caselli et al. 1998; McElroy et al. 2013). We initially set a constant cosmic-ray ionization rate (zeta of rho = .false.) of \u03b6CR = \u03b60 = 10\u221217 s\u22121 in the script. However, we also vary the cosmic-ray ionization rates between 10\u221219 s\u22121 < \u03b60 < 10\u221215 s\u22121 as another approximate \u201cerror\u201d range. This should be a typical range in dense molecular clouds inferred from chemical analyses (Caselli et al. 1998). In addition, we also run us\f8 Thieme et al. Figure 1. Comparison between our derived value of \u03b7AD and the NICIL calculated values of \u03b7AD assuming a barotropic equation of state and a constant magnetic field strength (C04 method). Our derived \u03b7AD is marked by the black circle and also printed in the top right of each plot, along with the values used for the estimation just below. (Left Column) Uses a constant (unattenuated) cosmic-ray ionization rate for the NICIL calculation. (Right Column) Uses a varied (attenuated) cosmic-ray ionization rate for the NICIL calculation. 
The dashed black lines indicate the NICIL calculated \u03b7AD based on the magnetic field strength uncertainties. The shaded blue areas represent \u03b7AD calculated by NICIL for different ranges of \u03b60 between 10\u221216 \u221210\u221218 s\u22121 (darker shade) and 10\u221215 \u221210\u221219 s\u22121 (lighter shade). The mass volume density and H2 number density are related by \u03c1 = mH\u00b5H2nH2, where mH is the mass of the hydrogen atom and \u00b5H2 is the mean molecular weight per molecular hydrogen (\u00b5H2 = 2.8). ing a varied cosmic-ray ionization rate (zeta of rho = .true.), which mimics attenuated cosmic-rays via the relation of \u03b6CR = \u03b60e\u2212\u03a3/\u03a3CR + \u03b6min. In this case, we set \u03b60 = 10\u221217 s\u22121 and \u03b6min = 10\u221222 s\u22121 (default value) in the script. The the gas surface (column) density (\u03a3) is directly calculated from several parameters when running the code. The cosmic-ray attenuation depth (\u03a3CR) is a constant and kept at the default value of 96 g cm\u22122. We also vary the cosmic-ray ionization rates between 10\u221219 s\u22121 < \u03b60 < 10\u221215 s\u22121 in this case as well. We set the mean molecular weight to be \u00b5 = 2.37 for consistency. All other parameters in both scripts are kept as the default values. 3.3.2. Initial Comparison We run NICIL using the aforementioned parameters and compare to derived values of \u03b7AD in Figure 1. The columns correspond to the two different cosmic-ray ionization rate assumptions in NICIL. The derived value of \u03b7AD is shown in the top right corner of the left panel plot along with the magnetic field strength from the C04 method. We describe each case in more detail below. Our \u03b7AD result surprisingly consistent to the \u03b7AD calculated by NICIL. Constant B & \u03b6CR: The left panel of Figure 1 use a constant magnetic field strength (use input B = .true.) of B = 28.3+9.9 \u22129.8 mG from the C04 method (left panel). In addition, we use a constant (unattenuated) cosmic-ray ionization rate (zeta of rho = .false.). As mentioned in the previous section, we run for the derived magnetic field strength of 28.3 mG, and then perform subsequent runs using the upper/lower errors on the magnetic field strength (dashed black lines). We assumed the cosmic-ray ionization rate to be 10\u221217 s\u22121 for the previous three calculations, but also varied it between 10\u221218 s\u22121 \u2264\u03b60 \u226410\u221216 s\u22121 (darker-blue shaded area) and 10\u221219 s\u22121 \u2264\u03b60 \u226410\u221215 s\u22121 (lighter-blue shaded area) assuming the magnetic field strength of 28.3 mG. The derived \u03b7AD using the C04 magnetic field strength is surprisingly consistent with the results from NICIL. If the infall velocity is much smaller than Keplerian rotation, then both values would become more consistent (Section 3.3.3). The cosmic-ray ionization rates have more of an affect on the predicted ambipolar diffusivity coefficient from NICIL compared to the error on our magnetic field strength. Higher cosmic-ray ionization rates correspond to smaller \u03b7AD values and viceversa. The choice of the chemical network would also impact the estimation by NICIL, and thus our overall comparison. This is discussed more in Section 4.1.3. Constant B & Varied \u03b6CR: The right panel of Figure 1 uses the same magnetic field parameters as the left, however, the cosmic-ray ionization rate is attenuated (zeta of rho = .true.). 
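For reference, the attenuation prescription used in these runs can be sketched as follows (the constants are the NICIL defaults quoted above; a sketch, not the NICIL source):

```python
import numpy as np

def zeta_CR(Sigma, zeta0=1e-17, Sigma_CR=96.0, zeta_min=1e-22):
    """Attenuated cosmic-ray ionization rate [s^-1] versus gas column density
    Sigma [g cm^-2], using the relation zeta0*exp(-Sigma/Sigma_CR) + zeta_min."""
    return zeta0 * np.exp(-Sigma / Sigma_CR) + zeta_min

# At the HOPS-370 disk-edge column density (Sigma_d ~ 2.2 g cm^-2) the attenuation
# is negligible, consistent with it only mattering at much higher densities.
print(zeta_CR(2.2))   # ~9.8e-18 s^-1, barely below the unattenuated 1e-17 s^-1
```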
We do this in order to understand what effect this has on our derived ambipolar diffusivity coefficient. As we can see, the attenuated cosmic-rays only affect the very high densities \u227310\u221212 g cm\u22123, where the ambipolar diffusivity coefficient begins to increase. It does not affect the density regime in which our ambipolar diffusivity coefficient is \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 9 Figure 2. Comparison between the effect of different values of \u03b4r (top row), \u03b4\u03d5 (middle row) and Rd/RAD (bottom row) on our derived \u03b7AD and the NICIL calculated values of \u03b7AD assuming a barotropic equation of state, constant magnetic field strength from the C04 method and constant (unattenuated) cosmic-ray ionization rate. As in Figure 1, our derived \u03b7AD is marked by the black circle and also printed in the top right of each plot, while the black lines and shaded blue areas also have the same meanings as in Figure 1. estimated. Thus, our derived \u03b7AD is still consistent with these results from NICIL. 3.3.3. Varied Parameters There are several parameter assumptions made in our estimation using the ambipolar diffusivity coefficient equation that could vary the resulting \u03b7AD. First, our initial calculation assumes \u03b4r = 0.8 and \u03b4\u03d5 = 1.0. Additionally, the results from Hennebelle et al. (2016) and Hennebelle et al. (2020) indicate that the ratio of the disk radius from their simulations to their predicted disk radius from their analytical equation, Rd/RAD, could vary between 0.5 to 2, particularly for lower mass cores. Therefore, we vary at each parameter individually, while the others are kept at their initially assumed values, to see the magnitude in difference for each. We compare to our NICIL run using constant B from the C04 method and constant \u03b6CR (Figure 2, left panel). For \u03b4r, we compare values of 0.8, 0.5 and 0.1 (Figure 2, top row). As mentioned previously, \u03b4r could be a factor of a few lower than the Keplerian velocity in simulations, and was found to be \u223c0.6 in previous observations of the Class I protostar, L1489 IRS. As this value could vary quite considerably depending on the environment, it will have more of an impact on our estimated ambipolar diffusivity coefficient than \u03b4\u03d5, even though \u03b7AD scales as \u223c\u03b42 \u03d5. In the case where \u03b4r = 0.1, it is much more consistent with NICIL as the estimated ambipolar diffusivity coefficient is about an order of magnitude lower, potentially hinting at the possibility of slow infall in HOPS-370. For \u03b4\u03d5, we compare values of 1.0, 0.9 and 0.8 (Figure 2, middle row). Since \u03b4\u03d5 should be \u22730.9, it will not have too much affect on our derived ambipolar diffusivity coefficient, which are all within error in these cases. We only demonstrate \u03b4\u03d5 = 0.8 as a more extreme scenario, but still the effect is less than \u03b4r due to these constraints. It would still be interesting to try to estimate if there is any deviation from Keplerian rotation in the rotational velocity structure at the edge of the disk, as it still lowers the value, if only even a little. For Rd/RAD, we compare values of 0.5, 1.0 and 2.0 (Figure 2, bottom row). As previously mentioned, the ratio of the actual disk radius to the predicted disk radius due to ambipolar diffusion from Equation 1 could vary between 0.5 to 2, particularly for lower mass cores \f10 Thieme et al. (Hennebelle et al. 2016, 2020). 
Although HOPS-370 is considered to be a more intermediate mass Class 0/I, we would still like to investigate how the value varies between these two extreme cases. The Rd/RAD factor also doesn\u2019t really affect the calculation of \u03b7AD too much, which is similar to \u03b4\u03d5. Rd/RAD is slightly more consistent with NICIL when Rd/RAD > 1, while the right panel of Figure 2 in Hennebelle et al. (2016) shows consistently lower Rd/RAD < 1 for protostars M\u22c6+Md < 5 M\u2299. This should be investigated in numerical simulations for a mass range around the HOPS-370 protostar+disk mass as the spread can become quite noticeable when zooming in on very low-mass simulations (M\u22c6+ Md < 0.5 M\u2299) in the left panel of Figure 2 in Hennebelle et al. (2016). 4. DISCUSSION 4.1. Validity of Assumptions Several assumptions are made in the derivation, estimation and comparison to theoretical values of the ambipolar diffusivity coefficient. In this section, we explore these assumptions in detail and discuss how they could effect our results. 4.1.1. Derivation of the Ambipolar Diffusivity Coefficient Equation In Section 2, we have listed several assumptions in the derivation of Equation 1. The first is that the main diffusion process is ambipolar diffusion. There are many factors shown to alleviate the effects of magnetic braking to form large, protostellar disks in MHD simulations. These mainly include non-ideal MHD (e.g., Li et al. 2011; Dapp et al. 2012; Tsukamoto et al. 2015; Wurster et al. 2016; Tsukamoto et al. 2017; Wurster et al. 2019; Zhao et al. 2020a; Wurster et al. 2021), misalignment between the magnetic field and rotation axis (e.g., Hennebelle & Ciardi 2009; Li et al. 2013; Tsukamoto et al. 2018; Hirano et al. 2020) and turbulence (e.g., Seifried et al. 2013; Li et al. 2014; Seifried et al. 2015). From an observational standpoint, there are several key results to consider. Magnetic field orientations in low-mass protostars indicate that the field orientation is preferentially randomly aligned with the rotation axis (e.g., Hull et al. 2013; Yen et al. 2021b) However, recent results show no apparent correlation between the misalignment angle of the magnetic field and apparent disk size measured from the dust continuum (Yen et al. 2021a). Yen et al. (2021a) also conclude that the turbulence measured from the non-thermal linewidth at core-scale does not correlate with the apparent disk size either. Observations of angular momentum profiles in protostellar envelopes do imply that there is some turbulence present (Pineda et al. 2019; Gaudel et al. 2020; Sai et al. 2023), although the level of turbulence is not directly quantified. Thus, non-ideal MHD likely plays an important role in protostellar disk formation. As far as which non-ideal MHD effect (ambipolar diffusion, Ohmic dissipation or the Hall effect) is most important overall, simulations show ambipolar diffusion is an efficient process in parts of the disk and envelope that can regulate the properties of the disk (e.g., Tsukamoto et al. 2023). Ohmic dissipation is only efficient at high densities and likely does not play much of a role in the envelope itself (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). The Hall effect has been shown to effectively disappear shortly after the formation of the protostellar disk (Zhao et al. 2020b; Lee et al. 2021b). Therefore, ambipolar diffusion may be the most important non-ideal MHD effect, especially when the protostar + disk system becomes more evolved. 
This is where comparing directly with non-ideal MHD simulations would help us to understand the non-ideal MHD effects more deeply. Since NICIL does also calculate the non-ideal MHD coefficients for Ohmic dissipation (\u03b7OD) and the Hall effect (\u03b7HE), it is interesting to compare them to the ambipolar diffusivity coefficient. We show a comparison between each of the diffusivity coefficients from our NICIL runs in Figure 3. We see that our derived \u03b7AD is clearly in an ambipolar diffusion dominated density regime, under our assumptions made for our NICIL runs. This is in favor of the first assumption stated to derive Equation 1, that ambipolar diffusion is the main diffusion process. The coefficient for Ohmic dissipation starts to become prominent towards the highest density regimes, which is consistent with previous findings (e.g., Marchand et al. 2016; Wurster et al. 2018a; Wurster 2021). The Hall effect seems to be dominant at intermediate density regimes, though it still shows some contribution in the density regime where our ambipolar diffusion value is calculated. We also note that in our NICIL run in the bottom left panel, the Hall coefficient becomes negative at very low densities approaching 10\u221218 g cm\u22123 in the case of a low cosmic-ray ionization rate of \u03b60 = 10\u221219 s\u22121. The cosmic-ray ionization rate will impact the efficiency of non-ideal MHD effect (e.g., Wurster et al. 2018b; Kuffmeier et al. 2020), and thus should be studied in the environment of HOPS-370 to further constrain the comparisons with NICIL. The cosmic-ray ionization rate in the inner envelope of Class 0 protostar, B335, was previously found to be enhanced (\u03b6CR \u223c10\u221214 s\u22121), which could explain the extremely small (< 10 au) inferred protostellar disk (Cabedo et al. 2023). Addition\fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 11 Figure 3. Relative comparison of the diffusivity coefficients for ambipolar diffusion (\u03b7AD), Ohmic dissipation (\u03b7OD) and the Hall effect (\u03b7HE) calculated by NICIL assuming a barotropic equation of state, constant magnetic field strength and cosmic-ray ionization rate. The symbols and labels have the same meaning as in Figure 1, except we only show the lighter shaded blue area for \u03b60 between 10\u221215 \u221210\u221219 s\u22121 for each coefficient. Our derived value shows that HOPS-370 lies in an ambipolar diffusion dominated region. ally, a new large scale study probing the NGC 1333 region of the Perseus also finds an enhanced cosmic-ray ionization rate (\u03b6CR \u227310\u221216.5 s\u22121) across the molecular cloud, which is consistent with the small (< 50 au) disks in that region (Pineda et al. 2024). Future studies also probing the ion-neutral drift in HOPS-370 could help to further understand the role of ambipolar diffusion in this source. Yen et al. (2018) tried to constrain the ion-neutral velocity drift in the young Class 0 protostar, B335, however, only an upper limit was obtained. This could be due to B335 being too young, as some simulations have shown this velocity drift could be more observable in more evolved Class 0/I protostars (e.g., Tsukamoto et al. 2020). Since HOPS370 is more evolved, it could be an ideal target for this kind of study in the future. Next, the angular momentum is counteracted by magnetic braking resulting in the advection and braking timescales to be of the same order. 
The equations for the advection and braking timescales are given by Equations A.1 and Equations A.2, respectively. We make an estimation of the advection timescale under the same assumption used for our ambipolar diffusivity estimate, where ur = 0.8vkep, giving us a value of \u03c4adv \u223c3.6 \u00d7 109 s, which is also a lower limit. For the braking timescale, since we do not directly know the poloidal component (Bz) of the magnetic field strength, we make an approximation of BzB\u03d5 \u2248B2 tot and use our value derived from the C04 method. This results in a lower limit of the braking timescale of \u03c4br \u223c5.4 \u00d7 108 s. These values are within one order of magnitude difference, and show that this assumption can hold in HOPS370. We note that we use this exact assumption to estimate Bz in Section 4.1.2, therefore, using that value here presents a circular argument which is why we simply estimate the lower limits for \u03c4adv and \u03c4br. Further modeling of the infall velocity structure and magnetic field components (Br, Bz, B\u03d5) would be necessary to confirm. Then, the toroidal field generated by differential rotation is offset by the ambipolar diffusion in the vertical direction resulting in the Faraday induction and vertical diffusion timescales to be of the same order. The equations for the Faraday induction and vertical diffusion timescales are given by Equations A.3 and Equations A.4, respectively. We can obtain a lower limit approximation of the Faraday induction timescales by assuming Bz \u223cB\u03d5 \u223cBtot. This gives a value of \u03c4far \u223c4.9 \u00d7 108 s. The vertical ambipolar diffusion timescales would need to use our derived value, thus we check for self-consistency. We find \u03c4diff \u223c3.4 \u00d7 109 s. Since varying some of the parameters in Section 3.3.3 lower the value of \u03b7AD, \u03c4diff could also be considered as a lower limit. Both values are within one order of magnitude difference, showing that this assumption can hold in HOPS-370. Again, we do not use the Bz in Section 4.1.2 to avoid any circular arguments. Additionally, the infalling and rotational velocities of the gas near the disk edge both scale with the Keplerian velocity. The disk radius derived for HOPS-370 from the radiative transfer modeling indicates that the rotational velocity (u\u03d5) should be Keplerian in nature. As for the infalling velocity, further modeling needs to be done to see how much ur deviates from Keplerian at the disk \f12 Thieme et al. edge. For now, the assumption of the rotational velocity holds, while the assumption for the infalling velocity should to be further modeled. Lastly, the gas near the disk edge has Keplerian velocity and is in vertical hydrostatic equilibrium. Again, the gas disk in HOPS-370 is clearly resolved by the observations by Tobin et al. (2020b) and the best-fit disk parameters were found by fitting with radiative transfer models assuming Keplerian rotation and hydrostatic equilibrium. Many previous studies have also clearly resolved Keplerian rotating disks in young Class 0 and Class I protostars (e.g., Tobin et al. 2012; Murillo et al. 2013; Yen et al. 2014, 2017; Ohashi et al. 2023). Even if the assumptions in the fitting are wrong, they are the same assumptions we use and we are still using a \u201cbestfit\u201d value, which indicates this model does provide a good fit to the data. Therefore, the values properties derived for the HOPS-370 protostellar disk clearly should satisfy both assumptions. 4.1.2. 
Quantities and Relations used for the Ambipolar Diffusivity Coefficient Estimation Arguably the most important assumption in our estimation of \u03b7AD is how the envelope scales from core-scale down to the edge of the protostellar disk. As previously stated, early theoretical works predict \u03ba in Equation 4 to be \u223c2/3 for clouds undergoing spherical collapse with flux-freezing (Mestel 1966), while \u03ba \u223c0.5 for a collapsing cloud with ambipolar diffusion (Mouschovias & Ciolek 1999). This was the basis of our initial assumptions, however, it is not so straight forward. The recent review by Pattle et al. (2023) shows \u03ba derived from observations of molecular clouds can vary quite a bit, possibly due to different environmental factors. These observations probe the large scale molecular clouds, filaments and cores whose magnetic field imprint could be inherently different than the magnetic fields near a protostellar disk. Additionally, a magnetic field density relation recently derived by Lee et al. (2024) for inside a collapsing, protostellar envelope is explored in Appendix D. The magnetic field strength derived from this relation is compatible with our estimates, however, needs to be further investigated due to discrepancies in the model presumptions. Observationally, Yen et al. (2023) recently derived a magnetic field density relation from the core to inner envelope scale in the young, Class 0 protostar HH 211. They find \u03ba \u223c0.36, which fits into the assumption that ambipolar diffusion is playing a role to partially decoupled the magnetic field from the neutral matter. Their inner envelope magnetic field strength was derived using a force-balance equation (Koch et al. 2012), rather than the DCF method. The core-scale magnetic field strength estimated by Kao & Yen et al. (in prep.) was derived using the DCF method, which has several uncertainties associated with it due to the assumptions of equipartition, isotropic turbulence, projected polarization angle on the plane-of-sky, and more (e.g., Liu et al. 2021, 2022a; Chen et al. 2022; Liu et al. 2022b; Myers et al. 2023). These uncertainties may cause the DCF estimate to overestimate the magnetic field strength, which would impact our ambipolar diffusivity coefficient estimation. In the best case scenario, future observations to derive the inner envelope strength near the disk in HOPS-370 could alleviate the need to even use a magnetic field density relation. Otherwise, if the magnetic field strength cannot be derived close enough to the disk edge, it can still be estimated in the envelope to derive a magnetic field density relation, where the magnetic field strength could further be scaled down to the edge of the disk. We have also assumed that Btot \u2248B\u03d5, and that B\u03d5 is the dominant component of the magnetic field at the edge of the protostellar disk. Our value for Btot at the core-scale is a statistical average based on a large sample of observations, which may or may not necessarily be applied to only a single source. This is, however, the only current way we obtain a total magnetic field strength from the plane-of-sky magnetic field component and should be investigated further. To see whether B\u03d5 is really dominant in our case, we estimate Bz = 4.2\u00b12.7 mG using Equation A.5. This shows that B\u03d5 is the dominant component in our case, and thus is a reasonable assumption in our ambipolar diffusivity coefficient estimation. 4.1.3. 
Comparison with NICIL and Input Values While the cosmic-ray ionization rate and dust grain properties needed for NICIL are not inherently part of our derived ambipolar diffusion equation, they still play a role in the efficiency of non-ideal MHD diffusivities (e.g., Zhao et al. 2016; Dzyurkevich et al. 2017; Wurster et al. 2018b; Zhao et al. 2018; Kuffmeier et al. 2020; Guillet et al. 2020; Zhao et al. 2021; Kobayashi et al. 2023). Several studies have shown that disk formation is suppressed in the presence cosmic-ray ionization rates higher than the canonical value of 10\u221217 s\u22121 in dense cores (e.g., Zhao et al. 2016; Wurster et al. 2018b; Kuffmeier et al. 2020). Large numbers of small dust grains can also influence the ionization degree, and thus the non-ideal MHD diffusivities (e.g., Zhao et al. 2016; Dzyurkevich et al. 2017; Zhao et al. 2018; Koga et al. 2019; Marchand et al. 2020). Tobin et al. (2020b) do constrain the maximum grain size, while the minimum grain size is set as a fixed parameter in their model fit\fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 13 ting. We did explore how much the minimum grain size affects the calculated \u03b7AD from NICIL by re-running our constant B (C04 method) and constant \u03b6CR NICIL runs, for minimum grain sizes of 0.01 \u00b5m, 0.1 \u00b5m and 1.0 \u00b5m. However, the difference was indistinguishable, and thus, the resulting \u03b7AD from NICIL may rely more heavily on the choice of chemical network. We also checked if the number of grain size bins used affected the results, but still the results did not change. It is important to note that the derived dust grain properties are that of the disk, and not the envelope. Also, there are currently no studies exploring the cosmic-ray ionization rate in the disk or envelope of HOPS-370. Determining the cosmic-ray ionization rate and dust grain properties in the HOPS-370 protostellar envelope would allow for a better comparison to NICIL. Our comparison with NICIL simply represents the closest theoretical scenario we can achieve by using the values derived from observations. Therefore, further constraints on the properties of the disk and envelope environment, as well as, comparisons with actual non-ideal MHD simulations should be carried out. 5. CONCLUSION We present the first estimation of the ambipolar diffusivity coefficient using an analytical equation describing the protostar and disk properties due to ambipolar diffusion. We show an illustrative schematic of the HOPS370 protostellar system to bring together and summarize our results in the context of the multi-scale observations needed for this study (Figure 4). The main results of this paper are as follows: 1. We derive a generalized analytical expression for the ambipolar diffusivity coefficient in terms of observable quantities in protostellar environments. We show that this relation should be valid, regardless of the global magnetic field orientation with respect to the disk rotation axis. 2. We make the first estimation of the ambipolar diffusivity coefficient to be \u03b7AD = 1.7+1.5 \u22121.4 \u00d7 1019 cm2 s\u22121 at the edge of the HOPS-370 protostellar disk, under the assumption that the magnetic field scales with density (Crutcher et al. 2004). 
We use the Alfv\u00b4 en speed and Keplerian rotation frequency to estimate the dimensionless Els\u00a8 asser number for ambipolar diffusion to be AM = 1.7+1.0 \u22121.0, indicating that ambipolar diffusion is more dynamically important in the region at the edge of the protostellar disk. Estimates of the ambipolar diffusivity coefficient using the inner envelope density, rather than the disk-edge density yields indistinguishable results. 3. We use the Non-Ideal MHD Coefficient and Ionisation Library (NICIL) to calculate the non-ideal MHD coefficients using the the physical conditions observed in HOPS-370. We show that the ambipolar diffusivity coefficient from NICIL using various magnetic field strength and cosmic-ray ionization properties is consistent with our derived value. We vary the less certain parameters of \u03b4r, \u03b4\u03d5 and Rd/RAD in the ambipolar diffusivity coefficient equation to find the derived value becomes more consistent for decreasing \u03b4r and \u03b4\u03d5 and increasing Rd/RAD. 4. We plot the Ohmic dissipation and Hall effect coefficients along side the ambipolar diffusivity coefficient calculated by NICIL. We find that our derived value shows HOPS-370 lies in an ambipolar diffusion dominated region. This supports the main assumption in the derivation of Equation 1 that ambipolar diffusion is the main diffusion process. When assessing the other assumption made for our derivation of the ambipolar diffusivity equation, we show that they should be valid for HOPS-370. 5. We have demonstrated a new methodology for understanding the role of ambipolar diffusion during protostellar disk evolution. Future studies including more sources and more detailed modeling will help to fully understand the role of non-ideal MHD effects in observations of the earliest stages of protostellar disk formation and evolution. ACKNOWLEDGMENTS We thank the anonymous referee for their helpful comments and suggestions on this manuscript. This work used high-performance computing facilities operated by the Center for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University. This equipment was funded by the Ministry of Education of Taiwan, the National Science and Technology Council of Taiwan, and National Tsing Hua University. S.-P.L. and T.J.T. acknowledge grants from the National Science and Technology Council (NSTC) of Taiwan 106-2119-M007-021-MY3, 109-2112-M-007-010-MY3 and 112-2112M-007-011. Y.-N.L. acknowledges support from the National Science and Technology Council, Taiwan (NSTC 112-2636-M-003-001) and the grant for Yushan Young Scholar from the Ministry of Education. S.-J.L. acknowledges the grants from the National Science and Technology Council (NSTC) of Taiwan 111-2124-M001-005 and 112-2124-M-001-014. H.-W.Y. acknowledges support from the NSTC grant 110-2628-M-001\f14 Thieme et al. Figure 4. Schematic of the HOPS-370 protostellar system. (Left) 0.85 mm continuum emission of the Orion A molecular cloud taken by the JCMT (Kao & Yen et al., in prep.). The contour levels shown are 3, 5, 10, 15, 30, 50, 100, 300 and 500\u03c3, where \u03c31.3mm = 15.2 mJy beam\u22121. The location of HOPS-370 is shown with a yellow star, with the protostellar class and distance listed. (Right Top) 0.87 mm continuum emission of the protostellar disk around HOPS-370 with self-contour levels of 3, 5, 10, 15, 30, 50 and 100\u03c3, where \u03c30.87mm = 0.39 mJy beam\u22121 (Tobin et al. 2020b). 
CH3OH and SO integrated-intensity contours are shown in green and orange, respectively, with contour levels of 3, 5, 10, 15, 30\u03c3, where \u03c3CH3OH = 26.2 mJy beam\u22121 km s\u22121 and \u03c3SO = 32.4 mJy beam\u22121 km s\u22121. These two molecular lines were shown to trace the largest disk radius when modeled together. The position of the continuum peak is marked with a yellow star. (Right Bottom) The modeling and results of our ambipolar diffusivity coefficient estimation. The best-fit disk density profile (for z = 0; i.e. the midplane) is shown in log scale, along with the best-fit quantities used in Equation 2 and our estimated ambipolar diffusivity coefficient at the edge of the disk. \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 15 003-MY3 and from the Academia Sinica Career Development Award (AS-CDA-111-M03). Software: Astropy (Astropy Collaboration et al. 2013, 2018, http://astropy.org), asymmetric uncertainty (Gobat 2022), Matplotlib (Hunter 2007, http: //matplotlib.org/), proplot (Davis 2021), Numpy (van der Walt et al. 2011, http://numpy.org/) APPENDIX A. DERIVATION OF THE AMBIPOLAR DIFFUSIVITY COEFFICIENT RELATION The equation describing the disk radius due to ambipolar diffusion first presented by Hennebelle et al. (2016), and later by Lee et al. (2021b, 2024), make a number of simplifications. Here, we derive a new relationship between the physical properties at the diskenvelope interface to the ambipolar diffusivity coefficient, in order to better compare to more generalized models that are used to fit observations, as in our case for HOPS-370. We follow the prescriptions given by Lee et al. (2021b, 2024), where more detailed explanations can be found. We assume ambipolar diffusion is the main diffusion process, as discussed in Section 4.1.1. First of all, the accretion of angular momentum onto the protostellar disk is counteracted by magnetic braking to rapidly suppress the growth of the disk. This results in an equilibrium condition at the disk-envelope interface between the advection and magnetic braking timescales (\u03c4adv \u2243\u03c4br) given by \u03c4adv \u2243R ur , (A.1) \u03c4br \u2243\u03c1u\u03d5h BzB\u03d5 , (A.2) where ur and u\u03d5 are the infalling and rotational velocities, Bz and B\u03d5 are the poloidal (vertical) and toroidal (azimuthal) magnetic field components, R is the disk radius, \u03c1 is the density at the disk-envelope interface and h is the scale height of the disk at the edge. Next, B\u03d5 is generated by the induction of Bz through the differential rotation of the protostellar disk is vertically diffused by ambipolar diffusion. This results in another equilibrium condition between the generation of B\u03d5, which happens on the timescale of Faraday induction, and the vertical ambipolar diffusion timescales (\u03c4far \u2243\u03c4diff) given by \u03c4far \u2243B\u03d5h Bzu\u03d5 , (A.3) \u03c4diff \u2243h2 \u03b7AD , (A.4) where \u03b7AD is the ambipolar diffusivity coefficient. Since B\u03d5 should be the dominant component at the protostellar disk edge, we solve our first equilibrium condition for Bz in order to substitute it into our second equilibrium equation, giving Bz \u2243\u03c1u\u03d5urh RB\u03d5 . (A.5) Solving the second equilibrium equation in terms of our ambipolar diffusivity coefficient and substituting in our new relation for Bz gives \u03b7AD \u2243Bzu\u03d5h2 B\u03d5h \u2243 \u03c1u2 \u03d5urh2 RB2 \u03d5 . 
(A.6) We assume the infall velocity ($u_r$) and rotational velocity ($u_\phi$) both scale with the Keplerian velocity ($v_\mathrm{kep}$) as $u_r = \delta_r v_\mathrm{kep}$ (A.7) and $u_\phi = \delta_\phi v_\mathrm{kep}$ (A.8), where $\delta_r$ and $\delta_\phi$ are the scaling factors and $v_\mathrm{kep}$, defined at the disk edge, is $v_\mathrm{kep} = (GM/R)^{1/2}$ (A.9), where $G$ is the gravitational constant and $M = M_\star + M_d$ is the mass of the star+disk system. From recent MHD simulations of protostellar disk formation including ambipolar diffusion, $u_\phi$ is found to be very close to Keplerian at the disk edge ($\delta_\phi \gtrsim 0.9$), while $u_r$ can be significantly smaller ($\delta_r \lesssim 0.5$) than the Keplerian velocity, possibly even by a factor of a few (Lee et al. 2021a, Section 2.4). Substituting $u_r$ and $u_\phi$ into our ambipolar diffusivity coefficient relation gives $\eta_\mathrm{AD} \simeq \delta_r \delta_\phi^2 G^{3/2} M^{3/2} \rho h^2 / (R^{5/2} B_\phi^2)$ (A.10). Assuming vertical hydrostatic equilibrium, the scale height is related to the isothermal sound speed ($C_s$) as $h = C_s (R^3/GM)^{1/2}$ (A.11). Now, we can replace $h$ in our ambipolar diffusivity coefficient equation to get a final relation of $\eta_\mathrm{AD} \simeq \delta_r \delta_\phi^2 G^{1/2} C_s^2 R^{1/2} M^{1/2} \rho / B_\phi^2$ (A.12). This expression should be valid regardless of the global magnetic field orientation (see Appendix B for further discussion). We have left in the density and sound speed terms, which deviates from the further simplifications made by Hennebelle et al. (2016) and Lee et al. (2021b, 2024), since these quantities can be modeled for protostellar disks from molecular line observations. B. THE EFFECTS OF MAGNETIC FIELD INCLINATION ON THE AMBIPOLAR DIFFUSIVITY COEFFICIENT ESTIMATION For an inclined magnetic field, the equilibrium condition that needs to be satisfied is $B_z \sqrt{R} / (\eta_\mathrm{AD} \rho u_r)^{1/2} + B_r h / (\eta_\mathrm{AD} \rho u_r R)^{1/2} = 1$ (B.1), where $B_r$ is the radial component of the magnetic field strength and the other symbols have the same meaning as in Appendix A (Lee et al. 2024). The relationships between the $B_r$ and $B_z$ components are given by $B_r = B_0 (2/\pi) \sin i$ (B.2) and $B_z = B_0 \cos i$ (B.3), where $i$ is the magnetic field inclination with respect to the disk rotation axis ($i = 0^\circ$ means the magnetic field direction is aligned/parallel with the disk rotation axis, i.e. the vertical case) and $B_0$ characterizes the amount of magnetic flux that threads the disk region, while the local field strength can be significantly enhanced by magnetic induction due to vertical differential rotation (Lee et al. 2024). Using the relations for $B_z$ and $B_r$, along with Equations A.7, A.8 and A.11, we can re-write the previous equation as $r^{3/4} B_0 \cos i \,/\, (\eta_\mathrm{AD}^{1/2} \rho^{1/2} \delta_r^{1/2} G^{1/4} M^{1/4}) + r^{5/4} B_0 (2/\pi) \sin i \,/\, (\eta_\mathrm{AD}^{1/2} \rho^{1/2} \delta_r^{1/2} G^{3/4} M^{3/4} C_s^{-1}) = 1$ (B.4), where $M = M_\star + M_d$ is the total mass of the star+disk system. Re-writing in terms of $B_0$, we find $B_0 = \eta_\mathrm{AD}^{1/2} \rho^{1/2} \delta_r^{1/2} \left[ r^{3/4} \cos i / (G^{1/4} M^{1/4}) + r^{5/4} (2/\pi) \sin i / (G^{3/4} M^{3/4} C_s^{-1}) \right]^{-1}$ (B.5). When the magnetic field is inclined, the magnetic field strength derived in Section 2.3.1 should not be assumed to be one of the magnetic field components, but rather regarded as the total magnetic field strength. We can thus derive an ambipolar diffusivity equation in terms of the total magnetic field strength.
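Before moving to the total-field formulation below, it is worth noting that the headline estimate follows directly from Eq. (A.12). The short sketch below is not the authors' code; it is a minimal check that plugs the inner-envelope values quoted in Appendix C ($\delta_r = 0.8$, $\delta_\phi = 1.0$, $C_s = 833$ m s$^{-1}$, $R_d = 94.4$ au, $M_\star + M_d \approx 2.54\,M_\odot$, $\rho = 3.8 \times 10^{-16}$ g cm$^{-3}$, $B_\phi = 9.2$ mG) into Eq. (A.12) in cgs units and recovers $\eta_\mathrm{AD} \approx 1.7 \times 10^{19}$ cm$^2$ s$^{-1}$.

```python
# Minimal sketch (not the authors' code): evaluating Eq. (A.12) in cgs units with
# the inner-envelope values quoted in Appendix C. Constants are standard cgs values.
import numpy as np

G     = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33           # solar mass [g]
au    = 1.496e13           # astronomical unit [cm]

# Observationally constrained inputs (Appendix C / Tobin et al. 2020b):
delta_r, delta_phi = 0.8, 1.0      # infall / rotation scaling factors
C_s   = 833.0e2                    # isothermal sound speed, 833 m/s -> cm/s
R     = 94.4 * au                  # disk radius [cm]
M     = (2.5 + 0.035) * M_sun      # star + disk mass [g]
rho   = 3.8e-16                    # density at the disk-envelope interface [g cm^-3]
B_phi = 9.2e-3                     # toroidal field strength, 9.2 mG -> Gauss

# Eq. (A.12): eta_AD ~ delta_r * delta_phi^2 * sqrt(G) * C_s^2 * sqrt(R) * sqrt(M) * rho / B_phi^2
eta_AD = delta_r * delta_phi**2 * np.sqrt(G) * C_s**2 * np.sqrt(R) * np.sqrt(M) * rho / B_phi**2
print(f"eta_AD ~ {eta_AD:.2e} cm^2 s^-1")   # ~1.7e19 cm^2 s^-1, matching the quoted estimate
```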
The total magnetic field strength (Btot) is the sum of squares of all the components written as B2 tot = B2 r + B2 z + B2 \u03d5, (B.6) where Br and Bz can be substituted again using Equations B.2 and B.3 to give B2 tot = B2 0 h (2/\u03c0 sin i)2 + (cos i)2i + B2 \u03d5. (B.7) Now, we substitute B\u03d5 using our derived relationship in Equation A.12 to get B2 tot = B2 0 h (2/\u03c0 sin i)2 + (cos i)2i + \u03b4r\u03b42 \u03d5G1/2C2 sR1/2M 1/2\u03c1\u03b7\u22121 AD. (B.8) We now have two equations (B.5 and B.8) with two unknowns (B0 and \u03b7AD). Thus, we substitute Equation B.5 into Equation B.8 to remove B0 and obtain a secondorder polynomial ambipolar diffusivity equation for an inclined magnetic field of \u03b72 AD\u03c1\u03b4r \u0014 r3/4 cos i G1/4M 1/4 + r5/4 (2/\u03c0) sin i G3/4M 3/4C\u22121 s \u0015\u22122 \u00d7 h (2/\u03c0 sin i)2 + (cos i)2i \u2212\u03b7ADB2 tot + \u03b4r\u03b42 \u03d5G1/2C2 sR1/2M 1/2\u03c1 = 0, (B.9) which can be solved to find the ambipolar diffusivity coefficient. Using the same values as in Section 3.1 and a magnetic field inclination with respect to the disk rotation axis of 45 \u00b1 22\u25e6(Yen et al. 2021a), we find \u03b7AD = 1.798+0.042 \u22120.007 \u00d7 1019 cm2 s\u22121, where the reported errors are only due to the error on the magnetic field inclination angle. For one, this value is extremely close to and within error of the value previously derived in Section 3.1. Additionally, the errors due to only the magnetic field inclination are a few orders of magnitude smaller than in the previously derived value. This show that the magnetic field inclination has essentially no effect on our derived ambipolar diffusivity coefficient. For completeness, we check the corresponding values of B0, Bz, Br and B\u03d5 to see if which component of the magnetic field is dominant. We use Equations B.2, B.3, B.5 and B.7 to estimate values of B0 \u22485.5 mG, Bz \u22483.9 mG, Br \u22482.5 mG and B\u03d5 \u224828.0 mG. This shows that the B\u03d5 component of the magnetic field still dominates even when considering the orientation. Thus, our derived relation is considered to be generalized and it is correct to assume B\u03d5 \u2248Btot in our initial assumptions (Section 2.3.1). \fFirst Estimate of the Ambipolar Diffusivity Coefficient in HOPS-370 17 Figure C.1. Comparison between the best-fit disk and envelope volume density profiles for HOPS-370 (Tobin et al. 2020b). The vertical dashed line represents the radius of the Keplerian gas disk. C. AMBIPOLAR DIFFUSIVITY COEFFICIENT ESTIMATION USING INNER ENVELOPE DENSITY As described in Section 2.2.3, we could equally derive a density at the disk-envelope interface from the best fit envelope volume density relation in Tobin et al. (2020b). We stress that much of the envelope emission may be resolved-out, which could effect the fitting results. However, it is interesting to still investigate how the ambipolar diffusivity coefficient estimation is effected using the current best-fit results. To model the envelope emission using the molecular line data, Tobin et al. 
(2020b) uses the following relation for the envelope \u03c1env(r) = \u02d9 Menv 4\u03c0 \u0000GM\u22c6r3\u0001\u22121/2 \u00d7 \u0012 1 + \u00b5 \u00b50 \u0013\u22121/2 \u0012 \u00b5 \u00b50 + 2\u00b52 0 Rc r \u0013\u22121 , (C.1) where \u02d9 Menv is the envelope-to-disk mass-accretion rate, \u00b50 = cos \u03b80 is the initial polar angle of a streamline trajectory out to r \u2192\u221e, \u00b5 = cos \u03b8 is the polar angle along the streamline trajectory, Rc is the centrifugal radius where the infalling material has sufficient angular momentum to maintain an orbit the central protostar. We take the simplified case in the mid-plane of the inner envelope, where \u03b80 = \u03b8 = 90\u25e6, which simplifies the equation to \u03c1env(r) = \u02d9 Menv 4\u03c0 \u00002GM\u22c6r3\u0001\u22121/2 \u0012 1 + 2Rc r \u0013\u22121 . (C.2) We show the best-fit disk and envelope density profiles from Tobin et al. (2020b) in Figure C.1. We see a clear difference in densities between the disk and envelope (density jump). We plug in \u02d9 Menv = 3.2 \u00b1 0.6 \u00d7 10\u22125 M\u2299yr\u22121 (no error bars are reported, so we assume a 20% error), M\u22c6= 2.5 \u00b1 0.2 M\u2299, r = Rc = Rd = 94.4 \u00b1 12.6 au (Tobin et al. 2020b), we find \u03c1env(r) = 3.8 \u00b1 1.2 \u00d7 10\u221216 g cm\u22123. Using our new inner envelope density, we re-apply the same steps as in Section 2.3.1 to scale the magnetic field from the core-scale density using the C04 method. This gives us newly estimated magnetic field strength of Btot,e = 9.2 \u00b1 2.8 mG. We now plug in the values of \u03b4r = 0.8, \u03b4\u03d5 = 1.0, Cs = 833.0+19.5 \u221219.5 m s\u22121, Rd = 94.4+12.6 \u221212.6 au, M\u22c6= 2.5+0.2 \u22120.2 M\u2299, Md = 0.035+0.005 \u22120.003 M\u2299 \u03c1d = 3.8+1.2 \u22121.2 \u00d7 10\u221216 g cm\u22123 B\u03d5 = 9.2+2.8 \u22122.8 mG, into the ambipolar diffusivity coefficient equation (Equation 3) to obtain \u03b7AD = 1.7+1.2 \u22121.2 \u00d7 1019 cm2 s\u22121 = 2.4+1.6 \u22121.6 \u00d7 10\u22121 s. The dimensionless Els\u00a8 asser number is estimated to be AM = 1.7+0.8 \u22120.8 (C04 method). These results are indistinguishable from the values calculated using the disk edge quantities. Even though the envelope density is estimated to be an order of magnitude lower than the disk edge, the magnetic field strength is also lower as a result. Since \u03b7AD has a dependence on \u223c\u03c1 and \u223cB\u22122 \u03d5 , the values end up offsetting each other to give similar estimates. This shows that either the disk or envelope density can be used interchangeably to obtain a value for the ambipolar diffusivity coefficient. We again check whether B\u03d5 > Bz using Equation A.5, and find Bz \u22481.38 mG. Thus, B\u03d5 is still the dominant component, although it is more comparable to Bz in this case. D. MAGNETIC FIELD STRENGTH ESTIMATION Recently, Lee et al. (2024) derive a new analytical expression to describe how the magnetic field should scale with density inside a collapsing protostellar envelope for the first time. Considering the two density regimes ad the core and disk scales, their magnetic field density relation (Equation C7 in their paper) can be simplified \f18 Thieme et al. to B0,d = B0,c \u0012M\u22c6+ Md Mc \u00130.25 \u0012\u03c1d \u03c1c \u00130.525 , (D.1) which can also be used to scale the magnetic field strength down to inner envelope/protostellar disk density regimes (hereafter, referred to as the L24 method). 
Here $B_0$ has the same meaning as in Appendix B and characterizes the magnetic flux threading the disk. We assume the case of a vertical magnetic field, since the effects of inclination on our estimates are minimal, which gives $B_0 \approx B_z$. Plugging in our known values of $B_\mathrm{tot,c}$, $M_\star$, $M_d$, $\rho_d$ and $\rho_c$, we estimate $B_\mathrm{0,d} \approx B_z = 17.6^{+6.3}_{-6.2}$ mG. This is compatible with the values calculated in the main text to within an order of magnitude. The remaining discrepancy stems from differences in the model assumptions, which require further examination but are beyond the scope of this work.",
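For applying the L24 scaling to other sources, Eq. (D.1) reduces to a one-line function. The sketch below is illustrative only: the function name is ours, and the core-scale inputs ($B_\mathrm{0,c}$, $M_c$, $\rho_c$) are whatever values are adopted in the main text, so the usage line leaves them as placeholders rather than inventing numbers.

```python
def b_field_l24(B0_core, M_star_disk, M_core, rho_disk, rho_core):
    """Scale the core-scale field strength to disk/inner-envelope densities
    following the simplified Lee et al. (2024) relation (Eq. D.1):
        B_0,d = B_0,c * (M / M_c)**0.25 * (rho_d / rho_c)**0.525
    Quantities keep whatever consistent units they are supplied in."""
    return B0_core * (M_star_disk / M_core) ** 0.25 * (rho_disk / rho_core) ** 0.525

# Hypothetical usage (core-scale inputs must come from the observations in the main text):
# B0_d = b_field_l24(B0_core=..., M_star_disk=2.535, M_core=..., rho_disk=3.8e-16, rho_core=...)
```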
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.11795v1", |
| "title": "Prompt-Driven Feature Diffusion for Open-World Semi-Supervised Learning", |
| "abstract": "In this paper, we present a novel approach termed Prompt-Driven Feature\nDiffusion (PDFD) within a semi-supervised learning framework for Open World\nSemi-Supervised Learning (OW-SSL). At its core, PDFD deploys an efficient\nfeature-level diffusion model with the guidance of class-specific prompts to\nsupport discriminative feature representation learning and feature generation,\ntackling the challenge of the non-availability of labeled data for unseen\nclasses in OW-SSL. In particular, PDFD utilizes class prototypes as prompts in\nthe diffusion model, leveraging their class-discriminative and semantic\ngeneralization ability to condition and guide the diffusion process across all\nthe seen and unseen classes. Furthermore, PDFD incorporates a class-conditional\nadversarial loss for diffusion model training, ensuring that the features\ngenerated via the diffusion process can be discriminatively aligned with the\nclass-conditional features of the real data. Additionally, the class prototypes\nof the unseen classes are computed using only unlabeled instances with\nconfident predictions within a semi-supervised learning framework. We conduct\nextensive experiments to evaluate the proposed PDFD. The empirical results show\nPDFD exhibits remarkable performance enhancements over many state-of-the-art\nexisting methods.", |
| "authors": "Marzi Heidari, Hanping Zhang, Yuhong Guo", |
| "published": "2024-04-17", |
| "updated": "2024-04-17", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI", |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "Prompt-Driven Feature Diffusion for Open-World Semi-Supervised Learning", |
| "main_content": "Introduction Semi-supervised learning (SSL) has been widely studied as a leading technique for utilizing abundant unlabeled data to reduce the reliance of deep learning models on extensively labeled datasets [Tarvainen and Valpola, 2017; Laine and Aila, 2017]. Traditional SSL methodologies, operate under a crucial yet often unrealistic assumption: the set of classes encountered during training in the labeled set is exhaustive of all possible categories in the dataset [Zhu, 2005]. This assumption is increasingly misaligned with the dynamic and unpredictable nature of real-world data, where new classes can emerge without being labeled, creating a critical gap in the model\u2019s knowledge and adaptability [Bendale and Boult, 2015]. This gap underscores the necessity for an Open-World SSL (OW-SSL) setup [Cao et al., 2022], where the unlabeled data are not only from the classes observed in the labeled data but also cover novel classes that are previously unseen. The investigation of OWSSL is essential for maintaining the efficacy and relevance of machine learning models in real-world applications, where encountering new classes is not an exception but a norm. Diffusion models (DM), initially inspired by thermodynamics [Sohl-Dickstein et al., 2015], have gained significant popularity, particularly in the realm of generative models [Yang et al., 2023; Luo, 2022]. Their application has yielded remarkable success, outperforming established generative models like Variational Autoencoders (VAEs) [Kingma and Welling, 2013] and Generative Adversarial Networks (GANs) [Goodfellow et al., 2014], especially in the domain of image synthesis [Rombach et al., 2022]. Ongoing developments in DM have led to advancements such as higher-resolution image generation [Ho et al., 2020], accelerated training processes [Song et al., 2021], and reduced computational costs [Rombach et al., 2022]. Beyond image generation, recent efforts on diffusion models explore their application in image classification, incorporating roles as a zero-shot classifier [Clark and Jaini, 2023; Li et al., 2023], integration into SSL frameworks [You et al., 2023; Ho et al., 2023], and enhancing image classification within meta-training phases [Du et al., 2023]. This highlights the considerable extensibility of diffusion models. In this paper, we introduce a novel Prompt-Driven Feature Diffusion (PDFD) approach for Open-World Semi-Supervised Learning (OW-SSL), specifically designed to overcome the inherent challenges associated with the absence of labeled instances for novel classes in OW-SSL. Our approach harnesses the strengths of diffusion models to enhance effective feature representation learning from labeled and unlabeled data through instance feature denoising guided by predicted class-discriminative prompts. Recognizing the computational demands of traditional diffusion processes, the adopted featurelevel diffusion strategy offers enhanced efficiency and scalability compared to its image-level counterpart. Furthermore, feature-level diffusion operates in a representation space where the data is typically more abstract and generalizable, allowing the model to utilize the organized information present in labeled data and simultaneously adapt to new classes found within unlabeled data. A key aspect of PDFD is using class prototypes as prompts for the diffusion process. 
This choice is motivated by the generalizability of prototypes to novel, unarXiv:2404.11795v1 [cs.LG] 17 Apr 2024 \fseen classes, helping knowledge transfer from seen classes to unseen classes which is crucial in OW-SSL. Furthermore, we incorporate a distribution-aware pseudo-label selection strategy during semi-supervised training, ensuring proportionate representation across all classes. In addition, PDFD uses a class-conditional adversarial learning loss [Mirza and Osindero, 2014] to align the prompt-driven features generated by the diffusion process with class-conditional real data features, reinforcing the guidance of class prototypes for the diffusion process. This integration effectively bridges SSL classification and adversarial learning, leveraging the diffusion model to enhance the fidelity of feature representation in relation to specific classes. To empirically validate our approach, we conduct extensive experiments across multiple benchmarks in SSL, Open Set SSL, Novel Class Discovery (NCD), and OWSSL. The results demonstrate that the proposed PDFD model not only outperforms various comparison methods but also achieves state-of-the-art performance in these domains. The key contributions of this work can be summarized as follows: \u2022 We introduce a novel Prompt-Driven Feature Diffusion (PDFD) approach for OW-SSL, which enhances the fidelity and generalizability of feature representation for respective classes by leveraging the strengths of diffusion models with properly designed prompts. \u2022 We deploy a class-conditional adversarial loss to support feature-level diffusion model training, strengthening the guidance of class prototypes for the diffusion process. \u2022 We utilize a distribution-aware pseudo-label selection strategy, ensuring balanced class representation within an SSL framework, while class-prototypes are computed on selected instances based on prediction reliability. \u2022 Our comprehensive empirical results demonstrate the superiority of PDFD over a range of SSL, Open-Set SSL, NCD, and OW-SSL methodologies. 2 Related Works 2.1 Semi-Supervised Learning Traditional Semi-Supervised Learning (SSL) Traditional SSL has focused on training with both labeled and unlabeled data from seen classes, and classifying unseen test examples into these ground-truth classes. Deep SSL, which applies SSL techniques to deep neural networks, can be categorized into entropy minimization methods such as ME [Grandvalet and Bengio, 2004], consistency regularization methods such as Tempral-Ensemble [Laine and Aila, 2017] and Mean-Teacher [Tarvainen and Valpola, 2017], and holistic methods like FixMatch [Sohn et al., 2020], MixMatch [Berthelot et al., 2019] and ReMixMatch [Berthelot et al., 2020]. However, these approaches face challenges when training data includes unlabeled examples from unseen classes. Open-Set Semi-Supervised Learning Open-set SSL enhances conventional SSL by recognizing the existence of unseen class examples within the training data while maintaining the premise that unseen classes in the test examples are supposed to just be detected as outliers. The primary aim in this context is to diminish the detrimental impact that data from unseen classes might have on the classification performance of seen classes. To tackle this unique challenge, several recent methodologies have employed distinctive strategies for managing unseen class data. 
Specifically, DS3L [Guo et al., 2020] addresses this issue by assigning reduced weights to unlabeled data from unseen classes, while CGDL [Sun et al., 2020] focuses on improving data augmentation and generation tasks by leveraging conditional constraints to guide the learning and generation process. OpenMatch [Cao et al., 2022] employs one-vs-all (OVA) classifiers for determining the likelihood of a sample being an inlier, setting a threshold to identify outliers. However, a common limitation of these approaches is their inability to classify examples from unseen classes. Novel Class Discovery (NCD) In this setting, training data contains labeled examples from seen classes and unlabeled examples from novel unseen classes. Distinct from open-set SSL, NCD aims to recognize and classify both seen and unseen classes in the test set. This problem set-up, first introduced in [Han et al., 2019b], has developed into various methodologies, primarily revolving around a two-step training strategy. Initially, an embedding is learned from the labeled data, followed by a fine-tuning process where clusters are assigned to the unlabeled data [Hsu et al., 2018; Han et al., 2019b; Fini et al., 2021]. A key feature in NCD is the use of the Hungarian algorithm [Kuhn, 1955] for aligning classes in the labeled data. For instance, Deep Transfer Clustering (DTC) [Han et al., 2019b] harnesses deep learning techniques for transferring knowledge between labeled and unlabeled data, aiding in the discovery of novel classes. Another approach, RankStats [Han et al., 2019a], utilizes statistical analysis of data features to identify new classes. Open World Semi-Supervised Learning Distinct from NCD, OW-SSL encompasses labeled training data from the seen classes and unlabeled training data from both the seen and novel unseen classes, offering the capacity of exploiting the abundant unlabeled data from seen classes that are frequently available in real-world applications. As it has just been introduced recently [Cao et al., 2022], the potentials of OW-SSL have yet to be fully explored, and very few methods have been developed to address its unique challenges. ORCA [Cao et al., 2022] implements a cross-entropy loss function with an uncertainty-aware adaptive margin, aiming to reduce the disproportionate impact of the seen (known) classes during the initial phases of training. NACH [Guo et al., 2022] brings instances of the same class in the unlabeled dataset closer together based on inter-sample similarity. 2.2 Diffusion Models Diffusion Probabilistic Models (DMs) Originating from principles in thermodynamics, the stochastic diffusion processes were first introduced to data generation in DMs [SohlDickstein et al., 2015]. A notable advancement in recent research is Denoising Diffusion Probabilistic Models (DDPMs) proposed in [Ho et al., 2020]. DDPMs introduce a noise network that learns to predict a series of noise, enhancing the efficiency of DMs in generating high-quality image samples. Additionally, Denoising Diffusion Implicit Models (DDIM) were introduced, building upon DDPMs by incorporating a \fnon-Markovian diffusion process, resulting in an acceleration of the generative process [Song et al., 2021]. Latent Diffusion Models (LDMs) extend the diffusion process to the latent space, enabling DMs to be trained with more efficiency and on limited computational resources [Rombach et al., 2022]. 
They also introduced a cross-attention mechanism to DMs, providing the ability to incorporate conditional information in image generation. Nevertheless, training diffusion models in generating images is computationally intensive. Diffusion Models on Image Classification Diffusion Models on Image Classification is a newly emerging area that explores the potential of applying diffusion models to classification tasks. Both [Clark and Jaini, 2023] and [Li et al., 2023] consider the diffusion model as a zero-shot classifier. [Clark and Jaini, 2023] exploits pre-trained diffusion models and CLIP [Radford et al., 2021]. This approach involves generating image samples using text input, scoring, and classifying the image samples. Meanwhile, [Li et al., 2023] classifies image samples within the noise space. Exploring the application of diffusion models in semi-supervised learning tasks, [Ho et al., 2023] learns image classifier using pseudo-labels generated from the diffusion models. [You et al., 2023] uses the diffusion model as a denoising process to obtain bounding box outputs for pseudo-label generation in semi-supervised 3D object detection. [Du et al., 2023] introduces the concept of prototype-based meta-learning to diffusion models in image classification. During the meta-training phase, it leverages a task-guided diffusion model to gradually generate prototypes, providing efficient class representations. 3 Method 3.1 Problem Setup We consider the following OW-SSL setting. The training data comprises a labeled set Dl = {(xl i, yl i)}N l i=1 with N l instances, each paired with a corresponding one-hot label vector yl i , and an unlabeled set Du = {(xu i )}N u i=1 with N u instances. The set of classes present in the labeled set are referred to as seen classes, denoted as Ys, while the unlabeled data are sampled from a comprehensive set of classes Y, which includes both the seen classes Ys and additional unseen novel classes Yn, such that Y = Ys \u222aYn. The core challenge of OW-SSL is to learn a classifier from the training data that can accurately categorize an unlabeled test instance to any class in Y. We aim to learn a deep classification model that comprises a feature extractor f, parameterized by \u03b8feat, which maps the input data samples from the original input space X into a high level feature space Z, and a linear probabilistic classifier h, parameterized by \u03b8cls. The collective parameters of the deep classification model (h \u25e6f) are represented by \u03b8 = \u03b8feat \u222a\u03b8cls. 3.2 Diffusion Model Preliminaries Diffusion probabilistic models, often simply referred to as \u201cdiffusion models\u201d [Sohl-Dickstein et al., 2015; Ho et al., 2020], are a type of generative model characterized by a distinct Markov chain framework. The diffusion model comprises two primary processes: the forward process and the reverse process. The forward process (diffusion process) consists of a forward diffusion sequence, denoted by q(xt|xt\u22121), which represents a Markov chain that incrementally introduces Gaussian noise at each timestep t, starting from an initial clean sample (e.g., image) x0 \u223cq(x0). The forward diffusion process is described mathematically as: q(xT |x0) := T Y t=1 q(xt|xt\u22121), (1) where each step is defined via a Gaussian distribution: q(xt|xt\u22121) := N(xt; (1 \u2212\u03b2t)xt\u22121, \u03b2tI), (2) with \u03b2t representing a predefined variance schedule. 
By introducing \u03b1t := 1 \u2212\u03b2t and \u00af \u03b1t := Qt s=1 \u03b1s, one can succinctly express the diffused sample at any timestep t as: xt = \u221a\u00af \u03b1tx0 + \u221a 1 \u2212\u00af \u03b1t\u03f5, (3) where \u03f5 is a standard Gaussian noise, \u03f5 \u223cN(0, I). Due to the intractability of directly reversing the forward diffusion process, q(xt\u22121|xt), the model is trained to approximate this reverse process through parameterized Gaussian transitions, denoted as p\u03d5(xt\u22121|xt), with \u03d5 as the model parameters. Consequently, the reverse diffusion is modeled as a Markov chain starting from a noise distribution xT \u223cN(0, I), and is defined as: p\u03d5(x0:T ) := p\u03d5(xT ) T Y t=1 p\u03d5(xt\u22121|xt), (4) where the transition probabilities are given by: p\u03d5(xt\u22121|xt) = N(xt\u22121; \u00b5\u03d5(xt, t), \u03c32 t I), (5) with \u00b5\u03d5(xt, t) = 1 \u221a\u03b1t \u0012 xt \u22121 \u2212\u03b1t \u221a1 \u2212\u00af \u03b1t \u03be\u03d5(xt, t) \u0013 (6) where \u03be is the diffusion model parameterized by \u03d5, predicting the added noise. In this context, the diffusion model is trained using an objective function defined as follows: L\u03d5 = Et,x0,\u03f5 h\r \r\u03f5 \u2212\u03be\u03d5 \u0000\u221a\u00af \u03b1tx0 + \u221a 1 \u2212\u00af \u03b1t\u03f5, t \u0001\r \r2i (7) 3.3 Proposed Method In this section, we outline the proposed Prompt-Driven Feature Diffusion (PDFD) approach for OW-SSL. We present the method within a semi-supervised learning framework, where cross-entropy losses on the labeled data and the dynamically selected unlabeled data are jointly minimized. The key aspect of PDFD is to jointly train a feature-level diffusion model with class prototypes as prompts and the classification model through the minimization of a diffusion loss. This component is crucial for enhancing SSL by leveraging the strengths of diffusion models, ensuring semantic distinction and generalizability from the seen to the unseen classes. Furthermore, we incorporate a class-conditional adversarial loss to align the generated data from the diffusion model with the pseudo-labeled real data in the feature space Z, improving the alignment of feature representation for respective classes. The overall framework of PDFD is shown in Figure 1. Further elaboration will be provided below. \f... \u00a0 Prompt-Driven Feature-Level Diffusion SSL with Dynamic Pseudo-Label Selection Class-Conditional Adversarial Alignment Labeled Image Unlabeled Image Figure 1: The proposed PDFD framework trained on Dl, Du. The feature encoder f takes as input the labeled data and unlabeled data to generate their learned embeddings. The embeddings of the labeled and unlabeled samples are used to calculate the class prototypes which are used as prompts for the diffusion model. The diffusion model, guided by the loss Ldiff, predicts the noise \u03be\u03d5 from noisy features. Concurrently, the classifier h and encoder f are trained, aiming to minimize the supervised loss Ll ce and the pseudo-labeling loss Lu ce. Additionally, a class conditional-adversarial training component is integrated, wherein the generator \u03be\u03d5 aims to produce feature representations that successfully mislead the discriminator D\u03c8, assessed by the adversarial loss Ladv, into categorizing them as real features. 
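As a concrete reference for the diffusion preliminaries above (Eqs. 1-7), the closed-form forward noising step and the noise-prediction training objective can be written in a few lines. This is a generic DDPM sketch in PyTorch under the stated schedule, not the authors' implementation; `model` is a stand-in for any network $\xi_\phi(x_t, t)$ that predicts the added noise.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # pre-defined variance schedule beta_t
alphas = 1.0 - betas                         # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)    # bar(alpha)_t = prod_s alpha_s

def diffuse(x0, t):
    """Closed-form forward process (Eq. 3): x_t = sqrt(ab_t) x_0 + sqrt(1 - ab_t) eps."""
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    return xt, eps

def ddpm_loss(model, x0):
    """Noise-prediction objective (Eq. 7): || eps - xi_phi(x_t, t) ||^2."""
    t = torch.randint(0, T, (x0.size(0),))
    xt, eps = diffuse(x0, t)
    return F.mse_loss(model(xt, t), eps)
```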
SSL with Dynamic Pseudo-Label Selection We perform semi-supervised learning over the entire class set Y by minimizing the cumulative loss over the labeled and unlabeled training data to learn the parameters \u03b8 of the classification model. For the labeled data in Dl, we employ the following standard cross-entropy loss: Ll ce(\u03b8) = E(xl i,yl i)\u2208Dl[\u2113ce(yl i, h\u03b8cls(f\u03b8feat(xl i)))] (8) where \u2113ce denotes the cross-entropy loss function. For the unlabeled data in Du, we initially produce their pseudo-labels using K-means clustering. Then in each following training iteration, the current classification model is utilized to predict the pseudo-labels of each unlabeled instance xu i as follows: b yi = h\u03b8cls(f\u03b8feat(b xu i )) (9) where b yi denotes a soft pseudo-label vector\u2014i.e., the predicted class probability vector with length |Y|; b xu i denotes a weakly augmented version of instance xu i . By using weak augmentation, we aim to capture the underlying structure of the unlabeled data while minimizing the impact of potential noise or distortions. The corresponding one-hot pseudo-label vector e yi can be produced from b yi by setting the entry with the largest probability as 1 while keeping other entries as 0s. Moreover, in order to minimize the impact of noisy pseudolabels and ensure a proportionate representation of all classes in the unlabeled data, we propose to dynamically select confident pseudo-labels to produce a distribution-aware subset of pseudo-labeled instances for model training. Specifically, for each class c \u2208Y, we choose a subset of instances, Cc, with confidently predicted pseudo-labels via a threshold \u03c4: Cc = {xu i \u2208Du|1 \u0000max(b yi)>\u03c4 \u2227arg maxj b yij =c \u0001 } (10) where the indicator function 1 \u0000\u00b7) presents the condition for instance selection. The minimum number of instances selected for each class can then be determined as Nm = minc |Cc|. To ensure a well-proportioned consideration for all the classes Y, we finally choose the top Nm instances from each pre-selected subset Cc based on the predicted pseudolabel scores, max(b yi), and form a selected pseudo-labeled set Q = {(xi, e yi), \u00b7 \u00b7 \u00b7 } with size Nm \u00d7 |Y|. The training loss on the unlabeled data is then computed as the cross-entropy loss on the confidently pseudo-labeled instances in Q: Lu ce(\u03b8) = E(xi,\u02dc yi)\u2208Q[\u2113ce(\u02dc yi, h\u03b8cls(f\u03b8feat(xu i )))] (11) Class-Prototype Computation Prior to introducing the key feature-level diffusion component, we first compute the class-prototypes that will be adopted as essential prompts for guiding the diffusion process. In particular, class prototypes are derived from the feature embeddings produced from the deep feature extractor f based on the (predicted) class labels. They hence encapsulate the core characteristics of classes in the high level semantic feature space Z that are generalizable to novel categories. For the seen classes in Ys, we calculate the class prototypes as average feature representations of the labeled data for each class, providing a stable reference point for the whole class set Y. Specifically for each class s \u2208Ys, we compute its class prototype vector ps as follows: ps = E(xl i,yl i)\u2208Dl \u0002 1 \u0000arg maxj yl ij = s \u0001 f\u03b8feat(xl i) \u0003 (12) \fwhere the indicator function 1(\u00b7) selects the instances that satisfy the given conditions\u2014belonging to class s in this case. 
For the unseen novel classes in Yn, the prototypes are computed differently to account for the uncertainty during the discovery of new classes on unlabeled data. Specifically, for each class n \u2208Yn, its class prototype vector pn is computed as the average feature representation of the unlabeled instances whose pseudo-labels are confidently predicted as class n: pn = Exu\u2208Du \u0002 1 \u0000max(b yi) > \u03c4 \u2227arg maxj b yij = n \u0001 f\u03b8feat(xu i ) \u0003 (13) where the threshold \u03c4 is used to filter out non-confident predictions and reliably identify novel unseen classes in the unlabeled data. By putting all these class prototypes together, we can form a class prototype matrix P = [p1, \u00b7 \u00b7 \u00b7 , p|Y|], whose each column contains a class prototype vector. Prompt-Driven Feature-Level Diffusion Traditional diffusion processes, while powerful, are often computationally intensive and time-consuming, particularly when applied directly to high-dimensional data such as images. By transposing the diffusion process to the feature level, we significantly reduce the computational burden, enabling faster training of the diffusion model and scalability of PDFD to large datasets. In addition, feature-level diffusion focuses on the high-level representation space where the data is often more abstract and generalizable, while the semantic aspects of the data captured in this space are more relevant and informative for classification. Image-level diffusion might inadvertently emphasize pixel-level details that are less important for understanding the underlying class or concept. By operating at the feature level, the model can leverage the global and structural information to distinguish novel classes from seen classes. To leverage the strengths of the diffusion model for class distinction and recognition, we introduce the class prototypes as an additional input to the standard diffusion model \u03be\u03d5, functioning as class-distinctive prompts for feature diffusion. Specifically, the model is tasked with predicting the added noise \u03f5 based on a noisy input feature vector, the class-specific prompt, and the current time step t: \u03be\u03d5 = \u03be\u03d5(\u221a\u00af \u03b1tf\u03b8feat(xi) + \u221a 1 \u2212\u00af \u03b1t\u03f5, P \u00b7 1ci, t) (14) where 1ci denotes a one-hot vector that indicating the predicted class of the corresponding input xi, such that ci = arg maxj h\u03b8cls(f\u03b8feat(xi))[j]; while P \u00b7 1ci chooses the corresponding class prototype vector as the prompt input. Same as in the standard diffusion model, the term \u00af \u03b1t is a pre-defined variance schedule, and \u03f5 is a noise variable sampled from the normal distribution. Following [Du et al., 2023], we employ a transformer-based diffusion model for \u03be\u03d5. We jointly train the diffusion model \u03d5 and the classification model \u03b8 (feature extractor \u03b8feat and classifier \u03b8cls) over all the labeled and unlabeled training instances by minimizing the following diffusion loss: Ldiff(\u03d5, \u03b8) = Exi\u2208Dl\u222aDuEt\u223c[0:T ][\u2225\u03f5 \u2212\u03be\u03d5\u22252] (15) The loss essentially measures the discrepancy between the added noise \u03f5 and the prediction of the generative diffusion model, guiding both the feature extractor and the diffusion model to produce feature representations that are coherent with the class prototypes and therefore suitable for both seen and unseen class identification. 
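To make the prompt construction concrete, the sketch below shows how the class-prototype matrix P (Eqs. 12-13) and the prompt-conditioned diffusion loss (Eqs. 14-15) could be assembled. It is an illustrative PyTorch sketch, not the released PDFD code; `denoiser` is a stand-in for the transformer noise predictor $\xi_\phi$, and `alpha_bars` is the cumulative product of $(1 - \beta_t)$ as in the previous sketch.

```python
import torch
import torch.nn.functional as F

def class_prototypes(feats_l, y_l, feats_u, probs_u, num_seen, num_all, tau=0.5):
    """Prototype matrix P with one row per class.
    Seen classes (Eq. 12): mean labeled feature per class.
    Unseen classes (Eq. 13): mean of unlabeled features whose pseudo-label is
    confident (max probability > tau) and assigned to that class."""
    d = feats_l.size(1)
    P = torch.zeros(num_all, d, device=feats_l.device)
    for c in range(num_seen):
        mask = (y_l == c)
        if mask.any():
            P[c] = feats_l[mask].mean(dim=0)
    conf, pred = probs_u.max(dim=1)
    for c in range(num_seen, num_all):
        mask = (pred == c) & (conf > tau)
        if mask.any():
            P[c] = feats_u[mask].mean(dim=0)
    return P

def prompt_diffusion_loss(denoiser, feats, probs, P, alpha_bars):
    """Prompt-driven feature diffusion loss (Eqs. 14-15): noise a feature vector,
    condition the denoiser on the prototype of the predicted class, regress the noise."""
    t = torch.randint(0, alpha_bars.numel(), (feats.size(0),), device=feats.device)
    ab = alpha_bars[t].unsqueeze(1)
    eps = torch.randn_like(feats)
    noisy = ab.sqrt() * feats + (1.0 - ab).sqrt() * eps
    prompts = P[probs.argmax(dim=1)]   # P . 1_{c_i}: prototype of the predicted class
    return F.mse_loss(denoiser(noisy, prompts, t), eps)
```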
Class-Conditional Adversarial Alignment The data generation in our PDFD model is depicted through a reverse diffusion process, where we transform a random noise vector \u03f5 in a sequence of T steps into meaningful feature vectors in the high-level feature representation space Z, guided by a class prototype based prompt. The process is mathematically represented as: zt\u22121 = ( \u03f5 if t = T, 1 \u221a\u03b1t \u0010 zt\u22121\u2212\u03b1t \u221a1\u2212\u00af \u03b1t \u00b7 \u03be\u03d5(zt, pc, t) \u0011 if t < T (16) where zt denotes the diffused feature embedding vector at time step t. For simplicity, we define this reverse diffusion process as a generative function g\u03d5(\u03f5, T, pc), which takes the initial noise vector \u03f5, the total number of time steps T, and the prompt pc as inputs, and generates a diffused clean feature vector z0: z0 = g\u03d5(\u03f5, T, pc) (17) Here, g conveniently encapsulates the iterative reverse diffusion process, transforming the initial noise \u03f5 into the refined feature representation z0 through a sequence of T steps of transformations governed by the specified prompt and the dynamics of the diffusion process in Eq.(16). In advancing our model\u2019s robustness and diffusion capacity, we propose to align the generated feature vectors with the unlabeled real training data in the high-level feature space Z through a class-conditional adversarial loss defined as follows: Ladv(\u03d5, \u03c8) = Ex\u223cDu[log D\u03c8(f\u03b8feat(x), \u02dc y)] + E\u03f5\u223cN (0,I),c\u223cY[log(1 \u2212D\u03c8(g\u03d5(\u03f5, T, pc), 1c))], (18) where D\u03c8 is a class-conditional discriminator parameterized by \u03c8, which tries to maximumly distinguish the feature vectors of the real data from the generated feature vectors using the reverse diffusion process given the conditional one-hot label vector. This adversarial loss is tailored to refine the model\u2019s ability to generate class-specific features. By playing a minimax adversarial game between the diffusion model \u03d5 and the discriminator \u03c8, min \u03d5 max \u03c8 Ladv(\u03d5, \u03c8), (19) this class-conditional adversarial alignment loss encourages the diffusion model to generate features that are indistinguishable from real data features, enhancing the fidelity of feature representation w.r.t respective classes across both the seen and unseen classes in Y. Joint Training of PDFD Incorporating the SSL losses on both labeled and unlabeled data sets, alongside the diffusion and adversarial losses, we formulate the joint training objective for our PDFD model as follows: min \u03b8,\u03d5 max \u03c8 Ltr = Ll ce + \u03b3uLu ce + \u03b3diffLdiff + \u03b3advLadv (20) where \u03b3u, \u03b3diff and \u03b3adv are trade-off hyper-parameters. \fTable 1: Classification accuracy (%) on CIFAR-10, CIFAR-100, and ImageNet-100. 
Classes Dataset SSL Open-Set SSL NCD Open-World SSL Fixmatch DS3L CGDL DTC RankStats ORCA NACH PDFD (ours) Seen CIFAR-10 71.5 77.6 72.3 53.9 86.6 88.2 89.5 90.2 CIFAR-100 39.6 55.1 49.3 31.3 36.4 66.9 68.7 70.2 ImageNet-100 65.8 71.2 67.3 25.6 47.3 89.1 91.0 91.3 Average 59.0 68.0 63.0 36.9 56.8 81.4 83.1 83.9 Unseen CIFAR-10 50.4 45.3 44.6 39.5 81.0 90.4 92.2 93.1 CIFAR-100 23.5 23.7 22.5 22.9 28.4 43.0 47.0 49.5 ImageNet-100 36.7 32.5 33.8 20.8 28.7 72.1 75.5 76.1 Average 36.9 33.9 33.6 27.7 46.0 68.5 71.6 72.9 All CIFAR-10 49.5 40.2 39.7 38.3 82.9 89.7 91.3 92.1 CIFAR-100 20.3 24.0 23.5 18.3 23.1 48.1 52.1 52.9 ImageNet-100 34.9 30.8 31.9 21.3 40.3 77.8 79.6 80.6 Average 34.9 31.7 31.7 26.0 48.8 71.9 74.3 75.2 4 Experiments 4.1 Experimental Setup Datasets We evaluate our model using established benchmarks in image classification: CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], and a subset of ImageNet [Deng et al., 2009]. The chosen ImageNet subset encompasses 100 classes, given its expansive size. Each dataset is partitioned such that the first 50% of the classes are considered \u2019seen\u2019 and the rest as \u2019novel\u2019. For these seen classes, we label 50% of the samples and the remainder constitutes the unlabeled set. The results presented in this study were obtained from evaluations on an unseen test set, which comprises both previously seen and novel classes, ensuring a comprehensive assessment of the model\u2019s performance. We repeated all experiments for 3 runs and reported the average results. Experimental Setup Following the compared methods [Cao et al., 2022; Guo et al., 2022], we pretrain our model using simSLR [Chen et al., 2020] method. In our experiments with the CIFAR datasets, we chose ResNet-18 as our primary backbone architecture. The training process involves Stochastic Gradient Descent (SGD) with a momentum value set at 0.9 and a weight decay factor of 5e-4. The training duration is 200 epochs, using a batch size of 512. Only the parameters in the final block of ResNet are updated during the training to prevent overfitting. For the ImageNet dataset, the backbone model selected is ResNet-50 employing standard SGD for training, with a momentum of 0.9 and a weight decay of 1e-4. We train the model for 90 epochs and maintain the same batch size of 512. Across all our experiments, we apply the cosine annealing schedule to adjust the learning rate. Specifically for PDFD we set \u03b3u to 0.5, \u03b3diff to 1, \u03b3adv to 1, \u03c4 to 0.5 and T to 50. Regarding the architecture of the diffusion model, we adopt a transformer-based model in line with the methodology outlined in [Du et al., 2023]. The discriminator consists of three linear layers, with the first two followed by batch normalization and a ReLU activation function. 4.2 Comparison Results We conducted a comprehensive comparison of our PDFD method with various state-of-the-art SSL methods across different settings, including Fixmatch [Sohn et al., 2020] for standard SSL, DS3L [Guo et al., 2020] and CGDL [Sun et al., 2020] for open-set SSL, DTC [Han et al., 2019b] and RankStats [Han et al., 2019a] for NCD, and ORCA [Cao et al., 2022] and NACH [Guo et al., 2022] for OW-SSL. The evaluation included datasets of varying scales, namely CIFAR10, CIFAR-100 [Krizhevsky et al., 2009], using Resnet-18 backbone and ImageNet-100 [Russakovsky et al., 2015] using Resnet-50 backbone. 
The results presented in this study were obtained from evaluations on an unseen test set, which comprises both previously seen and novel classes, ensuring a comprehensive assessment of the model\u2019s performance. The comparative results are presented in Table 1. The results for all classes illustrate method performance in an OW-SSL setting where both seen and unseen classes are included in the test set. Our PDFD method outperforms all comparison methods across all datasets. Notably, on the ImageNet-100 dataset, PDFD exhibits a significant improvement of 1.0% on all classes compared to the previous state-of-the-art method NACH. It also demonstrates a 0.8% margin of improvement over the second-best algorithm on the CIFAR-10 and CIFAR100 datasets. The results show an overall performance increase of 0.9% on the average of all three datasets. We also evaluated the effectiveness of the methods in classifying unseen classes. On unseen classes, PDFD outperforms the previous best method on all three datasets. PDFD performs exceptionally well on the CIFAR-100 dataset, surpassing the secondbest method with a significant improvement of 2.5% on unseen classes. On both CIFAR-10 and ImageNet-100 datasets, PDFD also surpasses the previous best methods, exhibiting a 0.9% and 0.6% increase in overall performance across all three datasets on unseen classes. Despite the special treatment of novel classes in the unlabeled dataset, PDFD also demonstrates strong performance in standard SSL tasks. PDFD outperforms all previous SSL methods, even on standard SSL tasks on seen classes. PDFD exhibits a similar pattern on the CIFAR-100 dataset as on unseen classes, with a significant improvement of 1.5% over the second-best algorithm. PDFD also shows a 0.8% increase compared to the secondbest method in the average classification accuracy across all three datasets, demonstrating the best overall performance. \fTable 2: Ablation Study on the effect of different types of prompt. classification accuracy (%) on CIFAR-100. Prompt Seen Unseen All h\u03b8cls(f\u03b8feat(xi)) 67.2 46.1 50.8 1c 69.2 47.8 52.0 P.1c (PDFD) 70.2 49.5 52.9 Table 3: Ablation Study classification accuracy (%) on CIFAR-100. Seen Unseen All PDFD 70.2 49.5 52.9 \u2212w/o Ll ce 57.6 24.9 45.5 \u2212w/o Lu ce 67.9 45.6 49.3 \u2212w/o Ldiff 67.1 46.4 48.7 \u2212w/o Ladv 68.0 46.9 50.1 \u2212w/o Ladv and Ldiff 66.6 45.2 47.7 \u2212w/o Class condition 68.1 47.1 50.7 4.3 Ablation Study Ablation on different prompts We conducted an ablation study to investigate employing different types of prompts in PDFD. We compared the classification accuracy on the CIFAR-100 dataset with the full PDFD model, which used prototype corresponding to class prediction (P.1c) as prompts, and two ablation variants. (1) \u201ch\u03b8cls(f\u03b8feat(xi))\u201d, which uses raw probability prediction output for the sample and (2) \u201c1c\u201d which uses one-hot encoding of the prediction output from the feature extractor. The results of the ablation study are presented in Table 2. Notably, utilizing prototypes as prompts achieved the highest accuracy among all three variants. Particularly in unseen classes, the use of prototypes significantly improved the classification performance. This finding suggests that class prototypes are a suitable way to implement prompts in our method, especially in enhancing the performance of PDFD in classifying unseen examples. 
Ablation on different components We conducted an ablation study to investigate the impact of different components in PDFD on the overall performance. The study focused on classification accuracy using the CIFAR-100 dataset, with six ablation variants: (1) \u201c\u2212w/o Ll ce\u201d excluding cross entropy loss on labeled data; (2) \u201c\u2212w/o Lu ce\u201d excluding cross entropy loss on unlabeled data; (3) \u201c\u2212w/o Ldiff\u201d excluding diffusion loss, disabling the feature-level diffusion model; (4) \u201c\u2212w/o Ladv\u201d excluding adversarial loss, disabling adversarial training; (5) \u201c\u2212w/o Ladv and Ldiff\u201d excluding both adversarial training and diffusion model; (6) \u201c\u2212w/o Class condition\u201d excludes the prompt in the diffusion model and class condition in adversarial training. The ablation study results are presented in Table 3. PDFD achieves the highest classification accuracy across seen, unseen, and all classes, emphasizing the effectiveness of all model components. Notably, excluding supervised learning (\u201c\u2212w/o Ll ce\u201d) results in the most decreased accuracy. Excluding the diffusion model (\u201c\u2212w/o Ldiff\u201d) significantly lowers accuracy on seen and all classes, emphasizing the importance of this model component. While further excluding adversarial training (\u201c\u2212w/o Ladv and Ldiff\u201d) does not markedly impact seen class accuracy, it does lead to reduced performance on un(a) Confidence difference between seen and unseen classes (b) Accuracy of pseudo-labels for unseen classes. Figure 2: Pseudo-Label Selection Analysis. (a) Confidence difference between seen and unseen classes during the training on CIFAR-100 (b) Effect of distribution-aware pseudo-label selection on learning unseen classes during the training on CIFAR-100. seen and all classes, supporting the goal of adversarial training to learn indistinguishable pseudo-labels for novel classes. The exclusion of cross entropy loss on unlabeled data (\u201c\u2212w/o Lu ce\u201d) results in a dramatic decrease in model performance on unseen and all classes. This finding supports the significance of each component in contributing to the effectiveness of PDFD. Pseudo-Label Selection Analysis Figure 2 illustrates the learning analysis of pseudo-labels throughout the training process. As depicted in subfigure (a), it is evident that the seen classes satisfy the confidence condition earlier than the unseen classes. Consequently, this leads to the under-representation of unseen classes in the initial stages of training, culminating in a suboptimal initialization of the model. This early skew towards seen classes can potentially bias the model\u2019s learning, impacting its ability to effectively recognize and adapt to the characteristics of the unseen classes as training progresses. In subfigure (b), the positive impact of our proposed component, distribution-aware pseudo-label selection, on the learning of unseen classes is visible. This method effectively addresses the initial imbalance observed in the learning process, enhancing the model\u2019s ability to recognize and accurately classify unseen classes. By considering the distribution characteristics of the data, our solution ensures a more equitable representation of classes in the training process, leading to improved model performance and generalization. 
5 Conclusion In this paper, we proposed a novel Prompt-Driven Feature Diffusion (PDFD) approach to address the challenging setting of Open-World Semi-Supervised Learning. The proposed PDFD approach deploys an efficient feature-level diffusion model with class prototypes as prompts, enhancing the fidelity and generalizability of feature representations across both the seen and unseen classes. In addition, a class-conditional adversarial loss is incorporated to support diffusion model training, strengthening the guidance of class prototypes for the diffusion process. Furthermore, we utilized a distribution-aware pseudo-label selection strategy to ensure balanced class representation for SSL and reliable class-prototype computation for the novel classes. We conducted extensive experiments on several benchmark datasets. Notably, our approach demonstrated superior performance over a set of state-of-the-art methods for SSL, open-set SSL, NCD and OW-SSL." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15625v1", |
| "title": "Optimizing OOD Detection in Molecular Graphs: A Novel Approach with Diffusion Models", |
| "abstract": "The open-world test dataset is often mixed with out-of-distribution (OOD)\nsamples, where the deployed models will struggle to make accurate predictions.\nTraditional detection methods need to trade off OOD detection and\nin-distribution (ID) classification performance since they share the same\nrepresentation learning model. In this work, we propose to detect OOD molecules\nby adopting an auxiliary diffusion model-based framework, which compares\nsimilarities between input molecules and reconstructed graphs. Due to the\ngenerative bias towards reconstructing ID training samples, the similarity\nscores of OOD molecules will be much lower to facilitate detection. Although it\nis conceptually simple, extending this vanilla framework to practical detection\napplications is still limited by two significant challenges. First, the popular\nsimilarity metrics based on Euclidian distance fail to consider the complex\ngraph structure. Second, the generative model involving iterative denoising\nsteps is time-consuming especially when it runs on the enormous pool of drugs.\nTo address these challenges, our research pioneers an approach of Prototypical\nGraph Reconstruction for Molecular OOD Detection, dubbed as PGR-MOOD and hinges\non three innovations: i) An effective metric to comprehensively quantify the\nmatching degree of input and reconstructed molecules; ii) A creative graph\ngenerator to construct prototypical graphs that are in line with ID but away\nfrom OOD; iii) An efficient and scalable OOD detector to compare the similarity\nbetween test samples and pre-constructed prototypical graphs and omit the\ngenerative process on every new molecule. Extensive experiments on ten\nbenchmark datasets and six baselines are conducted to demonstrate our\nsuperiority.", |
| "authors": "Xu Shen, Yili Wang, Kaixiong Zhou, Shirui Pan, Xin Wang", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "Optimizing OOD Detection in Molecular Graphs: A Novel Approach with Diffusion Models", |
| "main_content": "INTRODUCTION Molecular representation learning, which transforms molecules into low-dimensional vectors, has emerged as a critical and essential part of many biochemical problems, such as drug property prediction [14, 40] and drug design [21]. For handling the non-Euclidean molecules, graph neural networks (GNNs) have been widely applied to encode both node features and structural information based on message-passing strategy [7]. The embedding vectors of atoms and/or edges are then summarized to represent the underlying molecules and adopted to various downstream tasks [2, 11, 44]. The recent successes of molecular representation learning are often built on the assumption that training and testing graphs are from identical distribution. However, out-of-distribution (OOD) molecular graphs with different scaffolds or sizes, as shown in Fig. 1a, is unavoidable when the model is deployed in real-world arXiv:2404.15625v1 [cs.LG] 24 Apr 2024 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Trovato and Tobin, et al. (a) ID and OOD molecules 0 50 100 150 200 250 300 Iters 0.0 0.2 0.4 0.6 0.8 Accuracy/Loss Train loss OOD test auroc ID test auroc (b) Loss and auroc Figure 1: (a) Illustration of OOD and ID molecules, which have different scaffolds or sizes, or both. (b) Vanilla GCN\u2019s performance declines rapidly when testing on OOD graphs, even though it performs well on ID graphs. scenarios [16]. Taking antibiotics screening as example, the training data consists of drugs inhibiting the growth of Gram-negative pathogens, while the testing data is mixed with antibiotics against Gram-positive ones [24]. Because of the different pharmacological mechanisms in treating bacteria, a reliable drug screening model should not only accurately identify more the in-distribution (ID) samples (e.g., Gram-negative), but also detect \u201cunknown\u201d OOD inputs (e.g., Gram-positive) to avoid misleading predictions during inference. As illustrated in Fig. 1b, a notable decline in GNNs\u2019 prediction accuracy is observed with OOD samples. This highlights the significance of OOD detection, which discerns between ID and OOD inputs, allowing the model to adopt appropriate precautions [13]. Prior arts of graph OOD detection can be roughly grouped into two categories. One line of the existing work aims to leverage the original classifier and fine-tune it to improve its detection ability [22, 26]. The another line is to redesign the scoring function to indicate ID and OOD cases [10, 43]. Nevertheless, these methods inevitably require modifications to the original molecular representation learning model, leading to a trade-off between OOD detection and ID prediction [6]. Recent advancements in computer vision have proposed the use of a diffusion model-based reconstruction approach for the unsupervised OOD detection, which typically involves an auxiliary generative model that approximates the ID distribution to reconstruct the input samples during testing phase [6, 8, 27]. Since the distribution of reconstructed samples is more biased towards ID than OOD, the disparity between original inputs and reconstructed outputs can be used as a judge metric for OOD detection. However, this kind of approach has never been practiced in the field of molecular graphs. We first design a naive model called GR-MOOD as shown in Fig. 2, to verify the feasibility of the reconstruction method for molecular OOD detection and draw a positive conclusion through experiments. 
[Figure 2: Illustration of reconstruction-based OOD detection with the diffusion model. ID and OOD inputs have different similarities with their respective reconstructed graphs, which can be used as a score for OOD detection.]

However, the inherent complexity of molecular graphs, which are characterized by non-Euclidean structures, poses two significant challenges. First, this nature of molecular graphs renders conventional similarity metrics (e.g., Euclidean distance) less effective at quantifying the closeness between original and reconstructed graphs. Meanwhile, different molecules often undergo distribution shifts that include both structural and feature changes, further complicating the assessment of similarity. This leads to Challenge 1: identifying an effective metric to evaluate the similarity between the original input and the reconstruction. More importantly, diffusion models require hundreds or thousands of sampling steps to denoise from a standard normal distribution towards generating new graphs, which introduces additional complexity. Such an extensive requirement becomes impractical, especially when performing reconstruction for a large volume of test samples. This leads to Challenge 2: addressing the additional complexity of the diffusion model required for reconstruction. Thus, we pose a critical research question: how can we adopt the reconstruction method to effectively and efficiently handle the unique properties of molecular graphs for OOD detection? In this paper, we introduce a groundbreaking OOD detection model, Prototypical Graph Reconstruction for Molecular OOD Detection (PGR-MOOD for short). For Challenge 1, concerning the identification of an effective metric for assessing the similarity between the original input and its reconstruction, PGR-MOOD adopts the Fused Gromov-Wasserstein (FGW) distance [35], which utilizes both the structural and feature information of molecular graphs to enhance the measurement of their matching degree. To efficiently address Challenge 2, PGR-MOOD creates a series of prototypical graphs that are close to ID samples and far from OOD ones. This removes the need to reconstruct every test graph: we only compare each test graph's similarities with the prepared prototypical graphs, so the procedure extends to large-scale OOD detection. Our contributions are summarized as follows:
• GR-MOOD Framework: We propose to detect OOD graphs from a novel perspective, i.e., by comparing the original molecules with their reconstructed outputs based on the diffusion model. The technical feasibility and challenges of this new framework are analyzed empirically.
• PGR-MOOD Framework: To overcome the challenges of reconstruction measurement and generation efficiency, we propose a molecular detection method that contains a prototypical graphs generator and a similarity function based on the FGW distance. In the testing phase, one only needs to measure the similarity between the prototypical graphs and the current inputs, identifying OOD samples by their lower values.
• SOTA Experimental Results: We conduct extensive analysis on ten benchmark molecule datasets and compare with six baselines.
PGR-MOOD obtains consistent superiority over other state-of-the-art models, delivering average improvements of 8.54% in AUC and 8.15% in AUPR, a 13.7% reduction in FPR95, and substantial savings in time and memory consumption.

2 RELATED WORK
2.1 Graph Neural Networks Since graph neural networks can use the topological structure and node properties of graphs for representation learning, they have become the most powerful method for processing graph data [1, 5, 45, 46], especially molecular graphs [39, 41]. GCN [18], the simplest but most efficient method, has been proved to be equivalent to a first-order approximation filter on graphs [12] and thus performs well in node classification [11] and link prediction [2]. On graph instance-related tasks, GIN [44] proves that GNNs are as powerful as the 1-WL test and leverages an injective summation operation to increase performance. More and more researchers have proposed increasingly expressive methods, but they all ignore the performance and trustworthiness issues brought by OOD distributions [38, 42].
2.2 Graph Generative Models Graph generative models aim to learn the distribution of the graph data and sample from it to generate novel graphs [47], especially for molecular graphs, since this relates to many scientific problems [15, 20, 32]. Some graph generation methods are inspired by auto-regressive models, such as VAE-based [29] or normalizing flow-based models [19]. However, they are limited by high computational cost and the inability to model the permutation invariance of graphs [17]. Inspired by diffusion models in computer vision [34], the same idea has been developed for graphs in recent years [3, 30, 36]. Although diffusion models achieve state-of-the-art performance, they still suffer from inefficiencies caused by slow denoising processes [23].
2.3 OOD Detection on Graphs Recently, many studies have focused on graph OOD detection due to its importance. GOOD-D is the pioneering work for unsupervised OOD graph detection, which performs hierarchical contrastive learning to capture latent ID patterns and detects OOD graphs based on their semantic inconsistency [26]. GraphDE determines ID and OOD by inferring the environment variables of the graph generation process [22]. AAGOD aims to learn a parameterized amplifier matrix to emphasize the key patterns which are helpful for graph OOD detection, thereby enlarging the gap between OOD and ID graphs [10]. Anomalous graph detection can also be seen as a special case of OOD detection, since anomalous graphs with anomalous structures and features can be caused by distribution shifts, and many methods have been proposed to solve it [28, 31]. All of the above methods require redesigning or training well-performing GNNs on the ID datasets and inevitably lead to a trade-off between OOD detection and ID prediction.

3 PRELIMINARIES We define an undirected graph $G = (A, X)$ with $n$ nodes, where $A \in \mathbb{R}^{n \times n}$ is the adjacency matrix representing the graph topology and $X \in \mathbb{R}^{n \times d}$ is the feature matrix of all nodes with dimensionality $d$. $G$ can also be re-written in Optimal Transport (OT) format [37] as a tuple $(A, X, \mu)$, where $\mu \in \mathbb{R}^{n}$ is a vector of weights modeling the relative importance of the nodes, which we define as a uniform weight $\mathbf{1}_n / n$.
In addition, we define $D_{train}$ as the training dataset, which usually consists of ID graphs, and define $D_{test}$ as the test dataset, which can be divided into an in-distribution subset $D^{in}_{test}$ and an out-of-distribution subset $D^{out}_{test}$.
3.1 Out-of-Distribution Detection For the OOD detection task, we aim to design a detector $g$ to distinguish whether the input graph $G$ is an OOD sample or not:
$$g(G; \tau, J) = \begin{cases} 0 \ (\mathrm{OOD}), & \text{if } J(G) \le \tau, \\ 1 \ (\mathrm{ID}), & \text{if } J(G) > \tau, \end{cases} \quad (1)$$
where $J$ denotes a judging function that scores the input molecules and $\tau$ denotes the threshold for identifying OOD samples. A desired OOD detector should assign judge scores with the maximum gap between ID and OOD samples. This target can be described as the following optimization:
$$\max_{J} \ \mathbb{E}_{G \sim D^{in}_{test}} J(G) - \mathbb{E}_{G \sim D^{out}_{test}} J(G). \quad (2)$$
Supposing the judge score distributions of ID and OOD have significant divergence, we can distinguish them with a simple intermediate threshold. For reconstruction-based OOD detection, as shown in Fig. 2, the similarity between the input and output molecules of a diffusion model FM is often adopted as the judge function:
$$J(G) = \mathrm{sim}(\mathrm{FM}(G), G), \quad (3)$$
where $\mathrm{FM}(G)$ is the reconstructed output and $\mathrm{sim}(\cdot)$ is the similarity function. OOD inputs correspond to lower reconstruction quality and therefore lower similarity, while the similarity measurement is higher for ID inputs.
3.2 Graph Neural Networks Typical GNNs are based on the message-passing paradigm. Specifically, the final representation of a graph $G$ for an $L$-layer GNN is:
$$m^{(L)}_{v} = \mathrm{MP}\big(m^{(L-1)}_{v}, \{ m^{(L-1)}_{u} : u \in N(v) \}\big), \quad (4)$$
$$z_{G} = \mathrm{Pooling}\big(\{ m^{(L)}_{v} \mid v \in G \}\big), \quad (5)$$
where $m^{(0)}_{v} = X_{v}$ is the raw node feature, $N(v)$ represents the set of neighbor nodes of node $v$, and MP is the message-passing process that aggregates neighborhood features (e.g., sum, mean, or max) and combines them with the local node. GNNs iteratively perform MP to learn effective node representations and utilize the Pooling function to map all the node representations into the graph representation, which is a single vector.

[Figure 3: Validation experiments performed on DrugOOD-IC50-Scaffold (left) and DrugOOD-EC50-Assay (right): AUROC, AUPR, and FPR95 scores for MSP, GOOD-D, and GR-MOOD.]

3.3 Graph Generative Model The generative method based on the diffusion model consists of a forward diffusion process and a reverse denoising process. In the forward process, the model progressively adds noise to the original data until it reaches a standard normal distribution. In the reverse process, the model learns a score function (i.e., a neural network) to remove the perturbed noise with the same number of steps [4, 25, 34].
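As a concrete reading of Eqs. (4) and (5) above, the following is a minimal sketch of one simple message-passing instantiation (sum aggregation with a shared linear map, followed by mean pooling). The architecture, function names, and the toy graph are illustrative assumptions, not the specific GNN encoder used in this work.

```python
import numpy as np

def mp_layer(A, H, W):
    """One message-passing step in the spirit of Eq. (4): sum-aggregate
    neighbour states, add the node's own state, then apply a shared
    linear map and a nonlinearity."""
    return np.tanh((A @ H + H) @ W)

def graph_embedding(A, X, weights):
    """Stack L message-passing layers and mean-pool the final node
    states into a single graph vector z_G, as in Eq. (5)."""
    H = X
    for W in weights:
        H = mp_layer(A, H, W)
    return H.mean(axis=0)

# toy usage: a 4-node path graph with 3-dimensional node features, two layers
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
z_G = graph_embedding(A, X, [rng.normal(size=(3, 3)) for _ in range(2)])
```

Any pooling that is invariant to node ordering (sum, mean, or max) could be substituted in the last step without changing the interface.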
Given a graph $G = (A, X)$, we can use continuous time $t \in [0, T]$ to index the diffusion trajectory $\{G_t = (A_t, X_t)\}_{t=1}^{T}$, such that $G_0$ is the original input graph and $G_T$ approximately follows the normal distribution. The forward process transforms $G_0$ to $G_T$ through a stochastic differential equation (SDE):
$$\mathrm{d}G_t = \mathbf{f}_t(G_t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \quad (6)$$
where $\mathbf{w}$ is a standard Wiener process [17], $\mathbf{f}_t(\cdot): \mathcal{G} \rightarrow \mathcal{G}$ is the linear drift coefficient, and $g(t): \mathbb{R} \rightarrow \mathbb{R}$ is a scalar function representing the diffusion coefficient. $\mathbf{f}_t(G_t)$ and $g(t)$ relate to the amount of noise $\mathrm{d}\mathbf{w}$ added to the graph at each infinitesimal step $\mathrm{d}t$. In order to generate graphs that follow the distribution of $G_0$, we start from $G_T$ and utilize a reverse-time SDE for denoising from $T$ to $0$:
$$\mathrm{d}G_t = \big[\mathbf{f}_t(G_t) - g(t)^2 S_{\theta}(G_t, t)\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}, \quad (7)$$
where $S_{\theta}(G_t, t)$ is the score function that estimates the scores of the perturbed graphs $\nabla_{G_t} \log p_t(G_t)$, and $p_t(G_t)$ is the marginal distribution under the forward process at time $t$. In practice, two GNNs are utilized as the score function to denoise both node features and graph structures. $\bar{\mathbf{w}}$ is a reverse-time standard Wiener process.

4 RECONSTRUCTION OF PROTOTYPICAL GRAPH FOR OOD DETECTION In this section, we first propose a naive graph reconstruction method, termed GR-MOOD, to analyze its potential and limitations for molecular graph OOD detection. Then, we propose a novel approach, PGR-MOOD, to reconstruct the prototypical graphs of ID samples for effective and efficient OOD detection.
4.1 GR-MOOD Inspired by the generative methods [6, 27], we design a vanilla graph reconstruction model (GR-MOOD) for molecular graph OOD detection. GR-MOOD is pre-trained on a large-scale compound dataset (e.g., QM9 or ZINC) and fine-tuned on $D_{train}$. Considering an input graph $G \in D_{test}$, we utilize GR-MOOD to perturb and reconstruct it via:
$$G_o = \mathrm{diffuse}(G, \theta, T), \quad (8)$$
$$\hat{G} = \mathrm{denoise}(G_o, \theta, T), \quad (9)$$
where $\theta$ denotes the parameters of GR-MOOD and $T$ is the number of iterations. Function $\mathrm{diffuse}(\cdot)$ applies Eq. (6) to introduce perturbations that transform $G$ into a noised state $G_o$, while function $\mathrm{denoise}(\cdot)$ utilizes Eq. (7) to reverse the process, effectively denoising $G_o$ to generate the reconstruction graph $\hat{G}$.

[Figure 4: Experiments on DrugOOD. (a) Performance and time as a function of the number of iterations (DrugOOD-EC50-Scaffold; AUROC, AUPR, FPR95, time): the diffusion model requires a large number of iterations to obtain an effective reconstruction. (b) Reconstruction score distribution for ID and OOD: the reconstruction does not yield the discriminative results as expected.]
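The diffuse and denoise operations above amount to numerically integrating Eq. (6) and Eq. (7). The sketch below shows a minimal Euler-Maruyama discretization under the simplifying assumption that the graph state is one flat array; actual graph diffusion backbones (e.g., GDSS) use separate score networks and SDEs for $A_t$ and $X_t$, so the callables drift, diffusion, and score are placeholders rather than a concrete API.

```python
import numpy as np

def euler_maruyama(G_init, drift, diffusion, T=1.0, steps=1000, score=None, rng=None):
    """Euler-Maruyama integration of the forward SDE, Eq. (6), or, when a
    score function is supplied, of the reverse-time SDE, Eq. (7).

    `drift(G, t)`, `diffusion(t)` and `score(G, t)` are assumed callables;
    the graph state G is treated abstractly as a flat numpy array.
    """
    rng = rng or np.random.default_rng(0)
    dt = T / steps
    G = np.asarray(G_init, dtype=float).copy()
    for i in range(steps):
        if score is None:                 # forward: integrate t from 0 up to T
            t, step = i * dt, dt
            f = drift(G, t)
        else:                             # reverse: integrate t from T down to 0
            t, step = T - i * dt, -dt
            f = drift(G, t) - diffusion(t) ** 2 * score(G, t)
        G = G + f * step + diffusion(t) * rng.normal(scale=np.sqrt(dt), size=G.shape)
    return G
```

Passing score=None mimics diffuse (Eq. 8); supplying a trained score function mimics denoise (Eq. 9) by running the reverse-time dynamics.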
Upon acquiring the reconstruction graph $\hat{G}$, we utilize a GNN well-trained on the ID dataset to encode both the feature and structure information of $G$ and $\hat{G}$, whose representations are denoted as $z$ and $\hat{z}$, respectively. The cosine similarity between them is treated as the OOD judge score defined in Eq. (3):
$$\mathrm{sim}(G, \hat{G}) = \frac{z \cdot \hat{z}}{\lVert z \rVert \times \lVert \hat{z} \rVert}. \quad (10)$$
To validate the effectiveness of GR-MOOD, we conduct experiments on two DrugOOD datasets [16]. As shown in Fig. 3, the performance of GR-MOOD is comparable to (e.g., in AUROC and AUPR) or even better than (e.g., a smaller FPR95 score is better) the SOTA method GOOD-D [26]. The underlying principle is that, since GR-MOOD is trained to reconstruct graphs that align with the ID distribution, OOD samples, due to their inherent dissimilarity from the ID distribution, will typically undergo poorer reconstruction. Such a discrepancy is quantified as a lower judge score, which signals the presence of an OOD sample. This mechanism highlights the critical role of the diffusion-model-based reconstruction method in identifying graphs that do not conform to the expected distribution, thereby providing a quantitative basis for distinguishing between ID and OOD samples.
Limitation of GR-MOOD: Despite the intuitive promise of GR-MOOD, our evaluation reveals non-negligible limitations in terms of its time efficiency and its reconstruction quality measurement. First, the primary constraint of GR-MOOD is due to the inherent structural complexity of molecular graphs. As illustrated in Fig. 4a, this complexity requires the diffusion model to take an extensive number of denoising steps to fulfill the reconstruction, improving model performance at the expense of efficiency. Even worse, repeating the generation process for each molecule makes it challenging to scale in the testing phase, which has to screen a large pool of molecule candidates. Second, another issue pertains to the adequacy of the similarity function. As depicted in Fig. 4b, the reconstruction similarity distributions of ID and OOD samples calculated with Eq. (10) are not significantly different (there are similar sub-structures among the molecular graphs, e.g., functional groups like benzene rings, resulting in close representations of the OOD and ID samples). Since graphs are non-Euclidean data, standard metrics such as cosine similarity impede the ability to accurately capture the nuances of molecular structure and node features among the molecules. This limitation can result in a consequential loss of detection accuracy.
4.2 PGR-MOOD To address the limitations of GR-MOOD, we propose a novel approach built upon the diffusion model, PGR-MOOD (Prototypical Graph Reconstruction for Molecular OOD Detection). The innovation of PGR-MOOD has three aspects: a strong similarity function, a prototypical graphs generator, and an efficient and scalable OOD detector. The architecture of PGR-MOOD is shown in Fig. 5.
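To make the scoring pipeline of Eqs. (8)-(10) concrete, here is a minimal sketch; diffuse and denoise (the pre-trained diffusion model) and encode (a GNN trained on ID data) are assumed placeholder callables, not a specific library API.

```python
import numpy as np

def cosine_similarity(z, z_hat):
    """Eq. (10): cosine similarity between the two graph embeddings."""
    return float(z @ z_hat / (np.linalg.norm(z) * np.linalg.norm(z_hat) + 1e-12))

def gr_mood_score(G, diffuse, denoise, encode, theta, T):
    """Judge score of GR-MOOD: perturb, reconstruct, embed, compare."""
    G_noised = diffuse(G, theta, T)         # Eq. (8)
    G_recon = denoise(G_noised, theta, T)   # Eq. (9)
    return cosine_similarity(encode(G), encode(G_recon))
```

Computing this score requires running the full reverse process once per test molecule, which is exactly the cost that motivates PGR-MOOD below.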
A Strong Similarity Function based on FGW. The cosine similarity metric is oriented towards quantifying the angular divergence between two vectors, and it is not suitable for non-Euclidean data such as graphs. In fact, measuring the similarity between graphs is equivalent to calculating their matching degree: the higher the matching degree, the more similar they are. The Fused Gromov-Wasserstein (FGW) distance has proved particularly advantageous for measurements between graphs. It achieves a balance between the optimal transport (OT) distance, with a cost on node features, and the Gromov-Wasserstein (GW) distance among the topological structures. Specifically, FGW treats a graph, with its topology and node features, as a probability distribution. It allows for the computation of costs between two distributions under an optimal coupling, serving as a distance measure between graphs. For two graphs represented in OT format, $G_1 = (A_1, X_1, \mu_1)$ and $G_2 = (A_2, X_2, \mu_2)$, their FGW distance is defined as:
$$\mathrm{FGW}_{\alpha}(G_1, G_2) = \min_{\pi \in \Pi(\mu_1, \mu_2)} \sum_{ijkl} \big( \alpha\,(A_1(i, j) - A_2(k, l))^2 + (1 - \alpha)\,\lVert X_1(i) - X_2(k) \rVert_2^2 \big)\,\pi_{ik}\,\pi_{jl}, \quad (11)$$
where $A_1(i, j)$ represents the element in the $i$-th row and $j$-th column of $A_1$, $X_1(i)$ represents the $i$-th row vector of $X_1$, $\alpha \in [0, 1]$ is a parameter balancing the structure term and the feature term, and $\Pi(\mu_1, \mu_2) = \{\pi \in \mathbb{R}^{m \times n}_{+} \ \text{s.t.} \ \sum_{i=1}^{m} \pi_{i,j} = \mu_2(j), \ \sum_{j=1}^{n} \pi_{i,j} = \mu_1(i)\}$ is the set of all admissible couplings between $\mu_1$ and $\mu_2$. The FGW metric exhibits optimal performance in directly discerning both structural variances and feature disparities between graphs.
A Prototypical Graphs Generator. The naive diffusion model of GR-MOOD reconstructs a graph that favors the distribution of the input sample, instead of following the distribution learned during the training phase. This misleads the detector's judgment on OOD samples. To address this challenge, we propose a prototypical graphs generator, which generates prototypical graphs satisfying the following two properties. Property 1: for any input graph $G_{in} \in D_{in}$, where $D_{in}$ represents all ID graphs, the prototypical graph ought to closely resemble $G_{in}$. Property 2: for any input $G_{out} \in D_{out}$, where $D_{out}$ represents all OOD graphs, the prototypical graph should exhibit significant deviation from $G_{out}$. Consequently, the goal is to generate a prototypical graph $G$ which is close to the ID graphs and far away from the OOD graphs.
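The following is a minimal numpy sketch that evaluates the FGW objective of Eq. (11) for a given coupling; it only illustrates the cost structure. The distance itself requires minimizing over all admissible couplings, which in practice is done with a dedicated OT solver (for example, the FGW solvers in the POT library); the function name and the uniform product coupling used below are assumptions.

```python
import numpy as np

def fgw_objective(A1, X1, A2, X2, pi, alpha=0.5):
    """Evaluate the cost of Eq. (11) for a fixed coupling pi.

    A1 (n x n), A2 (m x m): adjacency matrices; X1 (n x d), X2 (m x d):
    node feature matrices; pi (n x m): an admissible coupling whose
    entries sum to 1.
    """
    n, m = A1.shape[0], A2.shape[0]
    feat_cost = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)   # (n, m): ||X1(i) - X2(k)||^2
    total = 0.0
    for i in range(n):
        for k in range(m):
            struct = (A1[i][:, None] - A2[k][None, :]) ** 2        # (n, m): (A1(i,j) - A2(k,l))^2
            # feature term uses sum_{j,l} pi_{jl} = 1 for admissible couplings
            total += pi[i, k] * (alpha * (struct * pi).sum()
                                 + (1.0 - alpha) * feat_cost[i, k])
    return total

# usage with toy matrices and the independent coupling of uniform node weights
n, m, d = 5, 4, 3
rng = np.random.default_rng(1)
A1, A2 = rng.integers(0, 2, (n, n)).astype(float), rng.integers(0, 2, (m, m)).astype(float)
X1, X2 = rng.normal(size=(n, d)), rng.normal(size=(m, d))
pi0 = np.outer(np.full(n, 1 / n), np.full(m, 1 / m))
cost = fgw_objective(A1, X1, A2, X2, pi0, alpha=0.5)
```

The parameter alpha plays the same balancing role as in Eq. (11): alpha = 1 recovers a pure structural (GW-style) cost, alpha = 0 a pure feature-transport cost.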
To satisfy Property 1, Eq. (11) is utilized as the distance metric, and the loss function $L_{ID}$ is formulated to guide the denoising process of the generator:
$$L_{ID} = \mathbb{E}_{G_{in} \sim D^{in}_{train}} \big[ \mathrm{FGW}(G_{in}, G) \big]. \quad (12)$$
Similarly, to comply with Property 2, we introduce the loss function $L_{OOD}$ to enlarge the distance between $G$ and OOD samples:
$$L_{OOD} = - \mathbb{E}_{G_{out} \sim D^{out}_{train}} \big[ \mathrm{FGW}(G_{out}, G) \big]. \quad (13)$$
Note that OOD graphs $G_{out}$ are unreachable during the training phase, precluding the direct formulation of $L_{OOD}$. Consequently, it becomes imperative to synthesize graphs as proxies for the absent OOD samples. Recall that the pre-trained diffusion model FM in Eq. (7) adopts the score function $S_{\theta}$ to generate graphs. The parameter weights of $S_{\theta}$ are given by $\theta_{M} = \{\theta^{(l)}_{M}\}_{l=1}^{L}$, where $\theta^{(l)}_{M}$ represents the parameters of the $l$-th score function. We propose to directly perturb the parameters $\theta_{M}$ to generate OOD graphs $G_{out}$:
$$\tilde{\theta}_{M} = \{\theta^{(l)}_{M} (I + \alpha P^{(l)})\}_{l=1}^{L}, \quad (14)$$
where $\alpha > 0$ is the perturbation strength, $I$ is the identity matrix, and $P^{(l)}$ is a perturbation matrix. By perturbing the parameters $\theta_{M}$, a new score function $S_{\tilde{\theta}}(\cdot)$ is derived. Experimental observations (w/o $L_{OOD}$ in Table 2) reveal that $S_{\tilde{\theta}}(\cdot)$ can induce a deviation in the denoising trajectory away from the original data distribution, thereby enabling the diffusion model to generate $G_{out}$ during the training phase. In light of these observations, a composite loss function $L_{guide}$ is formulated by integrating both $L_{OOD}$ and $L_{ID}$:
$$L_{guide} = L_{ID} + L_{OOD}. \quad (15)$$
It is leveraged to guide the training of the prototypical graphs generator $F_{PG}$, which has the same architecture and initial parameters $\theta$ as FM, to generate the prototypical graph $G$. The generation of $G$ by $F_{PG}$ unfolds in two phases. Firstly, in contrast to generating directly from Gaussian noise, a graph $G_0$ from $D_{train}$ is randomly chosen as the starting point of generation. We then add $T$-step noise according to Eq. (6) to get the final noise graph $G_T$ (i.e., $G_0 \rightarrow G_T$). Secondly, $L_{guide}$ guides the denoising step of the diffusion model to generate the prototype graph $G$:
$$\mathrm{d}G_t = \big[ \mathbf{f}_t(G_t) - g(t)^2 \big( S_{\theta}(G_t, t) - \nabla_{G_t} L_{guide}(G_t) \big) \big] \mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \quad (16)$$
where $t$ indicates the denoising step and varies from $T$ to $0$. The prototype graph $G$ generated by the above equation can be viewed as the reconstruction of both ID and OOD graphs, but it has better discrimination than the reconstruction generated in GR-MOOD. To further reduce the computation, rather than utilizing the entirety of $D_{train}$, a fixed batch-size dataset $D_{batch}$ is employed for the computation of $L_{ID}$.
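As a small illustration of how Eqs. (12), (13), and (15) combine over a mini-batch, the sketch below assumes an fgw callable that returns the FGW distance between two graphs (for instance, a wrapper around an FGW solver); it is not the paper's training code.

```python
import numpy as np

def guide_loss(G_proto, id_batch, ood_batch, fgw):
    """L_guide of Eq. (15) as the sum of Eq. (12) and Eq. (13).

    `G_proto` is the prototypical graph being denoised, `id_batch` a list
    of ID training graphs, and `ood_batch` the OOD graphs synthesized with
    the perturbed score network of Eq. (14).
    """
    l_id = np.mean([fgw(g, G_proto) for g in id_batch])     # Eq. (12): stay close to ID graphs
    l_ood = -np.mean([fgw(g, G_proto) for g in ood_batch])  # Eq. (13): move away from OOD graphs
    return l_id + l_ood                                      # Eq. (15)
```

In the guided reverse step of Eq. (16), it is the gradient of this quantity with respect to the current graph state that is subtracted from the score estimate.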
Each $D_{batch}$ can generate one $G$, and the results are combined into a list $PL = \{G^{(i)}\}_{i=1}^{I}$, with $I = \lceil |D_{train}| / |D_{batch}| \rceil$.
An Efficient and Scalable OOD Detector. Diffusion models require significant time and memory resources during the testing phase because they need to generate a reconstructed graph for each input. To alleviate this computational burden, PGR-MOOD eliminates the necessity of graph reconstruction in the testing phase by preparing the prototypical graphs in the training phase. PGR-MOOD leverages the graphs $G$ within the list $PL$ to conduct the similarity measurement with every new test sample. The maximum similarity is employed as the definitive judge score for OOD detection:
$$J(G) = \max_{G \sim PL} \big[ \mathrm{sim}(G, G_{test}) \big], \quad G_{test} \in D_{test}, \quad (17)$$
where $\mathrm{sim}(\cdot)$ is the similarity function based on the inverse of the FGW distance.

[Figure 5: Overview of the proposed PGR-MOOD method. In the training phase, we utilize a pre-trained diffusion model to generate OOD graphs, then calculate $L_{guide}$ with the OOD graphs and the training graphs. Under the guidance of $L_{guide}$, the prototypical graphs generator produces prototypical graphs $G$ that serve as the reconstruction of testing inputs. In the testing phase, we utilize $G$ to calculate the similarity with the testing graphs as the OOD judge score.]

Algorithm 1 PGR-MOOD
Input: A pre-trained diffusion model FM; the data loader of the in-domain training set $D_{train}$; an empty prototypical graphs list $PL$; denoise steps $T$.
Output: Prototypical graphs list $PL$.
1: Utilize Eq. (14) to perturb the parameters of FM to get $\tilde{\theta}$;
2: Generate $G_{OOD}$ through FM with parameters $\tilde{\theta}$;
3: for $G_{batch}$ in $D_{train}$ do
4: Randomly select a graph $G_0$ from $G_{batch}$;
5: Utilize Eq. (6) to calculate the noise graph $G_T$ from $G_0$;
6: for $t$ in $T$ to 1 do
7: Compute $L_{guide}$ with $G_{batch}$ and $G_{OOD}$;
8: Perform the denoise step of Eq. (16) with $L_{guide}$ and $G_t$;
9: end for
10: Add $G$ to $PL$;
11: end for
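At test time, the detector only evaluates Eq. (17) against the list $PL$ produced by Algorithm 1; a minimal sketch follows, where sim is an assumed callable (e.g., an inverse-FGW-distance similarity) and tau is the decision threshold of Eq. (1).

```python
def pgr_mood_score(G_test, prototypes, sim):
    """Eq. (17): the judge score is the maximum similarity between the test
    graph and any pre-constructed prototypical graph (no diffusion at test time)."""
    return max(sim(G_proto, G_test) for G_proto in prototypes)

def pgr_mood_detect(test_graphs, prototypes, sim, tau):
    """Threshold the judge scores as in Eq. (1): 1 marks ID, 0 marks OOD."""
    return [int(pgr_mood_score(G, prototypes, sim) > tau) for G in test_graphs]
```

Because only |PL| similarity evaluations are needed per test molecule, the scoring cost is decoupled from the number of denoising steps.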
5 EXPERIMENT In this section, we verify the effectiveness of PGR-MOOD and GR-MOOD by performing experiments on two graph OOD benchmarks.
5.1 Experiment Setup
5.1.1 Datasets. With the increasing attention on OOD detection for molecular graphs, two benchmarks have been proposed, GOOD [9] and DrugOOD [16]. These two benchmarks provide detailed rules to distinguish between ID and OOD. GOOD is built based on the scaffold and size of the molecular graph, and DrugOOD adds an assay shift on top of these two distribution shifts. We take six datasets from DrugOOD and four datasets from GOOD as our experimental datasets. Please see Appendix A.1 for details.
5.1.2 Baseline Methods. To verify the performance of our methods, namely GR-MOOD and PGR-MOOD, we use the GNNs' Max Softmax Score (MSP) [13] as a vanilla baseline and then compare with three SOTA graph OOD detection methods (GOOD-D [26], AAGOD [10], and GraphDE [22]). Meanwhile, two graph anomaly detection methods, namely OCGIN [31] and GLocalKD [28], are introduced as baselines. In addition, since this is the first molecular graph OOD detection method based on the diffusion model, we also compare PGR-MOOD with the naive solution GR-MOOD to verify whether its limitations have been solved. Please see Appendix A.2 for details.
5.1.3 Implementation Details. For our methods, we utilize the diffusion model GDSS [17] as the backbone, which achieves state-of-the-art performance on graph generation. GDSS is pre-trained on the QM9 dataset, which comprises a large collection of organic molecules with 113k samples. Following the setting of GraphDE, we perform 10 random trials and report the average accuracy on the test set, along with 95% confidence intervals. During training, we set $\alpha$ to 0.5 to balance the topological structure and node features when computing the FGW distance. We set $D_{batch}$ to 128 and the number of perturbation steps $T \in [1, 10]$ to reduce memory allocation and computation complexity. For all baseline methods, we follow the settings reported in their papers. All the experiments are implemented in PyTorch and run on an NVIDIA TITAN-RTX (24G) GPU.
5.2 Performance Analysis Q: Does PGR-MOOD achieve the best performance on OOD detection in molecular graphs? Yes: we utilize the new loss function $L_{guide}$ to guide the diffusion model to generate prototypical graphs that are more representative of all ID samples, making OOD samples easier to detect.
Table 1: OOD detection performance on the DrugOOD dataset. Scaffold, Size, and Assay are the basis for dividing ID and OOD graphs. The best and runner-up results are highlighted with bold and underline, respectively.
DrugOOD-IC50 (columns: Scaffold AUROC ↑ / AUPR ↑ / FPR95 ↓ | Size AUROC ↑ / AUPR ↑ / FPR95 ↓ | Assay AUROC ↑ / AUPR ↑ / FPR95 ↓)
MSP: 54.57±9.18 / 52.43±6.85 / 90.76±4.95 | 52.57±9.07 / 57.23±3.25 / 88.60±4.75 | 58.19±7.23 / 56.38±5.75 / 89.20±3.05
GOOD-D: 85.40±1.23 / 87.13±2.31 / 27.40±2.37 | 91.55±1.10 / 87.91±3.74 / 16.95±0.47 | 81.35±1.74 / 79.05±0.79 / 75.02±0.57
GraphDE: 69.15±1.11 / 67.40±0.51 / 80.30±0.33 | 78.72±1.78 / 79.36±1.24 / 78.97±0.75 | 68.56±1.08 / 66.56±0.31 / 82.20±0.93
AAGOD: 84.23±2.97 / 83.96±1.34 / 21.56±1.08 | 84.75±1.23 / 83.32±1.61 / 19.80±0.93 | 71.94±1.45 / 72.86±1.84 / 85.62±2.71
OCGIN: 68.39±4.77 / 66.05±5.11 / 82.80±7.50 | 70.94±5.09 / 68.99±3.72 / 74.80±6.46 | 67.53±4.61 / 66.95±5.23 / 79.80±4.60
GLocalKD: 63.42±0.60 / 58.03±0.64 / 70.28±1.83 | 69.44±0.58 / 67.29±0.77 / 81.13±1.46 | 62.08±0.76 / 61.93±0.61 / 82.70±1.98
GR-MOOD: 78.82±2.31 / 77.35±1.94 / 25.43±1.72 | 68.51±2.65 / 69.19±3.01 / 70.78±2.33 | 61.91±1.87 / 62.95±1.54 / 84.87±1.39
PGR-MOOD: 91.57±1.32 / 90.12±0.71 / 19.42±0.22 | 93.84±1.53 / 94.85±2.03 / 15.57±1.03 | 83.72±2.51 / 80.31±1.44 / 64.65±0.57
Improve: +7.22% / +3.43% / -9.89% | +2.50% / +7.08% / -8.41% | +2.91% / +1.52% / -13.80%

DrugOOD-EC50 (same column layout)
MSP: 57.26±7.25 / 57.08±5.94 / 87.26±5.12 | 59.18±8.77 / 58.41±4.95 / 83.76±5.60 | 48.19±9.18 / 46.38±6.85 / 89.26±4.95
GOOD-D: 82.51±1.31 / 81.98±2.71 / 63.21±2.89 | 92.50±1.32 / 88.37±1.26 / 19.20±0.51 | 65.20±1.48 / 67.22±1.61 / 92.24±3.56
GraphDE: 68.55±1.03 / 66.56±1.90 / 82.20±0.74 | 79.64±1.16 / 77.75±1.48 / 59.25±0.57 | 66.24±1.79 / 66.28±0.98 / 80.29±1.04
AAGOD: 77.17±5.52 / 75.32±5.56 / 72.76±4.95 | 78.72±6.59 / 79.23±6.30 / 68.66±5.43 | 74.57±9.18 / 72.43±6.85 / 71.83±4.43
OCGIN: 69.01±3.98 / 67.83±4.87 / 74.79±7.50 | 78.45±5.17 / 74.30±3.96 / 81.53±5.64 | 71.33±2.85 / 70.94±3.69 / 80.93±3.55
GLocalKD: 66.59±0.71 / 68.64±0.45 / 71.22±1.01 | 69.59±0.98 / 68.72±0.83 / 68.70±1.36 | 73.32±1.65 / 69.23±1.57 / 75.39±2.19
GR-MOOD: 71.15±2.50 / 73.02±3.21 / 81.79±3.58 | 73.80±2.95 / 78.49±1.63 / 70.96±1.82 | 60.17±1.56 / 61.69±10.27 / 79.09±1.33
PGR-MOOD: 87.53±1.31 / 86.16±0.72 / 62.82±2.21 | 97.67±1.54 / 96.32±1.47 / 13.79±1.23 | 86.73±3.34 / 83.56±3.28 / 63.74±2.59
Improve: +6.02% / +5.09% / -3.70% | +5.58% / +8.41% / -28.10% | +16.30% / +15.36% / +11.22%

Table 2: Ablation experiment results on four datasets (columns: AUROC ↑ | AUPR ↑ | FPR95 ↓, each reported for w/o $L_{ID}$ / w/o $L_{OOD}$ / w/o FGW).
DrugOOD-EC50: -4.57 / -2.43 / -0.76 | -7.72 / -2.32 / -4.75 | +5.74 / +2.22 / +1.63
DrugOOD-IC50: -5.14 / -1.75 / -1.24 | -4.26 / -1.98 / -3.62 | +6.83 / +1.77 / +2.36
GOOD-HIV: -3.26 / -2.58 / -0.54 | -5.83 / -2.43 / -3.18 | +4.72 / +2.03 / +2.61
GOOD-PCBA: -5.89 / -1.08 / -2.07 | -6.44 / -3.70 / -4.81 | +3.62 / +1.12 / +2.14

▷ Comparison with the naive solution. As shown in Table 1 and Table 3, compared with GR-MOOD on the six DrugOOD datasets, PGR-MOOD enhances the average AUC and AUPR by 32.76% and 29.54%, and reduces the average FPR95 by 45.65%.
These results demonstrate that the prototypical graphs of PGR-MOOD, generated with the FGW similarity function, are more suitable for distinguishing the original input graphs in the testing phase.
▷ Comparison with the state-of-the-art methods. To verify the superiority of our method, we compare it with the previous SOTA methods. As shown in the last row of Table 1 and Table 3, our method achieves SOTA results on all datasets. The average improvements over the previous SOTA are 8.54% in AUC and 8.15% in AUPR, and the average reduction in FPR95 is 13.7%. We attribute these results to the fact that the prototypical graphs generated by PGR-MOOD can enlarge the judge score gap between ID and OOD, which satisfies the requirement of an optimal OOD detector.
5.3 Visualization of Score Gap Q: Can PGR-MOOD enlarge the judge score gap between ID and OOD graphs? Yes: we calculate the similarity between the prototypical graphs and the test graphs, and it differs markedly between ID and OOD. A more significant gap between ID and OOD graphs corresponds to a better graph OOD detector. We present the scoring distributions on two datasets in Fig. 6. The ID and OOD samples are perfectly separated into two distinct distributions, so we can use a simple threshold for OOD detection and achieve SOTA performance.
5.4 Ablation Experiment Q: Does each module in PGR-MOOD contribute to effectively discriminating OOD molecular graphs? Yes: we conduct experiments on four datasets to verify the roles of the $L_{ID}$, $L_{OOD}$, and FGW modules in PGR-MOOD. The results are shown in Table 2.
▷ Ablation on $L_{ID}$ and $L_{OOD}$. We remove $L_{ID}$ and $L_{OOD}$ from $L_{guide}$ respectively to explore their impacts on the performance of OOD detection. We find that merely enlarging the distance between the prototypical graph and the OOD samples (w/o $L_{ID}$), or merely bringing it closer to the ID samples (w/o $L_{OOD}$), significantly undermines the performance of PGR-MOOD. This fully confirms that Property 1 and Property 2 are valid and correct. These results demonstrate that the combination of $L_{ID}$ and $L_{OOD}$ can generate prototypical graphs $G$ with different similarity measurements for ID and OOD graphs in the testing phase.
▷ Ablation on FGW. We replace the FGW-based $\mathrm{sim}(\cdot)$ function in Eq. (17) with Eq. (10) of GR-MOOD to explore its importance for the performance of OOD detection. We find that FGW is even more influential than $L_{OOD}$ on all datasets and metrics. These experimental results demonstrate that a proper similarity measurement is necessary and that FGW can thoroughly evaluate the similarity between two graphs by considering both their structure and features.
Table 3: OOD detection performance on the GOOD dataset. Scaffold and Size are the basis for dividing ID and OOD graphs. The best and runner-up results are highlighted with bold and underline, respectively.
GOOD-HIV (columns: MSP | GOOD-D | GraphDE | AAGOD | OCGIN | GLocalKD | GR-MOOD | PGR-MOOD | Improve)
Scaffold, AUROC ↑: 58.55±9.18 | 62.42±1.89 | 65.66±1.69 | 74.81±1.56 | 66.29±4.35 | 64.76±0.34 | 61.22±2.68 | 85.57±1.32 | +14.38%
Scaffold, AUPR ↑: 58.34±6.85 | 69.60±2.03 | 60.94±0.48 | 72.51±1.99 | 65.45±5.98 | 65.92±0.64 | 60.53±1.94 | 85.12±0.71 | +12.61%
Scaffold, FPR95 ↓: 93.40±4.95 | 87.75±0.35 | 88.40±0.43 | 76.71±1.82 | 85.65±6.74 | 83.98±0.89 | 87.35±1.66 | 66.50±2.01 | -13.31%
Size, AUROC ↑: 54.96±9.07 | 72.23±1.54 | 66.72±1.13 | 63.44±1.92 | 65.04±4.65 | 68.49±1.22 | 69.67±2.71 | 88.43±2.37 | +22.47%
Size, AUPR ↑: 54.09±3.25 | 76.12±1.26 | 65.55±0.30 | 60.02±1.88 | 64.67±4.03 | 68.23±0.97 | 71.76±2.39 | 87.77±2.18 | +15.30%
Size, FPR95 ↓: 97.80±4.75 | 68.74±3.25 | 72.20±0.89 | 75.97±1.15 | 73.64±5.86 | 76.13±1.55 | 60.56±2.91 | 65.17±2.21 | -5.17%

GOOD-PCBA (same column layout)
Scaffold, AUROC ↑: 54.57±9.07 | 85.69±1.16 | 68.45±1.23 | 79.06±0.48 | 69.50±3.17 | 70.90±1.68 | 70.07±0.60 | 86.57±1.32 | +1.02%
Scaffold, AUPR ↑: 52.43±6.21 | 86.97±1.76 | 66.07±0.32 | 72.70±0.30 | 68.34±4.11 | 73.56±1.64 | 71.90±0.64 | 88.12±0.71 | +1.32%
Scaffold, FPR95 ↓: 90.76±4.36 | 16.04±1.90 | 82.34±0.67 | 60.37±0.58 | 87.94±6.98 | 39.57±1.44 | 55.42±1.89 | 15.01±0.32 | -6.04%
Size, AUROC ↑: 58.57±8.99 | 78.31±1.19 | 66.24±1.90 | 64.90±1.71 | 70.61±3.25 | 73.58±0.50 | 71.49±0.78 | 83.84±1.53 | +7.06%
Size, AUPR ↑: 57.23±3.25 | 76.21±1.61 | 64.58±0.21 | 67.24±0.87 | 72.21±3.91 | 67.40±0.91 | 75.31±1.09 | 84.85±2.03 | +11.33%
Size, FPR95 ↓: 88.60±4.75 | 27.30±1.72 | 88.45±0.29 | 60.03±1.06 | 63.80±4.47 | 60.29±0.89 | 46.37±1.29 | 17.01±0.17 | -37.61%

[Figure 6: OOD judge score distributions on three datasets: (a) HIV-Scaffold, (b) IC50-Scaffold, (c) IC50-Size (frequency of the OOD judge score for in-distribution versus out-of-distribution graphs).]

[Figure 7: Loss variation during generation on three datasets (GOOD-HIV-Scaffold, GOOD-HIV-Size, DrugOOD-IC50-Scaffold): $L_{ID}$, $L_{OOD}$, and $L_{guide}$ versus generation step.]
Q: Do the prototypical graphs $G$ generated by the $L_{guide}$-guided PGR-MOOD follow Property 1 and Property 2? Yes: the prototypical graphs $G$ effectively reduce the distance to the ID graphs and significantly increase the separation from the OOD graphs. To validate the impact of $L_{guide}$, its trend is monitored throughout the generation phase, as depicted in Fig. 7. Here, $L_{ID}$ and $L_{OOD}$ are computed using Eq. (12) and Eq. (13), and they represent the distance between $G$ and all graphs belonging to ID and OOD, respectively. As the generation progresses, $L_{ID}$ steadily decreases towards 0, whereas $L_{OOD}$ escalates sharply. This observation aligns with the foundational principles of PGR-MOOD.
5.5 Computational Complexity Comparison Q: Does PGR-MOOD reduce the time and space complexity of the training and testing phases? Yes: to validate the efficiency and scalability of PGR-MOOD, we conduct comprehensive comparisons against the SOTA method GOOD-D and the baseline GR-MOOD. The comparative results are illustrated in Fig. 8. Although PGR-MOOD slightly trails GOOD-D in testing time, it markedly surpasses it in all other aspects.

[Figure 8: Efficiency verification experiments on training time (s), testing time (s), and memory allocation (MB) for PGR-MOOD, GOOD-D, and GR-MOOD on the EC50, IC50, HIV, and PCBA datasets.]

▷ Efficiency in execution time. During the training phase, PGR-MOOD exhibits a substantially reduced training duration compared to both GOOD-D and GR-MOOD. This efficiency stems from GOOD-D's reliance on a time-consuming contrastive learning approach for model training, whereas GR-MOOD necessitates fine-tuning of the diffusion model on the training set. In contrast, PGR-MOOD requires the generation of only a limited set of prototype graphs, thereby enhancing its training efficiency. During the testing phase, GOOD-D leverages its trained model to directly classify input graphs, while PGR-MOOD must calculate the similarity between each input graph and the set of prototypical graphs individually. Consequently, PGR-MOOD is marginally slower than GOOD-D. However, it significantly outpaces GR-MOOD, which requires the regeneration of reconstructed graphs for each input.
▷ Scalability in memory allocation. To assess the memory efficiency of our method, we evaluate memory allocation during the testing phase. PGR-MOOD, which eschews the need for any model at OOD detection time, only loads the set of prototypical graphs and demands the least memory allocation. In contrast, the GOOD-D method requires loading GNNs, and GR-MOOD necessitates loading a diffusion model for reconstructing graphs, thereby increasing their memory requirements. The experimental findings underscore that our approach can significantly mitigate memory consumption and enhance model scalability.
6 CONCLUSION This study explores OOD detection for molecular graphs, starting with a basic diffusion model-based approach, GR-MOOD, and identifying its key challenges. We then introduce PGR-MOOD, an advanced OOD detection method for molecular graphs that addresses GR-MOOD's limitations by using a diffusion model to create prototypical graphs. These graphs closely resemble ID inputs while distinctly diverging from OOD inputs. PGR-MOOD utilizes the Fused Gromov-Wasserstein distance for efficient similarity measurement and OOD scoring, significantly reducing the computational load. Our approach demonstrates SOTA results across ten datasets, proving its effectiveness." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.10573v2", |
| "title": "AAVDiff: Experimental Validation of Enhanced Viability and Diversity in Recombinant Adeno-Associated Virus (AAV) Capsids through Diffusion Generation", |
| "abstract": "Recombinant adeno-associated virus (rAAV) vectors have revolutionized gene\ntherapy, but their broad tropism and suboptimal transduction efficiency limit\ntheir clinical applications. To overcome these limitations, researchers have\nfocused on designing and screening capsid libraries to identify improved\nvectors. However, the large sequence space and limited resources present\nchallenges in identifying viable capsid variants. In this study, we propose an\nend-to-end diffusion model to generate capsid sequences with enhanced\nviability. Using publicly available AAV2 data, we generated 38,000 diverse AAV2\nviral protein (VP) sequences, and evaluated 8,000 for viral selection. The\nresults attested the superiority of our model compared to traditional methods.\nAdditionally, in the absence of AAV9 capsid data, apart from one wild-type\nsequence, we used the same model to directly generate a number of viable\nsequences with up to 9 mutations. we transferred the remaining 30,000 samples\nto the AAV9 domain. Furthermore, we conducted mutagenesis on AAV9 VP\nhypervariable regions VI and V, contributing to the continuous improvement of\nthe AAV9 VP sequence. This research represents a significant advancement in the\ndesign and functional validation of rAAV vectors, offering innovative solutions\nto enhance specificity and transduction efficiency in gene therapy\napplications.", |
| "authors": "Lijun Liu, Jiali Yang, Jianfei Song, Xinglin Yang, Lele Niu, Zeqi Cai, Hui Shi, Tingjun Hou, Chang-yu Hsieh, Weiran Shen, Yafeng Deng", |
| "published": "2024-04-16", |
| "updated": "2024-04-17", |
| "primary_cat": "cs.AI", |
| "cats": [ |
| "cs.AI", |
| "cs.CE", |
| "q-bio.BM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "AAVDiff: Experimental Validation of Enhanced Viability and Diversity in Recombinant Adeno-Associated Virus (AAV) Capsids through Diffusion Generation", |
| "main_content": "Introduction Recombinant Adeno-associated virus vectors (rAAV) have emerged as crucial components in the field of gene therapy. Since 2017, there have been six new gene therapy products approved, and 1 arXiv:2404.10573v2 [cs.AI] 17 Apr 2024 \fover 2000 pipelines are registered, underscoring the significance of rAAV in clinical applications [1]. However, all six of the approved products employ capsid sequences that originate from wildtype viruses found in natural resources. Although these capsids derived from wild-type viruses exhibit broad tropism during treatment, rendering them non-specific in targeting pathogenic cells, their efficiency in transduction and gene expression within target cells is suboptimal. Consequently, they prove inadequate for the treatment of diseases primarily affecting specific tissues such as the central nervous system, muscle, and heart. Thus, there exists a consensus within the scientific community to advance the development of enhanced vectors with improved specificity and transduction efficiency. Several methods have been established to design and evaluate novel capsids, and one promising approach is the design and screening of capsid libraries. This approach involves the creation of a pool of capsid-encoding DNA, which can be designed either rationally or randomly [2][3][4][5][6]. These DNA sequences are integrated into VP expression cassettes to facilitate vector production. The plasmid-to-cell ratio is meticulously fine-tuned during vector manufacturing to promote the production of a specific capsid variant, which encapsulates its own genome [7],[8]. Subsequently, this pool of vectors is generated and injected into a selection model as a mixture. The resulting DNA signal delivered to the target cells is then retrieved and sequenced, representing the capsid variants that effectively transduce the target cells. A common approach in library design involves either the insertion of a randomized peptide or the random mutation of specific amino acids within a tolerant domain [4]. Notably, a prominent variant that has emerged through this library screening approach is PHP.B [9]. However, its ability to cross the blood-brain barrier (BBB) is not maintained during the translation of studies from mice to humans since the receptor for the PHP.B variant in brain microvascular endothelial cells is specific to particular mouse strains [10]. Several studies utilize a similar strategy by inserting 7 random amino acids within the capsid hypervariable VIII region; however, a clear frontrunner for clinical use has not yet emerged. Although the AAV VP coding region consists of approximately 720 amino acids, the insertion of 7 amino acids represents only a small fraction. Mutations in larger regions show promise in addressing diverse requirements. Nevertheless, even with 7 amino acid random mutations, the number of variants in the library can impose limitations on bacterial transformation, clone numbers, and vector manufacturing. Furthermore, the number of dosing iterations in a selection model becomes constrained. Therefore, it is crucial to explore broader mutational landscapes while ensuring the library sizes remain manageable. Not all sequences resulting from capsid mutation can effectively express protein, assemble into a particle, and efficiently encapsulate their genome like the wild-type sequence. 
As the number of mutations in a VP increases, the sequence search space expands exponentially, making it impossible to filter through experimental means and resulting in a decreased likelihood of successful capsid packaging. The development of algorithms that establish a correlation between capsid DNA sequences and packaging efficiency is therefore of utmost importance [11]. Furthermore, low yield resulting from unfavorable physical and chemical properties can impede the clinical and commercialization potential. Attaining efficient and targeted transduction of specific cells poses a significant challenge in capsid engineering. In order to overcome these challenges, researchers have utilized generative algorithms to design and predict the viability of viral vectors, specifically focusing on vector fitness. The most recent approach [12] entails training a binary classifier on a substantial amount of capsid data to ascertain the viability of a given sequence. Subsequently, random sampling is conducted within a randomly partitioned mutation subspace. Samples that are classified as viable by the binary classifier are retained, while non-viable samples are discarded. This iterative filtering process is used to select a collection of capsid sequences with potentially viable properties. The capsid sequence collection constructed with this method has a higher proportion of viable sequences than a collection constructed by random mutation. Nevertheless, the ratio of viable sequences is heavily influenced by the performance of the trained binary classifier. Moreover, due to the vast number of possible combinations resulting from sequence mutations (excluding insertions), the combinatorial count reaches $2^{seqlen}$, where seqlen represents the sequence length. This renders it impractical to complete the filtering process within a reasonable timeframe given the extensive range of choices. Consequently, during the implementation phase, it is imperative to randomly partition a subspace from this space and subsequently conduct the filtering there. Considering that the proportion of genuinely viable sequence samples in the overall sequence space is exceedingly low, there is a high probability of overlooking potential sequences when partitioning the subspace. Therefore, to address this issue, we combined the classification and filtering stages by introducing an end-to-end, diffusion-based generative model that can effectively generate a higher proportion of viable sequences. Moreover, this model generates sequences by following the gradient direction towards viable samples during the generation process. Consequently, this generation method enables the sampling of a greater number of potentially viable samples within the designated timeframe. In this study, we employed the model trained using publicly available data on AAV2 to generate a collection of 38,000 highly diverse AAV2 VP sequences. Of these, 8,000 sequences were randomly chosen and evaluated for their viral selection values through DNase-resistant capsid assembly testing, which revealed a significant improvement in performance compared to traditional methods [12]. Moreover, the availability of viable data generated from mutations on wild-type capsids of various serotypes is severely limited, and the synthesis process is both time-consuming and expensive.
Therefore, the direct generation of data with multiple mutation sites while preserving capsid viability during the mutation process on new serotypes would greatly expedite research in the field of capsids. Building upon this, we transferred the remaining 30,000 samples from the initial 38,000 AAV2-generated sequences to the corresponding domain of AAV9. These sequences will be synthesized into a vector library to assess their actual survival rate. Encouragingly, we observed positive results in terms of yield, and when the number of mutation sites reached 9, we achieved a relatively high proportion of viable samples. In conclusion, the advancement of rAAV vectors with improved specificity, transduction efficiency, and delivery mechanisms presents tremendous potential for gene therapy research. The design and screening of capsid libraries, complemented by generative algorithms, offer a dynamic approach to overcome the limitations of wild-type capsids, bringing us closer to the development of highly efficient and targeted gene therapy vectors. This study represents a significant progression in the field of viral vector design and functional validation, providing innovative solutions to the challenges encountered in gene therapy. Experiments Experiment 1 In order to verify the ability of the diffusion model in AAV capsid sequence design, we performed the following experiments: \u2022 1. Experiment on AAV2 HVR VIII The diffusion model was trained using the data provided in the references[12],[13]. After deduplicating the generated sequences and removing samples that overlapped with the training set, a collection of approximately 38,000 samples remained. Out of these, 8,000 samples were randomly selected for biological activity testing specifically targeting AAV2. \u2022 2. Experiment on Region VIII on AAV9 Proceeding with the remaining 30,000 samples from the previously generated sequences, activity experiments were conducted targeting 3 \fAAV9. The specific methodology involved the direct replacement of the sequence fragment corresponding to region VIII of AAV9 with the generated sequence, followed by subsequent biological activity experiments. Experiment 2 In order to explore the mutation fitness of multiple hypervariable regions on AAV9 serotypes, saturated single mutants were constructed on regions IV, V, and VIII. Results Sequence generated by diffusion model To evaluate the reliability of the sequences generated by the diffusion model, we analyze them from two perspectives. The first perspective involves observing the relationship between the generated sequences and the training set. The second perspective entails experimentally validating the viability of the generated sequences. The relationship between the generated sequences and the training set : The overlap between the feature space of the generated sequences and the training set can be observed from Fig. 1a. Fig. 1bdemonstrates the close match between the length distribution of the sequences generated by the generation model and the length distribution of the training set. Furthermore, the model generates sequences with shorter lengths, including those absent in the training set, such as at positions where the sequence length is 27. The distribution of the number of mutated positions generated by the model, as shown in Fig. 1c, broadly covers all the numbers of mutated positions in the training set. 
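For reference, the mutation-count statistic summarized in Fig. 1c can be computed by positional comparison against the wild-type fragment; a minimal sketch assuming equal-length sequences (variants with insertions or deletions, as in Fig. 1d, would first require an alignment step):

```python
from collections import Counter

def n_mutations(seq: str, wt: str) -> int:
    """Count positions at which an equal-length variant differs from wild type."""
    return sum(a != b for a, b in zip(seq, wt))

def mutation_count_distribution(sequences, wt):
    """Histogram of mutation counts over equal-length generated sequences."""
    return Counter(n_mutations(s, wt) for s in sequences if len(s) == len(wt))
```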
In previous approaches to designing AAV capsid sequences, the design space was limited by the insertion or replacement of one amino acid between adjacent residues. However, the diffusion model does not impose such restrictions and allows for the insertion of one or more amino acids at specific positions. Fig. 1d indicates that the sequences generated by the model exhibit a higher frequency of continuous insertions based on the WT sequence when compared to the training set. This can be attributed to our data augmentation approach, which incorporates continuous deletions and insertions between mutation positions. The method proposed by Dyno Therapeutics [12] for designing highly active sequences involves selecting a seed sequence at a distance of k from the WT sequence. Subsequently, a single mutation is applied to the seed sequence to generate sequences that are at a distance of k+1 from the WT sequence. However, this greedy design approach restricts the diversity of the final sequences. Thus, Fig. 1e illustrates the disparities in the number of clusters between the diffusion model and the CNN model [12] at varying clustering radii. Greater sequence diversity is indicated by a higher number of clusters. The sequences designed by the diffusion model demonstrate slightly higher diversity when compared to those designed by the CNN model. Performance of the generated sequences in terms of viability :For the biological viability experiments on the AAV2 region VIII, a random selection of 8000 samples was chosen from the generated sequences. Fig. 1f illustrates the proportion of viable samples among the sequences generated by the diffusion model, considering different numbers of mutations. The proportion of viable samples exceeds 90% when the number of mutations ranges from 7 to 20, as evident from the observations. Additional detailed information can be found Table S1. The proportion of viable samples is approximately 80% for mutation numbers ranging from 4 to 6. These results clearly demonstrate the robust capability of our model to generate viable sequences. 4 \fFigure 1: a: Distribution of features of sequences generated by the diffusion model compared to the training set. The left graph represents the feature distribution after dimensionality reduction using t-SNE, while the right graph represents the feature distribution after dimensionality reduction using PCA. Class 1 represents the generated sequences, while classes 2 and 3 represent sequences from the training set. b: Distribution of sequence lengths for sequences generated by the diffusion model compared to the training set. The x-axis represents the length of the sequences, and the y-axis represents the frequency. The green color represents the generated sequences, while the other two colors represent sequences from the training set. c: Distribution of the number of mutation sites for sequences generated by the diffusion model compared to the training set. The x-axis represents the number of mutation sites, and the y-axis represents the frequency. The green color represents the generated sequences, while the other two colors represent sequences from the training set. d: Distribution of different lengths of consecutive insertions generated by the diffusion model. The x-axis represents the length of consecutive insertions, and the y-axis represents the proportion of samples. e: Distribution of the number of clusters for sequences generated by the diffusion model and the CNN model. 
The x-axis represents the clustering radius (sequences with a difference in mutation count within this radius are considered in the same cluster), and the y-axis represents the number of clusters. f: Proportion of viable samples for sequences generated by the diffusion model at different numbers of mutation sites. The x-axis represents different numbers of mutation sites, and the y-axis represents the proportion of viable samples. Performance of the diffusion model-generated sequences when transferred to the AAV9 serotype: Previously, the only available approach for sequence design based on a specific serotype of AAV capsid, where no known viable mutant sequences exist, was random mutation design. However, 5 \fprevious findings on AAV2 [13] have revealed that the proportion of viable samples decreases significantly, reaching nearly zero, when the number of mutations exceeds five. Due to the high similarity in capsid sequence between AAV9 and AAV2, we aimed to test the effectiveness of transferring sequences generated by a model trained on AAV2 data to the wild-type AAV9 at corresponding positions. The experimental results shown in Fig. 2a indicate a significant increase in the number of mutations when the sequences generated from the corresponding region of AAV2 were transferred to the corresponding region of AAV9, as compared to the wild-type AAV9 sequence. Fig. 2b demonstrates that the proportion of generated sequences that remained viable in AAV9 was approximately 50% for mutation numbers ranging from 9 to 10. Conversely, when employing random mutation methods (as referenced from dyno data) for sequence mutation in AAV9 without any viability labeling data, the proportion of viable samples was close to zero at a mutation number of 9. Additional detailed information regarding the viability proportions can be found in Table S2. These findings indicate the potential utilization of existing data from other serotypes to create diverse models for sequence design in future serotype capsid design, instead of solely relying on random mutation approaches. Figure 2: a: Proportion of samples generated by the diffusion model at different numbers of mutation sites, where blue represents AAV2 and red represents AAV9. b: Proportion of viable samples for sequences generated by the diffusion model at different numbers of mutation sites. The x-axis represents different numbers of mutation sites, and the y-axis represents the proportion of viable samples. Analyzing Mutations in AAV9 Hypervariable Regions. Apart from investigating hypervariable region VIII (HVR VIII), our study focused on exploring the extensively studied regions of HVR IV and HVR V within the AAV9 capsid. These regions have been recognized for their ability to tolerate mutations, and our objective was to gain a comprehensive understanding of their mutational landscape. In particular, our focus was on amino acid residues 448-476, 488-517, and 562-590, where we performed single amino acid mutations within these regions. Furthermore, we introduced random amino acid insertions between adjacent residues. To evaluate the viability of these mutations, we calculated activity values by comparing the frequency of the vector to the frequency of the plasmid, as depicted in Fig. 3a. The red line on the graph signifies the activity value of the wild-type sequence. The analysis indicated that the majority of peak reads were concentrated between 0 and 0.5, implying that sequences within this range were either inviable or demonstrated reduced viability. 
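The activity value referred to here, and the log2 fitness score used further below for the per-position heat maps, can be sketched as follows (a minimal illustration; the pseudocount is our assumption to avoid division by zero, not a detail reported in the study):

```python
import math

def activity(vector_freq: float, plasmid_freq: float, eps: float = 1e-9) -> float:
    """Activity value: frequency of a variant in the vector pool relative to the plasmid pool."""
    return vector_freq / (plasmid_freq + eps)

def fitness_score(vector_freq: float, plasmid_freq: float, eps: float = 1e-9) -> float:
    """log2 enrichment of vector frequency over plasmid frequency."""
    return math.log2((vector_freq + eps) / (plasmid_freq + eps))
```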
The enriched capsid sequences exhibited a distribution that resembled a Gaussian curve. Upon comparing the activity levels of HVR IV and HVR V with those of HVR VIII, it became apparent that HVR VIII demonstrated both a higher activity peak value and a broader 6 \frange (1-6 compared to 1-3). The frequency comparisons in Fig. 3b revealed that while HVR V variants exhibited reads in 80% of cases, HVR IV and HVR VIII variants had reads in only 60% of cases. However, HVR IV and HVR VIII variants exhibited a greater number of variants with significantly higher reads compared to HVR V mutants, with HVR VIII mutants demonstrating the highest read counts. The wild-type sequences were indicated as red dots. Consequently, HVR V encompassed a broader range of variability, while HVR IV and HVR VIII variants exhibited the highest read counts. In Fig. 3c, we presented a comprehensive breakdown of fitness scores for the insertion, deletion, and mutation of each amino acid within the selected HVR regions. These scores were calculated based on the logarithm base 2 of the vector frequency divided by the plasmid frequency. The score ranges for HVR VIII, HVR IV, and HVR V mutants were approximately [-5 to 4], [-8 to 2.5], and [-6.5 to 2], respectively. These scores substantiated that HVR VIII comprised a subset of variants with superior fitness scores. Importantly, we identified the regions of mutation and insertion tolerance, specifically spanning amino acid residues 588-591, 448-462, and 488-508. Intriguingly, HVR V demonstrated the most extensive tolerant region, indicating the need for further investigation into more substantial mutations within this region. Fig. 3d illustrated the vector and plasmid frequency at each variant level, unveiling a distinct clustering of variant populations into two clusters, with minimal neutral mutations. Discussion In this study, we employed the diffusion model to generate sequences within region VIII of AAV2 and conducted activity experiments on AAV2 capsids. The results revealed that the generated sequences displayed a viability proportion exceeding 90% within the range of 7 to 20 mutations. This finding highlights the robust capability of our model in generating viable sequences. Additionally, we utilized the diffusion model to generate sequences within region VIII of AAV2 and performed activity experiments on AAV9 capsids. The results unveiled that the generated sequences displayed a viability proportion of approximately 50% when the number of mutations ranged from 9 to 10. This proportion was notably higher than the viability proportion obtained through random mutation-based sequence design in the absence of viable sequences. Traditionally, the experimental process for designing capsids with a higher number of mutation sites involved initial experiments utilizing single-site mutagenesis, followed by rational and random mutagenesis based on the obtained results. This iterative process aimed to generate additional experimental data, ultimately leading to the discovery of a broader range of capsid sequences. Based on the results presented in this paper, our model can be utilized for the design of capsids for different AAV serotypes, obviating the need for random mutation or exhaustive single-site mutagenesis. This expedites the experimental process for AAV capsid design. However, our model has certain limitations. One notable limitation is that the range of mutation counts for the generated sequences is constrained by the range observed in the training set. 
To overcome this limitation, future improvements can involve pre-training the model on an expanded dataset comprising not only AAV capsid sequences but also sequences from other viruses and even non-viral protein sequences. By enabling the model to generate high-quality protein sequences, we can subsequently perform astute fine-tuning on the existing viable AAV samples, liberating the model from the constraints of the AAV training set and unleashing the capabilities acquired through pre-training. In Fig. 3c, the heat map suggests that the mutant-tolerant region may extend beyond the range of our tests. Expanding the scope of saturated mutagenesis could provide further valuable insights. Notably, distinct patterns emerged within each HVR domain. In the aforementioned tolerance regions, amino acids K, R, and C were found to be unfavorable in HVR VIII, likely due to their large size and potential disruption of the capsid structure. In HVR IV (residues 457-475), direct mutations were better tolerated than insertions, highlighting the importance of residue length and structural rigidity in this region. While we gathered data on multiple mutations for HVR VIII, obtaining similar data for the HVR IV and V regions would be beneficial. Additionally, collecting data on double or multiple mutation/insertion/deletion scenarios could unveil synergistic effects on fitness and introduce new factors that impact vector transduction. This study marks a significant advancement in capsid engineering, highlighting the correlation between VP sequence mutants and capsid assembly features through an innovative algorithm. By enabling the investigation of transduction efficacy and specificity predictions, our research offers valuable insights into the field of capsid design, where the transduction function is intricately linked to the structure of the vector capsid, which is determined by the VP sequence. Consequently, it is crucial to establish a robust selection model for data collection and algorithm development to further explore these areas of study. Figure 3: a: Distribution of activity values for single-mutant sequences. The x-axis represents the activity values of the sequences, and the y-axis represents the frequency of sequences within that activity value range. b: Trend of activity values for single-mutant sequences. The x-axis represents each sequence, and the y-axis represents the normalized activity values. c: Enrichment levels of single-mutant sequences at different mutation positions. From top to bottom are the enrichment levels of sequences with single mutations in regions VIII, IV, and V of AAV9. The x-axis represents the mutation positions in the current region. If the mutation position is a float value, it indicates an insertion between two integer positions. The y-axis represents the amino acid type after the mutation, where \u201c-\u201d represents the deletion of the amino acid at the current position. Black dots represent the positions of wild-type sequences in that region. d: Enrichment levels of single-mutant sequences. From top to bottom are the enrichment levels of sequences with single mutations in regions VIII, IV, and V of AAV9. The x-axis represents the frequency of plasmids after sequencing, and the y-axis represents the frequency of viruses after sequencing. Methods The process of sequence generation using the diffusion model The task entails generating sequences within the mutation region of the AAV2 capsid, specifically targeting region VIII.
Initially, we compiled mutation sequences from dyno in this region, forming the training set for our model [12]. The dataset comprised a total of 140,000 data entries, encompassing a range of mutation site numbers from 1 to 28. The capsid sequence consists of multiple amino acids, with each amino acid regarded as a token. Hence, we utilize a discrete diffusion generation model for sequence generation. As depicted in Fig. 4, it presents the implementation diagram of the generation diffusion model [14]. The model consists of two processes: diffusion and denoising. The denoising process can be perceived as a prediction process. Utilizing a maximum length noise sequence, the denoising model trained during the diffusion process progressively eliminates noise from the input sequence. After T steps, the noise sequence is restored to a valid sequence. In the case of a valid sequence, noise is incrementally introduced step by step, producing sequences x1, x2, . . ., ultimately resulting in a fully noisy sequence xT. The purpose of the diffusion process is to aid the neural network in acquiring knowledge of the denoising process. Throughout this process, we possess the actual sequences along with the outcomes of adding noise to them. Consequently, this enables the network to learn a mapping that can restore the original sequence from the noisy counterpart. The complete implementation process is subdivided into four stages: data augmentation, noise addition, model training, and denoising. For more detailed information, please refer to the supplementary materials. Once the model is trained, when presented with a fixed-length noise sequence, it progressively recovers the noise sequence to a meaningful and valid capsid sequence. Generation of AAV Capsid Libraries In the development of AAV capsid libraries, wild-type cap genes were subject to modification through the incorporation of DNA oligonucleotides, the sequences of which are provided in Supplementary Table [Insert Table Number]. The specific synthesis of 84-mer to 108-mer DNA oligonucleotides, encoding peptides of interest, was performed by Twist Bioscience in a chip-based primer pool. Subsequently, these DNA oligonucleotides underwent amplification through PCR, utilizing a high-fidelity DNA polymerase (NEB). The resulting PCR fragments were then ligated into the AAV backbone plasmid. The ligation products were transformed 9 \finto electrocompetent cells (Lucigen) to enhance transformation efficiency. The capsid library plasmids were ultimately prepared using a QIAGEN kit, and the diversity of the capsid library was characterized through next-generation sequencing analysis. AAV Production Assay and Virus Titer Detection To produce viral particles, the plasmid libraries were transfected into 293TN cells. These cells were maintained in a sterile environment within a 5% CO2 incubator at 37\u25e6C. Typically, 293TN cells were cultured in High-Glucose Dulbecco\u2019s Modified Eagle\u2019s Medium (DMEM; Gibco) supplemented with 10% fetal bovine serum (FBS; Gibco) and 1% penicillin/streptomycin (Thermo Fisher). AAV library vectors were produced by transfecting 293TN cells, along with adenovirus helper and AAV Rep-\u25b3Cap plasmids, using FectoVIR (Polyplus). For the transfection process, 293TN cells were seeded in 10cm dishes at a density of 7.2\u00d7106 cells per dish. Following a 72-hour duration, the virus was harvested and subsequently purified utilizing an iodixanol density gradient ultracentrifugation method. 
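Referring back to the diffusion-based sequence generation described above, the reverse (denoising) pass can be sketched as follows, assuming a masking-style corruption process and a hypothetical trained denoiser network `denoiser(tokens, t)`; the authors' four-stage implementation (data augmentation, noise addition, model training, denoising) is detailed in their supplementary materials:

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
MASK = "<mask>"

def sample_sequence(denoiser, length: int, T: int = 100) -> str:
    """Start from an all-masked (pure noise) sequence and denoise it over T steps.

    `denoiser(tokens, t)` is assumed to return, for every position, a probability
    distribution over the 20 amino acids, conditioned on the partially denoised
    sequence at step t.
    """
    tokens = [MASK] * length
    for t in reversed(range(T)):          # t = T-1, ..., 0
        probs = denoiser(tokens, t)       # assumed shape: length x 20
        for i, p in enumerate(probs):
            # Unmask a growing fraction of positions; at t == 0 every position is committed.
            if tokens[i] == MASK and random.random() < 1.0 / (t + 1):
                tokens[i] = random.choices(AMINO_ACIDS, weights=p, k=1)[0]
    return "".join(tokens)
```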
The AAV titers were quantified using Taqman-based qPCR. Next-Generation Sequencing The remaining cap gene sequences in the purified pool represent viable mutants suitable for both capsid assembly and genome packaging. To assess this, the purified capsids were subjected to heat denaturation at 98 \u00b0C for 10 minutes. Subsequently, the mutant region of the cap gene was amplified using High Fidelity 2x master mix (NEB) with PCR primer sequences. Illumina sequencing adapters and indices were integrated in a subsequent PCR step. These PCR amplicons were subsequently subjected to sequencing with overlapping paired-end reads employing Illumina NextSeq. Figure 4: The directed graphical model considered in this work" |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.08949v1", |
| "title": "Multimodal Cross-Document Event Coreference Resolution Using Linear Semantic Transfer and Mixed-Modality Ensembles", |
| "abstract": "Event coreference resolution (ECR) is the task of determining whether\ndistinct mentions of events within a multi-document corpus are actually linked\nto the same underlying occurrence. Images of the events can help facilitate\nresolution when language is ambiguous. Here, we propose a multimodal\ncross-document event coreference resolution method that integrates visual and\ntextual cues with a simple linear map between vision and language models. As\nexisting ECR benchmark datasets rarely provide images for all event mentions,\nwe augment the popular ECB+ dataset with event-centric images scraped from the\ninternet and generated using image diffusion models. We establish three methods\nthat incorporate images and text for coreference: 1) a standard fused model\nwith finetuning, 2) a novel linear mapping method without finetuning and 3) an\nensembling approach based on splitting mention pairs by semantic and\ndiscourse-level difficulty. We evaluate on 2 datasets: the augmented ECB+, and\nAIDA Phase 1. Our ensemble systems using cross-modal linear mapping establish\nan upper limit (91.9 CoNLL F1) on ECB+ ECR performance given the preprocessing\nassumptions used, and establish a novel baseline on AIDA Phase 1. Our results\ndemonstrate the utility of multimodal information in ECR for certain\nchallenging coreference problems, and highlight a need for more multimodal\nresources in the coreference resolution space.", |
| "authors": "Abhijnan Nath, Huma Jamil, Shafiuddin Rehan Ahmed, George Baker, Rahul Ghosh, James H. Martin, Nathaniel Blanchard, Nikhil Krishnaswamy", |
| "published": "2024-04-13", |
| "updated": "2024-04-13", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "Multimodal Cross-Document Event Coreference Resolution Using Linear Semantic Transfer and Mixed-Modality Ensembles", |
| "main_content": "Introduction Imagine two newspaper articles about the same event. The articles come from different sources with radically different perspectives and report the event with very different language. They use different action verbs, include ambiguous pronominal references, describe causes differently, and even attribute different intentionality to the event\u2014for example, \u201cBuzina, 45, was shot dead\u201d vs. \u201cHe was murdered\u201d. An automated system may be unable to identify from the text alone that the two events described are actually the same. This is the problem of cross-document coreference resolution (CDCR) of events: inferring that two event mentions in different documents actually refer to the same thing. Now imagine that each of the articles is accompanied by an image. While not identical, they clearly contain the same people, entities, and actions. This would be strong evidence to a reader that the two events described in the different articles are in fact the same. Purely text-based approaches to CDCR, while built on sophisticated Transformer-based language models (LMs) (Vaswani et al., 2017; Beltagy et al., 2020), are blind to such potentially useful multimodal information. This problem is exacerbated by the relative dearth of multimodal information included in event CDCR corpora. *This work conducted at Colorado State University. In this work, we propose a novel multimodal event CDCR method. Where current state-of-theart coreference approaches that consider visual information demonstrate the utility of a multimodal approach, they do so at a high computational cost (Guo et al., 2022). Furthermore, they typically focus on linking objects rather than events. We address the sparsity of multimodal data in benchmark datasets by retrieving images associated with the metadata of event mentions, and generating eventcentric images with state-of-the-art image diffusion models. We perform coreference experiments in a fully multimodal setting and rigorously test the contribution of multimodal information to CDCR.1 In total, our novel contributions include: \u2022 A novel approach to multimodal cross document event coreference (MM-CDCR) including a low-compute, bidirectional linear semantic transfer technique (Lin-Sem) based on semantic equivalence across modalities; \u2022 A model ensemble hybrid approach that applies text-only or multimodal methods to different categories of mention pairs based on their semantic and discourse-level difficulty; \u2022 A novel method for enriching text-only coreference datasets (e.g., ECB+ (Cybulska and Vossen, 2014)) with event-centric images using generative image diffusion; 1Our code can be accessed at https://github. com/csu-signal/multimodal-coreference. arXiv:2404.08949v1 [cs.CL] 13 Apr 2024 \f\u2022 A new benchmark result on the AIDA Phase 1 dataset (Tracey et al., 2022), an explicitly multimodal event CDCR dataset. To our knowledge, this is the first evaluation performed over this dataset. 2. Related Work Cross-Document Event Coreference Resolution Most previous works on CDCR have been limited to text-only (Eisenstein and Davis, 2006; Chen et al., 2011). Early works (e.g., Humphreys et al. (1997); Bagga and Baldwin (1999); Chen and Ji (2009)) used supervised training over features like part-ofspeech tags, phrasal-matching, or aligned arguments. While Kenyon-Dean et al. 
(2018) enhanced lexical features with \u201cstatic\u201d embeddings like contextual word2vec (Mikolov et al., 2013), most recent works (Yu et al., 2022; Caciularu et al., 2021; Yadav et al., 2021; Nath et al., 2023) uses latent representations from Transformer-based encoders to compute pairwise mention scores of possible antecedents. Works such as Held et al. (2021) and Ahmed et al. (2023) overcome the quadratic complexity of the mention pair architecture by pruning negative pairs using discourse-coherence and lexical similarity (synonymous lemma pairs) respectively. We use Ahmed et al. (2023)\u2019s \u201coracle\u201d assumption for our pruning procedure. Multimodal Frameworks Most previous works in multimodal vision-language processing (e.g., (Le et al., 2019; Tan and Bansal, 2019)) have been compute-intensive, using separate encoders for visual and linguistic inputs, and auxiliary encoders for cross-modal or query-related modeling. Highperforming but high-compute models like ViLBERT (Lu et al., 2019) concatenate embeddings from different modalities before fine-tuning. Works such as Li et al. (2020), Tong et al. (2020), and Chen et al. (2021) leverage a common representation space for coreference-adjacent tasks like event extraction and detection in images and videos, but emphasize finding relations within a document or a topic. Works specific to multi-modal entity coreference resolution such as Guo et al. (2022) treat it largely as a grounding problem, using graph networks to link references in dialogue to items in a scene before feeding representations into BERT-style encoders to resolve scene-based visual-linguistic coreference chains. Our work is multimodal, cross-document, and event focused, and performs faster with the aid of linear mappings. Linear Projection Across Neural Networks Previous research within computer vision has explored using affine (McNeely-White et al., 2020, 2022; Jamil et al., 2023) as well as non-linear (Lenc and Vedaldi, 2015) transformations to explore equivalence of unimodal function approximators like CNNs. They show that two distinct, highly non-linear neural networks can learn similar properties transferable up to a linear projection while retaining near-equivalent performance on tasks like image classification or facial recognition. Similar techniques using affine mappings were reported in Merullo et al. (2023), who explore the equivalence of such approximators across modalities while also casting new light on high-fidelity transfer of nonlinguistic features into a generative LLM via unidirectional linear projections from image spaces. Nath et al. (2022) demonstrated linear mappings also preserve information across language models. Ghaffari and Krishnaswamy (2023) showed the same between language models and neural networks trained over tabular data. We use a low-compute, cross-modal, bidirectional linear-mapping technique (Lin-Sem: Linear Semantic Transfer) between language and vision Transformers, on the challenging event coreference task. We demonstrate where this linear transfer is providing useful information toward coreference resolution compared to a text-only discriminative LLM, or fused modality models following standard fine-tuning. 3. Methodology Fig. 1 illustrates the pipeline for our methodology, the components of which are detailed as follows. 
Semantic Equivalence
$V\big(x, y, \phi(x,y)\big) : \mathbb{R}^{n \times w \times h \times 3} \rightarrow \mathbb{R}^{n \times H}$ (1)
$\mathrm{LLM}\big(x, y, \phi(x,y)\big) : \mathbb{R}^{n \times m} \rightarrow \mathbb{R}^{n \times H}$ (2)
Let (1) and (2) represent the heterogeneous image and text representations for vision and text Transformer models respectively. $(x, y) \in \chi$ represents all the pairs of samples in sample space $\chi$, $\phi(x, y)$ represents the concatenation of the image or text pair in their respective modalities, $n$ and $H$ represent the total sample pairs and hidden dimensions respectively, and $m$ is the LLM's max token-length. We define cross-modal semantic equivalence as follows: two representations $V$ and $\mathrm{LLM}$ in distinct modalities are semantically equivalent if there exists a bidirectional map $M_{V \leftrightarrow \mathrm{LLM}}$ s.t.:
$\forall x, y \in \chi : V\big(x,y,\phi(x,y)\big) \approx M_{\mathrm{LLM} \rightarrow V}\, \mathrm{LLM}\big(x,y,\phi(x,y)\big)$ (3)
$\forall x, y \in \chi : \mathrm{LLM}\big(x,y,\phi(x,y)\big) \approx M_{V \rightarrow \mathrm{LLM}}\, V\big(x,y,\phi(x,y)\big)$ (4)
while assuming both $V$ and $\mathrm{LLM}$ to be bijective or invertible, so
$M_{\mathrm{LLM} \rightarrow V} = V\big(x,y,\phi(x,y)\big) \circ \mathrm{LLM}\big(x,y,\phi(x,y)\big)^{-1}$ (5)
$M_{V \rightarrow \mathrm{LLM}} = \mathrm{LLM}\big(x,y,\phi(x,y)\big) \circ V\big(x,y,\phi(x,y)\big)^{-1}$ (6)
Figure 1: Our approach for Multimodal CDCR using Lin-Sem. Linear Mapping (Lin-Sem) procedure between the distinct text and image embedding spaces for an event pair in the ECB+ corpus. Arg1 and Arg2 refer to the individual images in the pair and the trigger events (in yellow) surrounded by the <m> and </m> special tokens embedded in the text-encoder (LLM).
Since a closed-form solution to analytically derive the mapping function $M_{V \leftrightarrow \mathrm{LLM}}$ is not always feasible, and since many task-based fine-tuning heads over a Transformer-based LLM involve fitting a linear classification layer, we propose a parameter-efficient linear-mapping technique, Lin-Sem. We estimate the mapping function within an empirical risk minimization framework by using a ridge regression between the two cross-modal representations. Mathematically,
$M_{\mathrm{LLM} \rightarrow V} \leftarrow \operatorname{minimize}\big( (V - \beta\, \mathrm{LLM})^{T} (V - \beta\, \mathrm{LLM}) + \lambda \beta^{T} \beta \big)$ (7)
$M_{V \rightarrow \mathrm{LLM}} \leftarrow \operatorname{minimize}\big( (\mathrm{LLM} - \beta V)^{T} (\mathrm{LLM} - \beta V) + \lambda \beta^{T} \beta \big)$ (8)
We set the L2-norm regularization parameter $\lambda = 1$, while $\beta$ denotes the coefficients of the linear map being estimated. Datasets We evaluated our methods on the ECB+ (Cybulska and Vossen, 2014) and the AIDA Phase 1 (Tracey et al., 2022) datasets.
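As a concrete illustration of the ridge-regression estimate in Eqs. (7)-(8), a minimal sketch of fitting the two bridge matrices between the concatenated text and vision representation spaces (array names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def fit_bridge(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge regression Y ~ X @ B, returning the 'bridge' matrix B.

    X, Y: (n_pairs, 3072) concatenated [paired, Arg1, Arg2, Arg1*Arg2] representations
    in the source and target modality, respectively (shapes are illustrative).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Training: learn both directions from aligned train pairs (illustrative arrays).
# M_llm_to_v = fit_bridge(LLM_train, V_train)   # text -> vision, Eq. (7)
# M_v_to_llm = fit_bridge(V_train, LLM_train)   # vision -> text, Eq. (8)
# Evaluation: map held-out concatenated representations across modalities.
# V_hat = LLM_test @ M_llm_to_v
```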
While the former is a popular, English-only CDCR benchmark containing a diverse range of news articles, the latter contains multimodal resources specific to Russia-Ukraine relations, in English, Russian, and Ukrainian. We focus only on the English documents.2 [Footnote 2: The AIDA Phase 1 dataset was created for the DARPA Active Interpretation of Disparate Alternatives (AIDA) program and is available from the Linguistic Data Consortium (catalog number LDC2019E77). It is the only published ECR benchmark that contains multimodal resources specific to cross-document coreference. Events here are specifically in the domain of Russia-Ukraine relations and annotated based on both saliency and the potential for conflicting perspectives.] For our experiments, we used training and evaluation splits following Cybulska and Vossen (2015) for ECB+ and Tracey et al. (2022) for AIDA Phase 1. Table 1 shows corpus-level statistics for these two datasets.
Table 1: ECB+ and AIDA corpus-level statistics. Tracey et al. (2022) refers to the provided train and test sets as \u201cpractice\u201d and \u201ceval\u201d, respectively. *Including images generated using Stable Diffusion.
Split          | ECB+ Train | ECB+ Dev | ECB+ Test | AIDA Practice | AIDA Eval
Docs           | 594        | 196      | 206       | 63            | 69
Event Mentions | 3808       | 1245     | 1780      | 603           | 846
Clusters       | 1464       | 409      | 805       | 186           | 270
Singletons     | 1053       | 280      | 623       | 132           | 197
Images         | 3808*      | 1245*    | 1780*     | 417           | 662
Augmenting ECB+ with Images Since ECB+ does not provide images in their metadata, we scraped through the links provided in the documents and searched the Internet Archive for archived versions of articles with dead links. For original ECB documents without links, we manually searched for keywords to retrieve articles. Out of 502 ECB+ document links, 43% were broken, but 50% could be recovered using web.archive.org. Of 480 ECB documents, 51% were located via Google search. We retrieved a total of 543 images; 235 of 982 documents had at least one associated image. In addition to the overall lack of images, the retrieved document-level images may be poor representatives of individual event mentions, leading to the sparsity problem mentioned in Sec. 1. Therefore, we used Stable Diffusion (Rombach et al., 2022) to generate more relevant images and provide enough data to explore the contribution of multimodal information to ECR. Photo-realistic images were generated using sentences from ECB+ as prompts. Since a sentence can refer to multiple events, we provided an additional signal in the prompt by marking the event trigger with special tokens (<m> and </m>). Image Encoding To encode all images as vector representations, we used three variations of Vision Transformers (ViT; Dosovitskiy et al. (2021), BEiT; Bao et al. (2021), and SWIN; Liu et al. (2021)), as well as CLIP (Radford et al., 2021). Resulting representations were the pooled output of the first-token representations from the last encoder layer for the image sequence, akin to the [CLS] token in BERT variants. Encoding the images through distinct embedding spaces decoupled them from the original language inputs. Linear Projection Technique To project image and text representations across modalities, we first created a concatenated 3,072D (768\u00d74) representation for an image/text pair. These concatenated representations contained the paired representation, the individual mention representations (Arg1 and Arg2), and their element-wise product (in that order). Separate concatenated representations were constructed for each modality (see Fig.
1).3 We then used a ridge regressor to calculate the linear coefficients by minimizing the squared distances between concatenated representations from each modality for the training set. This gave us two square (3,072\u00d73,072) \u201cbridge\u201d matrices: MLLM\u2192V and MV\u2192LLM. We hypothesized that this bidirectional map retains crucial semantic information that a structure-preserving linear map would transfer between the two modalities. At evaluation, we matrix-multiplied the test concatenated representations with these matrices while maintaining the directionality of the linear map. These mapped representations were fed into a pairwise-scorer to get coreference clusters (see Fig. 1). 3All language representations came from the pretrained Longformer model (Beltagy et al., 2020). Model Training and Fine-Tuning Following Humeau et al. (2020); Cattan et al. (2021), i.a., we trained separate pairwise scorers P\u03b8,\u03b8\u2032: (AB, BA)\u2192S1, S2 on ECB+ and AIDA Phase 1. Here AB and BA are the 3,072D combined representations in A\u2192B and B\u2192A directions respectively, and \u03b8 and \u03b8\u2032 are the parameters of the pairwise scorer and the LLM, respectively. This output two scores for each directional encoding, each representing the probability that the event mention pair was coreferent.4 Thereafter, we used the CoVal Scorer (Moosavi et al., 2019) to form the final coreference clusters after applying transitive closure to identify the connected components with a threshold of 0.5 for all models. We used the same pairwise-scorer for all linear maps. For a direct multimodal comparison, we finetuned fused-modality models. We concatenated the image representations with the text representations and trained four separate pairwise scorers for each combination. Due to data sparsity of real images, we only trained fused models using generated event-centric images. Training took roughly 1.0 and 1.5 hours per epoch for the LLM and the fused models, respectively. For comparison, linear mapping took \u223c3s to learn a mapping between modalities. Fig. 2 shows log GPU seconds required for pairwise encoding for text, image, and fused modalities vs. bidirectional linear projection. Figure 2: Pairwise encoding time in GPU seconds (log-scale on y-axis) for text (Longformer), vision (ViT), and fused models vs. Bidirectional Linear Mapping (Lin-Sem) as a function of the number of train pairs in ECB+. 3.1. Categorizing Mention Pair Difficulty To empirically evaluate the contribution of crossmodal information toward resolving challenging event mention pairs, we used the gold-standard coreference labels to categorize unseen pairs at inference as easy or hard based on semantic and 4As a human reader would likely make a consistent coreference decision regardless of which event description she read first, we used the mean of the two scores as the final probability score for training and inference. \fFigure 3: Kernel Density Estimation plots of semantic-discourse similarity scores (including WuPalmer similarity) for mention pair difficulty categories in ECB+ (L) and AIDA Phase 1 (R), showing a clear demarcation of easy and hard pairs in positive and negative labels. easy_pos and hard_neg pairs have a high semantic similarity distribution while easy_neg and hard_pos pairs have lower semantic similarity distribution. discourse-level similarities. For semantic similarities, we use Wu-Palmer Similarity (Wu and Palmer, 1994), and cosine similarity metrics. 
For discourselevel similarities, metadata in both datasets provides information about within-topic and withindocument events which we used to score event similarities. For instance, an event pair within the same document and topic would get the highest discourse-level similarity score. These combined semantic and discourse similarity scores were then bucketed into easy and hard semantic transfer categories based on the means of coreferring and non-coreferring samples (see Fig. 3). An example \u201chard\u201d mention pair from ECB+, involving pronominal coreference, is (1) \u201cIn a move <m> that </m> will expand its services division, Hewlett-Packard will acquire EYP Mission Critical Facilities\u201d and (2) \u201cHP to <m> Acquire </m> Data Center Consultants.\u201d This categorization allowed us to identify cases where multimodal features are distinctly useful based on proportion of correctly resolved hard pairs (see Sec. 4). Table 7 in Appendix A shows examples of easy and hard pairs for coreferring and non-coreferring samples and their respective counts. Computation of Semantic Difficulty Categories It is important to note that the \u201chard\u201d and \u201ceasy\u201d categories include both positive (coreferent) and negative (non-coreferent) samples. These categories are computed based on the assumption that easier coreferent (easy positive) samples should ideally have a higher overall similarity than harder ones, both in terms of semantics and at the topic and discourse level. Similarly, easier non-coreferent samples (easy negative) should ideally have a lower overall similarity. Hard coreferent (hard positive) pairs have lower overall similarity and hard noncoreferent (hard negative) pairs have higher overall similarity when compared to easy pairs of the same label. Overall similarity for a given pair is computed as the sum of four individual scores: 1. whether a pair comes from the same topic (1 for within-topic, 0 for not), 2. whether a pair comes from the same document (1 for within-doc, 0 for not), 3. the Wu-Palmer similarity of the trigger tokens in a pair, and 4. the average cosine similarity of the vectors for the two sentences when encoded in both directions using the text-only, finetuned LLM (Longformer), inspired by (Ahmed et al., 2023). For computing the cosine similarity scores, we take two mention-containing sentences A and B and cross-encode sentence A in context before sentence B and sentence B in context after sentence A. We then take the cosine similarity between these two encoded vectors. The positions of A and B are then reversed and they are again encoded with cross-attention in the same way. Because crossattention is used, this results in different positional encodings for the two sentences and therefore a different cosine similarity value than the first calculation, so these values are then averaged for the final score. Adding the aforementioned four scores gives us the final similarity scores for each pair in each label category (positive and negative). If the final similarity score for an individual positive pair is more than the mean final similarity score for all positive pairs, such a pair is categorized as easy positive. If it is less than this value, it is categorized as hard positive. 
On the other hand, if the final similarity score for an individual negative pair is more than the mean final similarity score for all negative pairs, the pair is categorized as hard negative, and if it is less than this value, it is categorized as easy negative.5 The plots in Fig. 2 show the differences in the distributions of different sample categories vs. the calculated similarity scores for both the corpora. See Appendix A for more details with computed examples. We use the gold coreference labels to obtain the label categories. However, since this categorization is only used as an evaluation tool for the initial round of experiments and then frozen for the ensembling experiments, the difficulty category-related information is never used during model training. 5The average final similarity for all positive samples over the ECB+ corpus is 2.25, and the average final similarity for all negative samples is 2.14. We assume AIDA Phase 1 comes from a disparate distribution, and so we categorize the difficulty of pairs in it independently using the same procedure. \f4. Results and Analysis We evaluate using established coreference metrics (Moosavi et al., 2019), e.g., MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), CEAFe, and CoNLL F1 (the average of MUC, B3 and CEAFe F1) scores. 4.1. ECB+ We present results from Held et al. (2021) as a current, commonly accepted SOTA on ECB+, and from Ahmed et al. (2023), whose computationallyefficient pruning heuristic based on surface lemma similarity we follow to allow us to perform multiple experiments on a smaller compute budget. Direct comparison to text-only model (LLM) performance should be taken as a comparison to Ahmed et al. (2023) due to the preprocessing. Table 2 shows detailed results. Models MUC B3 CEAFe CoNLL Held et al. (2021) 87.5 86.6 82.9 85.7 Ahmed et al. (2023) 90.8 86.7 84.7 87.4 ViT-real\u2192LLM 6.9 63.1 55.1 41.7 BEiT-real\u2192LLM 87.3 80.3 76.7 81.4 SWIN-real\u2192LLM 87.6 79.7 76.5 81.3 CLIP-real\u2192LLM 24.7 66.3 57.5 49.5 LLM\u2192ViT-real 88.2 80.1 77.5 81.9 LLM\u2192BEiT-real 88.3 80.0 77.4 81.9 LLM\u2192SWIN-real 87.9 80.3 77.8 82.0 LLM\u2192CLIP-real 88.3 80.0 77.4 81.9 ViT-gen \u2295LLM 85.1 86.1 80.7 84.0 BEiT-gen \u2295LLM 82.2 84.9 78.1 81.7 SWIN-gen \u2295LLM 82.5 85.1 78.7 82.1 CLIP-gen \u2295LLM 89.3 84.2 82.6 85.4 ViT-gen\u2192LLM 77.4 78.8 71.5 75.9 BEiT-gen\u2192LLM 77.8 79.8 73.7 77.1 SWIN-gen\u2192LLM 79.5 79.6 73.4 77.5 CLIP-gen\u2192LLM 83.0 82.1 76.3 80.5 LLM\u2192ViT-gen 88.1 80.0 77.2 81.8 LLM\u2192BEiT-gen 88.3 80.0 77.4 81.9 LLM\u2192SWIN-gen 88.2 80.1 77.4 81.9 LLM\u2192CLIP-gen 88.3 80.0 77.4 81.9 Table 2: MM-CDCR F1 scores for MUC, B3, CEAFe and CoNLL on ECB+ test set, using LLM only, Lin-Sem (\u201c\u2192\u201d), and domain-fused finetuned versions (\u201c\u2295\u201d). Cited works are previous benchmarks on text-only CDCR. Bold indicates the best performer on each metric. \u201c-real\u201d indicates that the vision space was encoded with real images, while \u201c-gen\u201d indicates generated images. Text-only vs. Multimodal Models Despite the extra training time incurred in training a fusedmodality model with concatenated features (see Fig. 2), we see that the performance of the fused multimodal models does not exceed that of the text-only model (Longformer using Ahmed et al. (2023)\u2019s preprocessing heuristic). Interestingly, the performance gap between linearly-mapped systems and fused modality models is often quite small, despite the higher compute cost of training the fused model. 
For instance, LLM\u2192BEiT-gen and LLM\u2192BEiT-real (Longformer embeddings mapped into BEiT space) slightly best the CoNLL F1 score of BEiT-gen \u2295LLM, and BEiT-real\u2192LLM is only 0.5 F1 points lower. Similar trends hold when comparing other fused modality models and their linearly-mapped counterparts, such as LLM\u2192SWINgen, LLM\u2192SWIN-real, and SWIN-real\u2192LLM vs. SWIN-gen \u2295LLM. Semantic Transfer Categories In the coreferrence domain, one weakness of the CoNLL F1 metric is that specific evaluation metric-level details are obfuscated\u2014this can be seen in Table 3: although the aforementioned examples achieve comparable CoNLL F1 scores, the linear mappings achieve a much higher MUC and B3 recall, but lower precision, than the comparable fused models. Therefore, we do a proportional analysis of the correctly inferred (true positive) and misclassified (false positive and false negative) samples within the semantic transfer categories (see Table 4). These categorization labels were not used as supervision at any stage of training, fine-tuning, or mapping, and so an analysis of which models do better at which categories can illuminate different properties of the models, despite similar numerical performance. Table 4 shows the proportion of each result category per model, of samples that would be considered \u201chard\u201d according to the mention pair difficulty categorization described in Sec. 3. Models MUC B3 R P R P LLM\u2192ViT-gen 98.7 79.6 97.6 67.7 LLM\u2192BEiT-gen 99.1 79.6 97.9 67.7 ViT-gen \u2295LLM 80.9 89.7 85.4 86.9 BEiT-gen \u2295LLM 75.9 89.7 82.5 87.5 Table 3: MUC and B3 precision and recall comparison between linear mappings and comparable fused models. Within true positives (TP), linearly-mapped models, using both real and generated images, tended to correctly retrieve a higher proportion of hard pairs compared to the text-only and fused models. For instance, for generated images, the hard sample proportion retrieved by text-to-image models is almost 4 percentage points higher than that of text-only or fused models, while image-to-text models, though lower on average, still also correctly retrieve a higher proportion of hard pairs. This effect appears slightly more pronounced on average in the case of real images (avg. 51.8% hard pairs in TPs, compared to 50.1% for generated images, and 46.6% for text-only). \fSemantic Transfer Categories Models TP-Hard FP-Hard FN-Hard ECB+ Ahmed et al. 
(2023) 0.466 0.521 0.607 ViT-real\u2192LLM 0.625 0.250 0.506 BEiT-real\u2192LLM 0.521 0.436 0.434 SWIN-real\u2192LLM 0.510 0.451 0.407 CLIP-real\u2192LLM 0.476 0.536 0.508 LLM\u2192ViT-real 0.507 0.456 0.441 LLM\u2192BEiT-real 0.506 0.000 0.000 LLM\u2192SWIN-real 0.496 0.438 0.700 LLM\u2192CLIP-real 0.505 0.452 0.708 ViT-gen \u2295LLM 0.432 0.591 0.635 BEiT-gen \u2295LLM 0.437 0.606 0.584 SWIN-gen \u2295LLM 0.404 0.620 0.642 CLIP-gen \u2295LLM 0.477 0.506 0.729 ViT-gen\u2192LLM 0.487 0.472 0.521 BEiT-gen\u2192LLM 0.471 0.445 0.525 SWIN-gen\u2192LLM 0.548 0.433 0.478 CLIP-gen\u2192LLM 0.483 0.490 0.534 LLM\u2192ViT-gen 0.505 0.449 0.541 LLM\u2192BEiT-gen 0.506 0.451 0.000 LLM\u2192SWIN-gen 0.505 0.452 0.531 LLM\u2192CLIP-gen 0.506 0.451 0.632 AIDA Phase 1 LLM 0.561 0.385 0.695 ViT-real\u2192LLM 0.609 0.368 0.734 BEiT-real\u2192LLM 0.661 0.328 0.629 SWIN-real\u2192LLM 0.660 0.327 0.636 CLIP-real\u2192LLM 0.627 0.332 0.657 LLM\u2192ViT-real 0.643 0.346 0.929 LLM\u2192BEiT-real 0.638 0.352 0.749 LLM\u2192SWIN-real 0.667 0.333 0.562 LLM\u2192CLIP-real 0.648 0.341 0.000 Table 4: Table showing the proportion of hard event pairs within the true positive (TP), false positive (FP) and false negative (FN) samples based on semantic transfer category (Sec. 3) for ECB+. Values of 0 indicate that no cases fit this category, resulting in zero numerator. Ensembling Models The apparent facility of different models at correctly retrieving mention pairs of different semantic difficulties led to a question: since the mention pair difficulty was never used during training, fine-tuning, or mapping, and only as an analytic tool, could we split the mention pairs according to their difficulty, and use the different model types to handle mention pairs they on average appear to be better at? We therefore built an ensembling approach using the text-only model to handle easier pairs, and performed a grid-search through different combinations of the previously-trained multimodal models to handle harder pairs. We allowed for different multimodal models to potentially handle hard-positive pairs and hard-negative pairs and used the combined results from all models to compute the coreference metrics. Table 5 shows the best performing ensembles. Our best performing ensemble model used ViT-real\u2192LLM to handle hard negative pairs, LLM\u2192BEiT-real, to handle hard positive pairs, and the text-only language model to handle easy pairs. Models MUC B3 CEAFe CoNLL Held et al. (2021) 87.5 86.6 82.9 85.7 Ahmed et al. 
(2023) 90.8 86.7 84.7 87.4 ViT-gen \u2295LLM + LLM 89.1 86.5 84.8 86.8 BEiT-gen \u2295LLM + LLM 87.5 85.7 83.9 85.7 SWIN-gen \u2295LLM + LLM 87.5 85.9 83.8 85.7 CLIP-gen \u2295LLM + LLM 90.1 85.3 83.8 86.4 ViT-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 90.8 85.2 84.8 86.9 BEiT-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 91.3 85.5 86.5 87.8 SWIN-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 90.4 84.4 83.8 86.2 CLIP-gen\u2192LLM + LLM\u2192BEiT-gen + LLM 91.2 85.3 85.7 87.4 LLM\u2192ViT-gen + LLM\u2192BEiT-gen + LLM 88.7 82.3 79.4 83.5 LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 LLM\u2192SWIN-gen + LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 LLM\u2192CLIP-gen + LLM\u2192BEiT-gen + LLM 88.7 82.2 79.1 83.3 ViT-real\u2192LLM + LLM\u2192BEiT-real + LLM 94.5 89.5 91.8 91.9 BEiT-real\u2192LLM + LLM\u2192BEiT-real + LLM 88.9 82.4 79.7 83.7 SWIN-real\u2192LLM + LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 CLIP-real\u2192LLM + LLM\u2192BEiT-real + LLM 94.3 89.3 91.6 91.7 LLM\u2192ViT-real + LLM\u2192BEiT-real + LLM 88.7 82.3 79.3 83.4 LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 LLM\u2192SWIN-real + LLM\u2192BEiT-real + LLM 89.0 82.7 80.1 83.9 LLM\u2192CLIP-real + LLM\u2192BEiT-real + LLM 88.7 82.2 79.1 83.3 Table 5: MM-CDCR MUC, B3, CEAFe and CoNLL F1 results on ECB+ test set, using ensemble models. Format follows Table 2. Ensemble model names follow the format Hard-N model + Hard-P model + Easy pairs model. LLM was always used to handle Easy pairs. The best performing models for hard negative and hard positives were found using a grid search through different combinations of multimodal models. If only one model besides LLM is listed, that model was used to handle all Hard pairs. This resulted in a CoNLL F1 score of 91.9, with scores of 89.5 or higher across all components of MUC, B3, or CEAFe metrics, showing the ability of this ensemble to score highly on, and balance, multiple measurements. Other ensembles, such as a variant that used CLIP-real\u2192LLM to handle hard negatives, performed at a similar level. Two particularly interesting points emerge: 1) Using both real and generated images, LLM\u2192BEiT routinely performed best at handling hard positive pairs; 2) Many ensemble models using Lin-Sem, especially those using a V \u2192LLM mapping for hard negatives and an LLM \u2192V mapping for hard positives, outperform the fused model/text-only model ensembles, despite the simplicity of the linear transformation. This suggests that not only can visual information be leveraged for correct coreference of semantically more difficult mention pairs, but also that visual information may contain fine-grained cues useful for splitting mention pairs while linguistic information is more useful to cluster them. 4.2. AIDA Phase 1 Table 6 presents a novel baseline on the multimodal AIDA Phase 1 data. This data contains unique challenges, such as a train set that is smaller than the test data, and event descriptions from sources with conflicting perspectives, explicitly addressing the ambiguity and perspective conflict challenges from Sec. 1. Since this data comes with images mappable to individual event mentions, we evaluate \fusing only the provided images. As with ECB+, we find that models using linear mappings compete with or slightly outperform the text only model. Using the same proportional analysis of correct and misclassified samples by difficulty category, we find that linearly-mapped models are also more likely than the text-only to resolve hard pairs correctly on this dataset (avg. 
hard pairs in TPs: 63.9% for V→LLM, 64.9% for LLM→V, and 56.1% for the text-only model). We then applied the same ensembling approach to the AIDA data, using the same combination of linear mappings and the LLM according to the difficulty of the mention pair. Again we find that an ensemble model using a V→LLM mapping for hard negatives and an LLM→V mapping for hard positives performs best, although this time the model using CLIP-real→LLM as the hard negative handler comes out on top.

| Models | MUC | B3 | CEAFe | CoNLL |
|---|---|---|---|---|
| LLM | 80.7 | 49.5 | 54.1 | 61.4 |
| ViT-real→LLM | 85.9 | 38.4 | 52.7 | 59.0 |
| BEiT-real→LLM | 85.7 | 42.6 | 57.9 | 62.1 |
| SWIN-real→LLM | 82.9 | 46.4 | 55.8 | 61.7 |
| CLIP-real→LLM | 78.5 | 52.4 | 53.5 | 61.5 |
| LLM→ViT-real | 86.3 | 37.3 | 52.7 | 58.8 |
| LLM→BEiT-real | 85.7 | 40.2 | 53.1 | 59.7 |
| LLM→SWIN-real | 86.2 | 39.1 | 54.4 | 59.9 |
| LLM→CLIP-real | 86.2 | 37.1 | 52.3 | 58.5 |
| ViT-real→LLM + LLM→BEiT-real + LLM | 86.2 | 39.6 | 54.4 | 60.1 |
| BEiT-real→LLM + LLM→BEiT-real + LLM | 87.1 | 42.1 | 60.4 | 63.2 |
| SWIN-real→LLM + LLM→BEiT-real + LLM | 87.1 | 42.5 | 60.5 | 63.4 |
| CLIP-real→LLM + LLM→BEiT-real + LLM | 87.1 | 43.8 | 62.8 | 64.6 |
| LLM→ViT-real + LLM→BEiT-real + LLM | 86.2 | 39.0 | 53.5 | 59.6 |
| LLM→BEiT-real + LLM | 85.8 | 40.8 | 54.1 | 60.2 |
| LLM→SWIN-real + LLM→BEiT-real + LLM | 86.6 | 40.7 | 56.6 | 61.3 |
| LLM→CLIP-real + LLM→BEiT-real + LLM | 86.2 | 39.0 | 53.5 | 59.6 |

Table 6: MM-CDCR MUC, B3, CEAFe and CoNLL F1 results on the AIDA Phase 1 Eval set. Format follows Tables 2 & 5. LLM denotes Longformer evaluated with Ahmed et al. (2023)'s methodology.

5. Discussion

Some specific example pairs where the text-only and fused models fail to link the pair, but ensembles correctly do so, expose certain features crucial for event coreference that are present in visual information and linearly transferable, but missing in text alone or scrambled during model fusion.

ECB+ ECB+ examples of this kind include event pairs that require some sense of visual grounding, temporal logic (Schank and Abelson, 1975; Ravi et al., 2023) or pronominal context to resolve. For instance, pairs with pronominal antecedents and misleading lexical overlap like "...dozens of others were seriously injured in the quakes, which also sent small tsunamis..." and "...injured in the earthquakes which rekindled bitter memories of similar deadly quakes..." were missed by the LLM and fused models ("[e]arthquakes" vs. "quakes" is misleading lexical overlap, as they refer to different earthquakes). Visual cues, such as damaged buildings or injured people (either in images generated using mentions as prompts, or already present in images in news articles), can help make the link. The aforementioned example is shown in Fig. 4, and the images are generated according to the ECB+ augmentation methodology (Sec. 3). Also in Fig. 4, "Steven Moffat" and "his" appear to be ambiguously overlapping to the text-only model, which missed the event mentions that are actually about Peter Capaldi. The two facial images, which are real images associated with the event mentions, help make the link.

[Figure 4: Sample coreferent event pairs from ECB+ that were correctly linked by our best multimodal ensemble (ViT-real→LLM + LLM→BEiT-real + LLM), but not by the text-only model. Event triggers are highlighted in yellow and text in italics illustrates lexical ambiguity or misleading lexical overlap. The figure shows the Japan earthquakes pair ("quakes" vs. "earthquakes") and the Peter Capaldi casting pair ("announced" vs. "revealed").]

AIDA Phase 1 Coreferent event mentions in the AIDA dataset are notable for conflicting information, and we find cases such as "Calling people tell about people that are jumping out of the burning building." vs. "Forty-two people trapped by a fire on the third floor of the stately, Soviet-era Trades Unions building burned, suffocated or jumped to their deaths." (text-only event triggers are underlined). The text-only model fails to link these ambiguous event triggers, but the images associated with each show the Trades Unions building in Odesa. In such context-sensitive pairs, the paired visual representations (image-domain Arg1 and Arg2 in Fig. 1) in Lin-Sem help resolve the coreference by capturing less ambiguous information from the images, while the text-only pairwise scorer found low contextual similarity between the event triggers. Similarly, we see that pairs with ambiguous context or pronominal anaphora, e.g., "Buzina, 45, was shot dead" vs. "He was murdered", are frequently missed by the LLM, but not by the ensemble models. In the case of this mention pair, both associated articles contain (different) pictures of the same individual, Oles Buzina, which, as with the ECB+ Peter Capaldi example, aids in the coreference (McNeely-White et al. (2022) present strong evidence for the particular effectiveness of linear transformations in face recognition). Generally, for challenging corpora like AIDA Phase 1, we find that visual features like faces, or background cues like angry protesters, press conferences, etc., act as cues for correctly resolving such pairs.

6. Conclusion

In this paper we have demonstrated the utility of multimodal information in cross-document event coreference. In particular, our results demonstrate that multimodal information is useful for resolving mention pairs whose triggers have low semantic and discourse-level similarity, rendering them difficult for text-only models. We developed a method (Lin-Sem) for using linear transformations between embedding spaces to transfer semantic information between vision and language representation spaces, and used this technique in a model ensembling approach that used Lin-Sem models to handle harder mention pairs and a text-only model for easier pairs. We applied this approach to the popular ECB+ benchmark and established a novel baseline on the challenging, and explicitly multimodal, AIDA Phase 1 dataset (Tracey et al., 2022). Our best performing models beat text-only performance on these datasets by ∼3 F1 points and establish an upper bound on CDCR performance given the preprocessing used. Our ablation studies show that ensemble systems built upon our mention pair difficulty categories and using structure-preserving linear maps can leverage event-specific visual cues to make correct coreference decisions about difficult mention pairs.
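To make the mapping-and-routing idea concrete, the following is a minimal sketch, not the authors' released code: a ridge-regression fit of a linear map between embedding spaces and a difficulty-based router. Helper names (fit_linear_map, ensemble_score), the attribute layout of the pair object, and the use of raw cosine similarity in place of a trained pairwise scorer are all illustrative assumptions.

```python
# Hedged sketch of a Lin-Sem-style linear map between embedding spaces and a
# difficulty-aware ensemble router. Names and the ridge fit are illustrative.
import numpy as np

def fit_linear_map(src_embs: np.ndarray, tgt_embs: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Least-squares (ridge) fit of W so that src_embs @ W approximates tgt_embs.

    src_embs: (n, d_src) embeddings from the source space (e.g., a vision encoder).
    tgt_embs: (n, d_tgt) embeddings of the aligned items in the target space (e.g., the LLM).
    """
    d_src = src_embs.shape[1]
    A = src_embs.T @ src_embs + lam * np.eye(d_src)
    B = src_embs.T @ tgt_embs
    return np.linalg.solve(A, B)  # shape (d_src, d_tgt)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def ensemble_score(pair, W_v2t, W_t2v, text_scorer, is_hard_negative, is_hard_positive):
    """Route a mention pair to a handler by difficulty category.

    Mirrors the configuration described above: a V->LLM map for hard negatives,
    an LLM->V map for hard positives, and the text-only scorer for easy pairs.
    """
    if is_hard_negative(pair):
        a, b = pair.image_emb_1 @ W_v2t, pair.image_emb_2 @ W_v2t
    elif is_hard_positive(pair):
        a, b = pair.text_emb_1 @ W_t2v, pair.text_emb_2 @ W_t2v
    else:
        return text_scorer(pair.text_emb_1, pair.text_emb_2)
    return cosine(a, b)
```

In practice the mapped embeddings would feed the trained pairwise scorer rather than a bare cosine; the sketch only illustrates how a single linear transformation moves representations across spaces before scoring.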
These visual cues are of course absent in text-only models, and are likely scrambled during standard multimodal fusion approaches. As such, our results present a strong case for the utility of multimodal information in NLU tasks like event coreference and argue for increased development of such resources in the future. Upon publication, we will release our processing pipeline and the generated/scraped images associated with ECB+ (the AIDA Phase 1 data must be properly obtained from the Linguistic Data Consortium). Our results should be considered in the context of our preprocessing assumptions. We use a computationally-efficient pruning heuristic that allowed us to run the high volume of experiments we showcased on a lower compute budget, while demonstrating the utility of multimodal features for coreference. Our binary semantic transfer categories (easy/hard) do not currently account for semantic similarity between pairs that cross subtopics, since corpora like the ECB+ corpus do not contain coreference annotations across sub-topics (Bugert et al., 2021). However, our framework can be easily expanded to corpora like FCC (Bugert et al., 2020), with cross-subtopic events.

7. Future Work

Future directions in this line of research include exploring the feasibility of using multimodal cues to align/enhance representation spaces of monolingual LLMs, like the English-only Longformer, for Russian and Ukrainian mention pairs in the AIDA Phase 1 corpus. Given the efficiency of linear transformations and the rarity of coreference-specific parallel corpora, this may help alleviate the compute budgets needed for multilingual LLM pretraining for CDCR. Another interesting direction is evaluating our method on other challenging CDCR datasets like FCC (Bugert et al., 2020), which contains cross-subtopic events, or the GVC (Vossen et al., 2018), where the SOTA is lower compared to benchmarks like ECB+. Lastly, this work represents a novel cross-modal case where affine transformations between embedding spaces have been shown to be useful (cf. McNeely-White et al. (2022); Nath et al. (2022); Merullo et al. (2023); Ghaffari and Krishnaswamy (2023)). Future work in this area entails a theoretical exploration of the properties of embedding spaces with the goal of finding performance guarantees where affine transformations successfully preserve information for different AI tasks.

Ethics Statement

Our ablation studies required a non-trivial computation budget and concomitant resource usage, especially for the fused models with larger scoring heads on top of the LLM. Moreover, even though our Lin-Sem framework is substantially compute-efficient, it still required cross-modal model encoding to generate representations for deploying our linear maps between them. The images generated for this task with diffusion models might reflect social, racial, or gender-based stereotypes, as are commonly seen in large generative models. Due to the AIDA Phase 1 data's focus on the Ukrainian-Russian conflict, the events described therein are likely to be distressing to some.

Acknowledgements

This research was supported in part by grant award FA8750-18-2-0016 from the U.S. Defense Advanced Research Projects Agency (DARPA) to Colorado State University and the University of Colorado, and by a subcontract to the University of Colorado on grant award FA8750-19-2-1004 from DARPA.
Views expressed herein do not reflect the policy or position of the Department of Defense or the U.S. Government. All errors are the responsibility of the authors." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.06481v1", |
| "title": "GeoDirDock: Guiding Docking Along Geodesic Paths", |
| "abstract": "This work introduces GeoDirDock (GDD), a novel approach to molecular docking\nthat enhances the accuracy and physical plausibility of ligand docking\npredictions. GDD guides the denoising process of a diffusion model along\ngeodesic paths within multiple spaces representing translational, rotational,\nand torsional degrees of freedom. Our method leverages expert knowledge to\ndirect the generative modeling process, specifically targeting desired\nprotein-ligand interaction regions. We demonstrate that GDD significantly\noutperforms existing blind docking methods in terms of RMSD accuracy and\nphysicochemical pose realism. Our results indicate that incorporating domain\nexpertise into the diffusion process leads to more biologically relevant\ndocking predictions. Additionally, we explore the potential of GDD for lead\noptimization in drug discovery through angle transfer in maximal common\nsubstructure (MCS) docking, showcasing its capability to predict ligand\norientations for chemically similar compounds accurately.", |
| "authors": "Ra\u00fal Mi\u00f1\u00e1n, Javier Gallardo, \u00c1lvaro Ciudad, Alexis Molina", |
| "published": "2024-04-09", |
| "updated": "2024-04-09", |
| "primary_cat": "q-bio.BM", |
| "cats": [ |
| "q-bio.BM", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Diffusion AND Model", |
| "gt": "GeoDirDock: Guiding Docking Along Geodesic Paths", |
| "main_content": "Introduction In drug discovery, molecular docking is pivotal for probing interactions between molecules and specific protein cavities, also known as binding pockets (Meng et al., 2011). These regions on protein surfaces are targets for docking algorithms, aiming to establish the most stable configurations of ligands, or ligand poses, for assessing potential drug-receptor interactions. Docking traditionally relies on empirical data from in vitro experiments, such as crystal structures and molecular dynamics simulations, to locate these cavities for molecular binding assessment. Renowned methods like Glide, Vina, and rDock leverage physics-based scoring and a rich database of molecular interactions to navigate this process (Friesner et al., 2004; Trott & Olson, 2010; Ruiz-Carmona et al., 2014). While traditional methods have long been a cornerstone in computational chemistry, the field experienced a paradigm shift with DiffDock (Corso et al., 2023). This seminal contribution postulated molecular docking as a generative task, marking a significant shift through the development of a diffusion method that operates, without a prior over the binding site location, across the various degrees of freedom inherent in molecular docking. However, limitations appear during prospective applications, due to its blind docking strategy. Directed docking, which employs expert knowledge or pocket prediction to identify binding regions, has demonstrated superior performance compared to blind docking (Ghersi & Sanchez, 2009). Additionally, the use of ligand root mean squared distance (RMSD) as the sole performance metric has been contested for not fully capturing ligand position accuracy, raising questions about the effectiveness of recent deep learning innovations in this field. (Buttenschoen et al., 2023; Yu et al., 2023). In this work, we introduce GeoDirDock, a diffusion method that integrates expert knowledge into the diffusion process over translations, rotations, and torsions, to direct the diffusion \u2217Work done during an internship at Nostrum Biodiscovery. \u2020Author has an affiliation with Barcelona Supercomputing Center. 1 arXiv:2404.06481v1 [q-bio.BM] 9 Apr 2024 \fPublished at the GEM workshop, ICLR 2024 process towards a desired region of the protein structure. Our results show that this directed docking approach significantly improves upon blind docking methods, achieving close-toground truth RMSD conformations in self-docking scenarios and demonstrating enhanced performance in both RMSD accuracy and physicochemical pose plausibility. Moreover, we validate the robustness of our approach through a maximal common substructure docking test, highlighting its effective generalization across diverse and previously unseen ligand chemistries. 2 Related work Traditional Prior-Informed Docking Methods. Traditional docking evaluates how likely molecules are to bind to specific areas of proteins, mainly focusing on binding pockets. These methods capitalize on data from crystal structures, literature, and molecular dynamics simulations to set pose search parameters and find the most suitable docking positions, using physics-based scoring methods to measure molecular interactions (Meng et al., 2011; Koes et al., 2013; McNutt et al., 2021). However, the effectiveness of these methods largely relies on having access to high-quality data, which can be a limitation in cases where such data is scarce. Diffusion-Based Molecular Modeling. 
Recent advancements in diffusion models have relied upon the concept of blind docking, which does not rely on pre-existing receptor-ligand binding information. DiffDock, for example, replaces traditional methods with a denoising process that manipulates translations, rotations, and torsion angles in a T3 × SO(3) × SO(2)m diffusion space, evaluated using a confidence-scoring network. Nakata et al. (2023) approached the diffusion problem in R3, incorporating equivariant constraints across the entire space and employing an equivariant graph neural network for scoring. Qiao et al. (2022) introduced contact prediction modules to direct the molecular diffusion process through the R3 manifold, assessing sample plausibility with invariant point attention.

Informed Diffusion for Directed Docking. Most existing diffusion-based methods, including the ones discussed above, do not explicitly consider the precise location of the binding cavity. Despite the technical advancements in blind diffusion docking, these methods have not yet achieved the same level of effectiveness as traditional approaches. This gap has opened opportunities for developing methods that incorporate elements of traditional docking. To the best of our knowledge, the only method integrating the T3 × SO(3) × SO(2)m diffusion process with targeted guidance to specific protein surface points is DiffDock-Pocket (Plainer et al., 2023), referred to as DD-Pocket for the remainder of the manuscript. This method not only adopts a more traditional approach to docking but also extends it by allowing flexibility in side chains.

3 Methods

Our approach builds on the concept prevalent in traditional docking strategies, where specific geometric shapes like spheres or boxes demarcate potential binding areas, excluding the remainder of the protein structure. We refine this concept in DiffDock by directing incremental updates during the denoising procedure toward a targeted binding region and conformation. We utilize a guided diffusion strategy, drawing inspiration from Dhariwal & Nichol (2021), by introducing a guiding vector $V_{\mathrm{guide}}$ in place of a conventional trained classifier. This vector integrates domain expertise into the diffusion process by altering the update mechanism. The intensity of these alterations is controlled by the hyperparameter $\gamma$, allowing for dynamic adjustments based on the proximity and direct route to the target regions. This method ensures that the direction and magnitude of $V_{\mathrm{guide}}$ are precisely aligned with the distance and shortest path to the designated areas, optimizing the diffusion trajectory towards the desired binding site:

$V_{\mathrm{update}} = (1 - \gamma)\,V_{\mathrm{DiffDock}} + \gamma\,V_{\mathrm{guide}}$ (1)

where

$V_{\mathrm{guide}} = \alpha \cdot v_{\mathrm{dir}}$ (2)

with $v_{\mathrm{dir}}$ the vector tangent to the shortest path and $\alpha$ the distance towards the selected center or boundary region. For a detailed analysis of the range of possible $\gamma$ values, we refer the reader to Appendix D. Although implementing this method in R3 would be straightforward, DiffDock operates in a more complex space, P = T3 × SO(3) × SO(2)m, to accurately reflect the degrees of freedom involved in molecular docking. Consequently, we calculate the shortest paths and distances within each component of this product space, effectively tailoring the diffusion process to this task.
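As a concrete illustration of Eqs. (1)-(2), the following is a minimal sketch, under simplifying assumptions, of how the guided update could be assembled for the translational and torsional components. The helper names (guide_translation, guide_torsions, gamma_schedule), the default guidance-sphere radius and fuzzing tolerance (taken from Sec. 3.3), and the exact sigmoid form of the γ schedule are illustrative choices, not the DiffDock/GeoDirDock API.

```python
# Hedged sketch of the guided diffusion update in Eqs. (1)-(2), assuming plain
# array representations: translations in R^3 and torsions as angles on SO(2)^m.
import numpy as np

def wrap_angle(a):
    """Map angles to [-pi, pi), i.e. the geodesic (shortest) direction on a circle."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def guide_translation(x, target_center, target_radius=7.0):
    """Guidance toward the boundary of a sphere around the desired binding region.

    Returns alpha * v_dir (Eq. 2); guidance is switched off once inside the sphere.
    """
    delta = target_center - x
    dist = np.linalg.norm(delta)
    if dist <= target_radius:
        return np.zeros(3)
    v_dir = delta / dist                  # tangent to the straight-line geodesic in T3
    alpha = dist - target_radius          # distance to the region boundary
    return alpha * v_dir

def guide_torsions(theta, torsion_targets, eta=0.15):
    """Shortest wrapped angular path toward fuzzed target regions on SO(2)^m."""
    delta = wrap_angle(torsion_targets - theta)
    inside = np.abs(delta) <= eta         # already within the allowed region: no guidance
    return np.where(inside, 0.0, delta)

def gamma_schedule(t, T, gamma0=0.8, k=10.0):
    """Decreasing sigmoid schedule: strong guidance early, model autonomy later."""
    return gamma0 / (1.0 + np.exp(k * (t / T - 0.5)))

def guided_update(v_model, v_guide, gamma):
    """V_update = (1 - gamma) * V_DiffDock + gamma * V_guide  (Eq. 1)."""
    return (1.0 - gamma) * v_model + gamma * v_guide
```

The same blending rule would apply to the rotational component, with the geodesic direction computed on SO(3) (e.g., via the relative rotation's axis-angle representation) rather than by straight-line or wrapped-angle differences.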
3.1 Boundaries and Geodesics Shortest paths in T3 can easily be determined by leveraging straight lines and the geometry of a hypothetical sphere to integrate binding pocket information effectively. This approach helps mitigate the risk of selecting incorrect docking pockets, a prevalent issue in blind docking strategies that often leads to out-of-distribution errors (Ghersi & Sanchez, 2009). The incorporation of this spherical model is instrumental in refining the selection process, enhancing the algorithm\u2019s ability to direct the denoising process to given docking sites. However, the challenge of ensuring the physical realism of docking poses\u2014highlighted by concerns over steric clashes and distorted bond angles (Buttenschoen et al., 2023)\u2014necessitates further refinement of our model. To address these issues, our method extends to the SO(3) and SO(2)m spaces, where we employ geodesics to define the shortest paths. This approach is critical for accurately guiding our diffusion updates with Vguide vectors, ensuring that the docking poses are not only theoretically viable but also physically plausible. For an in-depth explanation of how Vguide vectors are defined and integrated into the diffusion process, please refer to Appendix A. 3.2 Soft Constraints To balance the precision and flexibility of DiffDock, we implement measures to modulate the influence of guidance vectors, recognizing their potential to alter the model\u2019s learned dynamics and impact performance within designated spatial regions. We introduce a dynamic adjustment of the guidance vector\u2019s influence through a decreasing sigmoid schedule for the \u03b3 parameter, progressively diminishing its impact through the diffusion process. This ensures the initial guidance by expert knowledge gradually cedes to the model\u2019s intrinsic optimization capabilities. Moreover, we enable the consideration of multiple potential regions within the SO(3) and SO(2)m spaces, guiding updates towards the nearest boundary. Once within a targeted region, DiffDock is allowed autonomy in refining rotation and torsion angles, unencumbered by external guidance. This dual-phase optimization strategy\u2014global optimization guided by expert knowledge in early steps, followed by local, autonomous refinement by DiffDock\u2014ensures a balance between adherence to biologically plausible paths and the discovery of optimal docking configurations. 3.3 Evaluation To facilitate high-throughput evaluation without direct expert input for each sample, we employ a fuzzing strategy, adjusting true labels to simulate expert knowledge. This involves creating guidance spheres for translation with a 7\u02da A radius around the ligand\u2019s actual center, directing vectors toward these spheres\u2019 peripheries, and ceasing guidance upon entrance. For rotation and torsion, we define regions around true angles, adjusted by a fuzzing factor \u03b7 of 0.15, guiding vectors towards these adjusted regions and turning off the guidance when they enter. This fuzzing approach, robust across radius and \u03b7 settings, allows for an effective approximation of expert-directed docking, as detailed in Appendix C. Moreover, we perform an ablation test of different initial gamma values in Appendix D and test the generalization capabilities of our algorithm in Appendix E. 3 \fPublished at the GEM workshop, ICLR 2024 4 Experiments To assess our approach, we employ the testing set from PDBBind proposed in St\u00a8 ark et al. 
(2022), applying a three-fold evaluation strategy. Firstly, we analyze docking pose RMSD, focusing on the Top-1 and Top-5 poses by confidence for both Apo and Holo structures, and introduce Mean Square Error (MSE) as an additional metric to evaluate errors in rotation states and torsion angles, detailed in Appendix B. Secondly, we examine the physical plausibility of these docking poses using the PoseBusters suite (Buttenschoen et al., 2023), providing insights into the realism of the predicted conformations. Lastly, we conduct an angle transfer test employing maximal common substructure docking to assess our method's ability to generalize across different molecular configurations.

4.1 Evaluation of docking poses

We evaluated GeoDirDock (GDD) against DiffDock across various setups by comparing the RMSD of generated poses to crystal structures and examining the convergence based on the number of denoising steps taken. Initially, we tested GDD with translation guidance only (GDD-TR), mimicking conventional docking's focus on specific geometric volumes. Results showed GDD-TR surpassing DiffDock across all metrics, achieving lower RMSD values in fewer steps and maintaining performance across both Apo and Holo receptor configurations, suggesting that targeted guidance enhances docking accuracy and efficiency (Table 1).

| Method (steps-samples) | Holo Top-1 %<2 | Holo Top-1 Med | Holo Top-5 %<2 | Holo Top-5 Med | Apo Top-1 %<2 | Apo Top-1 Med | Apo Top-5 %<2 | Apo Top-5 Med |
|---|---|---|---|---|---|---|---|---|
| SMINA* (rigid) | 32.5 | 4.5 | 46.4 | 2.2 | 6.6 | 7.7 | 15.7 | 5.6 |
| GNINA* (rigid) | 42.7 | 2.5 | 55.3 | 1.8 | 9.7 | 7.5 | 19.1 | 5.2 |
| DiffDock (10-10) | 34.19 | 3.53 | 40.17 | 2.44 | 27.14 | 4.62 | 36.28 | 3.09 |
| DiffDock (20-10) | 37.08 | 3.50 | 44.66 | 2.60 | 27.51 | 4.56 | 38.40 | 2.85 |
| DiffDock (20-40) | 38.27 | 3.12 | 46.65 | 2.15 | 27.01 | 4.85 | 37.93 | 3.22 |
| DD-Pocket* (20-10) | 47.7 | 2.1 | 56.3 | 1.8 | 41.0 | 2.6 | 47.6 | 2.2 |
| DD-Pocket* (20-40) | 49.8 | 2.0 | 59.3 | 1.7 | 41.7 | 2.6 | 47.8 | 2.1 |
| GDD-TR (10-10) | 41.62 | 2.69 | 48.88 | 2.03 | 31.71 | 3.62 | 42.00 | 2.50 |
| GDD-TR (20-10) | 44.97 | 2.37 | 50.56 | 1.94 | 34.29 | 3.21 | 44.00 | 2.27 |
| GDD-TR (20-40) | 48.88 | 2.05 | 57.82 | 1.70 | 34.38 | 3.33 | 47.28 | 2.11 |
| GDD-Full (10-10) | 63.97 | 1.52 | 67.60 | 1.31 | 51.29 | 1.95 | 58.74 | 1.60 |
| GDD-Full (20-10) | 68.44 | 1.24 | 70.11 | 1.16 | 59.43 | 1.60 | 65.14 | 1.43 |
| GDD-Full (20-40) | 68.72 | 1.22 | 71.51 | 1.14 | 58.86 | 1.59 | 63.71 | 1.38 |

Table 1: RMSD performance comparison for Holo and Apo settings across docking methods, showing Top-1 and Top-5 prediction accuracies ("%<2" is the percentage of poses within 2 Å of the crystal structure; "Med" is the median RMSD in Å). The number of denoising steps and the number of samples generated are stated as (steps-samples). Results marked with an asterisk (*) were obtained from Plainer et al. (2023).

Upon extending guidance to include torsions and rotations, imposing a more substantial prior than translation alone, we anticipated improvements in the reconstruction of crystallographic structures. GDD-Full, informed across all three spaces, markedly outperformed both GDD-TR and DiffDock, recovering approximately 68% of poses within 2 Å of the reference, significantly higher than with translation-only guidance. This comprehensive approach not only improved pose accuracy but also maintained the trend of achieving optimal results in fewer steps, highlighting the efficacy of our method in enhancing docking precision and computational efficiency. Detailed MSE analysis for torsion and rotation is available in Appendix B.
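For clarity on how the headline numbers in Table 1 are summarized, the following is a small sketch, under stated assumptions, of the Top-k success rate and median RMSD; symmetry-corrected RMSD computation and confidence-based ranking of poses are assumed to be handled upstream, and the function name is illustrative.

```python
# Hedged sketch of the Top-k RMSD summary metrics ("%<2" and "Med" columns).
import numpy as np

def topk_rmsd_metrics(rmsds_per_complex, k=1, threshold=2.0):
    """rmsds_per_complex: one array per protein-ligand complex, holding the RMSDs
    of its confidence-ranked poses (best-ranked first)."""
    best = np.array([np.min(np.asarray(r)[:k]) for r in rmsds_per_complex])
    success_rate = 100.0 * np.mean(best < threshold)   # "%<2" column
    median_rmsd = float(np.median(best))                # "Med" column
    return success_rate, median_rmsd
```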
| Method (steps-samples) | Holo Docking Structure | Holo Re-docking | Apo Docking Structure | Apo Re-docking |
|---|---|---|---|---|
| DiffDock (10-10) | 23.13 | 15.31 | 7.06 | 4.12 |
| DiffDock (20-10) | 25.84 | 16.11 | 7.43 | 4.0 |
| DiffDock (20-40) | 26.67 | 16.0 | 14.45 | 4.05 |
| DD-Pocket (20-40)* | 29.4 | 17.4 | 21.6 | 10.9 |
| GDD-TR (10-10) | 23.00 | 15.33 | 3.43 | 1.71 |
| GDD-TR (20-10) | 20.00 | 15.33 | 10.29 | 6.29 |
| GDD-TR (20-40) | 22.00 | 14.33 | 9.14 | 5.14 |
| GDD-Full (10-10) | 29.33 | 24.33 | 8.62 | 6.90 |
| GDD-Full (20-10) | 28.00 | 24.00 | 12.0 | 10.86 |
| GDD-Full (20-40) | 30.67 | 26.67 | 11.43 | 9.71 |

Table 2: Assessment of PoseBusters scores for Holo and Apo scenarios, comparing docking-structure and re-docking accuracies across DiffDock and GDD methods. The number of denoising steps and the number of samples generated are stated as (steps-samples). Results marked with an asterisk (*) were obtained from Plainer et al. (2023).

4.2 Physical plausibility of informed diffusion poses

The PoseBusters evaluation (Table 2) reveals that GDD-Full outperforms GDD-TR and DiffDock on Holo structures, demonstrating its superior accuracy in docking and re-docking tasks. This success highlights GDD-Full's ability to align with the physical aspects of molecular docking, attributed to its comprehensive guidance across multiple spaces. In contrast, GDD-TR suffers a performance degradation in both Holo and Apo structures. This aligns with our hypothesis that full guidance is needed to reproduce physically plausible poses, as the main objective of translational guidance is the correct selection of the binding pocket, not an increase in the physical plausibility of the poses.

4.3 Angle transfer as maximal common substructure docking

In the lead optimization phase of drug discovery, template-based modeling is crucial for assessing binding affinity changes among chemically similar compounds (Raman, 2019). This approach uses a single compound as a reference template to maintain a consistent shared topology while optimizing the distinct elements of each molecule. To evaluate our method's capabilities beyond self-docking and informed directed docking, we decided to benchmark GDD as a potential tool for template-based modeling. For this, we selected crystallized BACE structures from the D3R Grand Challenge 4 (Parks et al., 2020), employing previously identified templates for angle transfer. We conducted an MCS search to transfer torsion angles associated with heavy atoms common between the template and target molecules, leaving other angles uninformed. This focused evaluation, termed GDD-Tor, exclusively examines the impact of torsion angle transfer, deliberately omitting guidance in other spatial dimensions to isolate the effects of this specific mechanism.

| Method (steps-samples) | Top-1 RMSD %<2 | Top-1 RMSD Med | Top-5 RMSD %<2 | Top-5 RMSD Med | Top-1 MSE Avg | Top-1 MSE Med | Top-5 MSE Avg | Top-5 MSE Med |
|---|---|---|---|---|---|---|---|---|
| DiffDock (20-40) | 50.0 | 1.93 | 55.0 | 1.71 | 2.33 | 2.30 | 1.37 | 1.45 |
| GDD-Tor (20-40) | 63.2 | 1.76 | 63.2 | 1.69 | 1.79 | 1.35 | 1.31 | 1.21 |

Table 3: Comparison of MCS docking performance, showing RMSD and MSE metrics for Top-1 and Top-5 predictions between DiffDock and GDD-Tor. The number of denoising steps and the number of samples generated are stated as (steps-samples).

As the results in Table 3 show, this angle transfer already marks an improvement in both RMSD and torsion MSE. These preliminary results outline great potential for our method, and we plan to develop this application and the corresponding benchmarks further.
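As a rough illustration of how torsion angles might be transferred over an MCS match, the following is a hedged sketch using RDKit; it is not the GDD-Tor pipeline, and the helper name, the choice of torsion quadruples, and the integration of the transferred angles into the diffusion guidance are all left as assumptions.

```python
# Hedged sketch: copy dihedral angles from a template to a target ligand over
# a maximal common substructure (MCS) match. Illustrative only; how these
# angles feed the SO(2)^m guidance is not shown here.
from rdkit import Chem
from rdkit.Chem import rdFMCS, rdMolTransforms

def transfer_mcs_torsions(template, target, torsion_quads):
    """torsion_quads: 4-tuples of MCS atom positions (indices into the
    substructure match) along bonded paths, defining the torsions to transfer."""
    mcs = rdFMCS.FindMCS([template, target])
    patt = Chem.MolFromSmarts(mcs.smartsString)
    t_match = template.GetSubstructMatch(patt)   # MCS position -> template atom index
    g_match = target.GetSubstructMatch(patt)     # MCS position -> target atom index

    t_conf, g_conf = template.GetConformer(), target.GetConformer()
    for quad in torsion_quads:
        t_atoms = [t_match[i] for i in quad]
        g_atoms = [g_match[i] for i in quad]
        angle = rdMolTransforms.GetDihedralRad(t_conf, *t_atoms)
        rdMolTransforms.SetDihedralRad(g_conf, *g_atoms, angle)
    return target
```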
5 Conclusions and Future Work

In this study, we introduce GeoDirDock, a novel framework that enhances generative docking models through geodesic guidance. GeoDirDock significantly improves docking precision and boosts the physical plausibility of results compared to blind docking methods like DiffDock. By incorporating guidance across the translation, torsion, and rotation dimensions, GeoDirDock outperforms both DiffDock and its translation-only variant, achieving precise docking poses in fewer computational steps. These results position guided docking as a promising avenue for AI-enabled molecular docking, combining efficiency with accuracy. Future work will focus on improving generalizability to completely unseen ligand and protein chemistries, positioning GeoDirDock as a valuable tool for prospective docking campaigns. We are also keen on extending the algorithm to include protein flexibility in the docking procedure, aiming for a more realistic docking protocol. The inclusion of more realistic priors on both backbone and side-chain angles would resolve possible steric clashes between ligands and proteins, performing implicit induced-fit docking." |
| } |
| ] |
| } |